jon-tow leaderboard-pr-bot committed
Commit fa4a6a9
1 Parent(s): 77fd07d

Adding Evaluation Results (#12)


- Adding Evaluation Results (610e7bd4cf627f2fb69927b81e80a8cf108eba61)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1):
  1. README.md (+121 -4)
README.md CHANGED
@@ -1,15 +1,118 @@
 ---
+language:
+- en
 license: cc-by-sa-4.0
+tags:
+- causal-lm
 datasets:
 - tiiuae/falcon-refinedweb
 - togethercomputer/RedPajama-Data-1T
 - CarperAI/pilev2-dev
 - bigcode/starcoderdata
 - allenai/peS2o
-language:
-- en
-tags:
-- causal-lm
+model-index:
+- name: stablelm-3b-4e1t
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 46.59
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 75.94
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 45.23
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 37.2
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 71.19
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 3.34
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=stabilityai/stablelm-3b-4e1t
+      name: Open LLM Leaderboard
 ---
 # `StableLM-3B-4E1T`
 
@@ -128,3 +231,17 @@ As a base model, this model may exhibit unreliable, unsafe, or other undesirable
   author={Tow, Jonathan and Bellagente, Marco and Mahan, Dakota and Riquelme, Carlos}
 }
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__stablelm-3b-4e1t)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |46.58|
+|AI2 Reasoning Challenge (25-Shot)|46.59|
+|HellaSwag (10-Shot) |75.94|
+|MMLU (5-Shot) |45.23|
+|TruthfulQA (0-shot) |37.20|
+|Winogrande (5-shot) |71.19|
+|GSM8k (5-shot) | 3.34|
+
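For downstream tooling, a minimal sketch of reading the `model-index` block this commit adds, using `huggingface_hub`'s `ModelCard` helper. The repo id is taken from the leaderboard URLs above, and the nested dict layout is assumed to mirror the YAML one-to-one:

```python
# Minimal sketch: load the model card for this repo and walk the
# model-index entries added by this commit. Assumes huggingface_hub
# is installed and the repo id matches the card this commit touches.
from huggingface_hub import ModelCard

card = ModelCard.load("stabilityai/stablelm-3b-4e1t")
model_index = card.data.to_dict().get("model-index", [])

# Print one "dataset: metric = value" line per reported metric.
for entry in model_index:
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]
        for metric in result.get("metrics", []):
            print(f"{dataset}: {metric['type']} = {metric['value']}")
```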
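As a sanity check on the summary table, the `Avg.` row is the unweighted mean of the six benchmark scores; a few lines of plain Python (values copied from the table) confirm it:

```python
# Verify the table's Avg. row as the plain mean of the six scores.
scores = [46.59, 75.94, 45.23, 37.20, 71.19, 3.34]
print(f"Avg. = {sum(scores) / len(scores):.2f}")  # prints: Avg. = 46.58
```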