Commit cf85301
Parent: b78a538

Adding Evaluation Results (#2)


- Adding Evaluation Results (53b0d90691352b6cb78716e67d78bf2b0c02629d)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +112 -4
README.md CHANGED
@@ -1,11 +1,106 @@
  ---
- base_model:
- - mistralai/Mistral-7B-Instruct-v0.2
+ license: apache-2.0
  library_name: transformers
  tags:
  - mergekit
  - merge
- license: apache-2.0
+ base_model:
+ - mistralai/Mistral-7B-Instruct-v0.2
+ model-index:
+ - name: bigstral-12b-32k
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 41.94
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 25.56
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 0.98
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 5.7
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 15.86
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 18.24
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=abacusai/bigstral-12b-32k
+       name: Open LLM Leaderboard
  ---
  # bigstral-12b-32k
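The hunk above is the standard `model-index` block that the Open LLM Leaderboard bot adds to the card's YAML frontmatter. As a rough illustration (not part of this commit), those entries can be read back programmatically once the card is on the Hub; a minimal sketch, assuming `huggingface_hub` is installed and the published repo id `abacusai/bigstral-12b-32k`:

```python
# Minimal sketch (illustration only, not part of this commit): read the
# model-index entries back from the published model card on the Hub.
# Assumes `huggingface_hub` is installed and the repo id below exists.
from huggingface_hub import ModelCard

card = ModelCard.load("abacusai/bigstral-12b-32k")

# eval_results is parsed from the `model-index` section of the card metadata.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_name} = {result.metric_value}")
```

The second hunk appends a human-readable summary of the same results to the end of the README: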
@@ -115,4 +210,17 @@ slices:
    - layer_range: [24, 32]
      model: mistralai/Mistral-7B-Instruct-v0.2

- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abacusai__bigstral-12b-32k)
+
+ | Metric             |Value|
+ |-------------------|----:|
+ |Avg.                |18.05|
+ |IFEval (0-Shot)     |41.94|
+ |BBH (3-Shot)        |25.56|
+ |MATH Lvl 5 (4-Shot) | 0.98|
+ |GPQA (0-shot)       | 5.70|
+ |MuSR (0-shot)       |15.86|
+ |MMLU-PRO (5-shot)   |18.24|
+
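As a quick arithmetic check (illustration only, not part of the commit), the `Avg.` row is simply the mean of the six benchmark scores reported above:

```python
# Sanity check (illustration only): "Avg." is the mean of the six benchmark
# scores added to the README table above.
scores = [41.94, 25.56, 0.98, 5.70, 15.86, 18.24]
print(round(sum(scores) / len(scores), 2))  # 18.05
```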