Adding Evaluation Results

#20
Files changed (1)
  1. README.md +115 -7
README.md CHANGED
@@ -1,14 +1,109 @@
  ---
- license: other
- license_name: qwen
- license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
  language:
  - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-72B
+ license: other
+ library_name: transformers
  tags:
  - chat
- library_name: transformers
+ base_model: Qwen/Qwen2.5-72B
+ license_name: qwen
+ license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ model-index:
+ - name: Qwen2.5-72B-Instruct
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 86.38
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 61.87
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 1.21
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 16.67
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 11.74
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 51.4
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Qwen/Qwen2.5-72B-Instruct
+       name: Open LLM Leaderboard
  ---

  # Qwen2.5-72B-Instruct
@@ -130,4 +225,17 @@ If you find our work helpful, feel free to give us a cite.
  journal={arXiv preprint arXiv:2407.10671},
  year={2024}
  }
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen2.5-72B-Instruct)
+
+ | Metric            |Value|
+ |-------------------|----:|
+ |Avg.               |38.21|
+ |IFEval (0-Shot)    |86.38|
+ |BBH (3-Shot)       |61.87|
+ |MATH Lvl 5 (4-Shot)| 1.21|
+ |GPQA (0-shot)      |16.67|
+ |MuSR (0-shot)      |11.74|
+ |MMLU-PRO (5-shot)  |51.40|
+
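The `Avg.` row in the added table is consistent with the unweighted arithmetic mean of the six benchmark scores listed below it. A minimal check using only the values from the table (the snippet is illustrative and not part of the model card):

```python
# Quick check: the reported Avg. matches the unweighted mean of the six scores.
scores = {
    "IFEval (0-Shot)": 86.38,
    "BBH (3-Shot)": 61.87,
    "MATH Lvl 5 (4-Shot)": 1.21,
    "GPQA (0-shot)": 16.67,
    "MuSR (0-shot)": 11.74,
    "MMLU-PRO (5-shot)": 51.40,
}
print(round(sum(scores.values()) / len(scores), 2))  # 38.21, as in the Avg. row
```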
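The `model-index` block added to the front matter can also be consumed programmatically rather than from the rendered card. A minimal sketch, assuming `huggingface_hub` and `pyyaml` are installed (the parsing helper is illustrative; key names follow the YAML shown in the diff):

```python
# Illustrative sketch: download the model card and print each benchmark score
# from the `model-index` metadata shown in the diff above.
# Assumes `huggingface_hub` and `pyyaml` are installed.
import yaml
from huggingface_hub import hf_hub_download

card_path = hf_hub_download("Qwen/Qwen2.5-72B-Instruct", "README.md")
with open(card_path, encoding="utf-8") as f:
    text = f.read()

# The metadata is the YAML block between the leading pair of `---` markers.
front_matter = text.split("---")[1]
meta = yaml.safe_load(front_matter)

for entry in meta["model-index"][0]["results"]:
    dataset = entry["dataset"]["name"]
    metric = entry["metrics"][0]
    print(f"{dataset}: {metric['name']} = {metric['value']}")
```

Each `results` entry carries the task, dataset, metric, and a `source.url` that points back to the Open LLM Leaderboard query for this model.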