leaderboard-pr-bot committed
Commit c3bc3d0
1 Parent(s): a78a99b

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
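Once this PR is merged, the `model-index` block added below becomes machine-readable card metadata. As a minimal illustrative sketch (not part of the PR itself, and assuming a recent `huggingface_hub` release), the results can be read back programmatically:

```python
# Minimal sketch: read the evaluation results this PR adds back out of the
# model card's model-index metadata. Assumes `pip install huggingface_hub`.
from huggingface_hub import ModelCard

card = ModelCard.load("openchat/openchat-3.5-1210")

# ModelCardData.eval_results is parsed from the model-index YAML block.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```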

Files changed (1): README.md (+119 -2)
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model: mistralai/Mistral-7B-v0.1
+library_name: transformers
 tags:
 - openchat
 - mistral
@@ -15,8 +15,111 @@ datasets:
 - meta-math/MetaMathQA
 - OpenAssistant/oasst_top1_2023-08-25
 - TIGER-Lab/MathInstruct
-library_name: transformers
+base_model: mistralai/Mistral-7B-v0.1
 pipeline_tag: text-generation
+model-index:
+- name: openchat-3.5-1210
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 64.93
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 84.92
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 64.62
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 52.15
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 80.74
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 65.96
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=openchat/openchat-3.5-1210
+      name: Open LLM Leaderboard
 ---
 <div align="center">
 <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
@@ -315,3 +418,17 @@ OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-
 
 * Wang Guan [[email protected]], Cheng Sijie [[email protected]], Alpay Ariyak [[email protected]]
 * We look forward to hearing from you and collaborating on this exciting project!
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat-3.5-1210)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |68.89|
+|AI2 Reasoning Challenge (25-Shot)|64.93|
+|HellaSwag (10-Shot)              |84.92|
+|MMLU (5-Shot)                    |64.62|
+|TruthfulQA (0-shot)              |52.15|
+|Winogrande (5-shot)              |80.74|
+|GSM8k (5-shot)                   |65.96|
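The Avg. row in the added table is the arithmetic mean of the six benchmark scores. A quick illustrative check (not part of the PR):

```python
# Verify that Avg. is the arithmetic mean of the six benchmark scores.
scores = [64.93, 84.92, 64.62, 52.15, 80.74, 65.96]
print(f"{sum(scores) / len(scores):.2f}")  # 68.89, matching the Avg. row
```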