Commit 315d167 (parent: 7dde610)

Adding Evaluation Results (#1)


- Adding Evaluation Results (a8fd8198000c8d726faef8e9cfef8a40197ba7b5)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1): README.md (+27, −19)
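The change below is a standard unified diff. For reference, the same format can be generated with Python's stdlib `difflib`; this is a minimal sketch using shortened stand-in snippets of the front matter, not the full README:

```python
import difflib

# Shortened stand-in snippets of the README front matter (illustrative only).
old = ["---\n", "base_model:\n", "license: apache-2.0\n"]
new = ["---\n", "language:\n", "- en\n", "license: apache-2.0\n"]

# unified_diff yields "---"/"+++" file headers, "@@" hunk headers,
# and lines prefixed with " ", "-", or "+".
diff = "".join(difflib.unified_diff(old, new, fromfile="README.md", tofile="README.md"))
print(diff)
```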
README.md CHANGED

@@ -1,12 +1,15 @@
 ---
-base_model:
-- uukuguy/speechless-code-mistral-7b-v1.0
-- upaya07/Arithmo2-Mistral-7B
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - mergekit
 - merge
-license: apache-2.0
+base_model:
+- uukuguy/speechless-code-mistral-7b-v1.0
+- upaya07/Arithmo2-Mistral-7B
+pipeline_tag: text-generation
 model-index:
 - name: sethuiyer/CodeCalc-Mistral-7B
   results:
@@ -25,8 +28,7 @@ model-index:
       value: 61.95
       name: normalized accuracy
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -42,8 +44,7 @@ model-index:
       value: 83.64
       name: normalized accuracy
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -60,8 +61,7 @@ model-index:
       value: 62.78
       name: accuracy
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -77,8 +77,7 @@ model-index:
     - type: mc2
       value: 47.49
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -95,8 +94,7 @@ model-index:
       value: 78.3
       name: accuracy
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -113,12 +111,8 @@ model-index:
       value: 63.53
       name: accuracy
     source:
-      url: >-
-        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
       name: Open LLM Leaderboard
-language:
-- en
-pipeline_tag: text-generation
 ---
 # CodeCalc-Mistral-7B
 
@@ -180,3 +174,17 @@ repetition_penalty: 1.17
 top_k: 49
 ```
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__CodeCalc-Mistral-7B)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |66.33|
+|AI2 Reasoning Challenge (25-Shot)|61.95|
+|HellaSwag (10-Shot)              |83.64|
+|MMLU (5-Shot)                    |62.78|
+|TruthfulQA (0-shot)              |47.79|
+|Winogrande (5-shot)              |78.30|
+|GSM8k (5-shot)                   |63.53|
+
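As a quick consistency check, the reported average matches the mean of the six benchmark scores in the added table. (Note that the YAML front matter records the TruthfulQA mc2 value as 47.49 while the table shows 47.79; the average reflects the table value.) A small sketch, with values copied from the table:

```python
# Benchmark scores from the Open LLM Leaderboard table above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 61.95,
    "HellaSwag (10-Shot)": 83.64,
    "MMLU (5-Shot)": 62.78,
    "TruthfulQA (0-shot)": 47.79,
    "Winogrande (5-shot)": 78.30,
    "GSM8k (5-shot)": 63.53,
}

# Mean of the six benchmarks, rounded to two decimals as in the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 66.33, matching the "Avg." row
```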