Commit
4818ea2
1 Parent(s): 39c6757

Adding Evaluation Results (#1)

- Adding Evaluation Results (3b4851b576a49a8008def1cca3392b7bee07267a)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +122 -6
README.md CHANGED
@@ -1,13 +1,116 @@
  ---
- base_model:
- - Dampfinchen/Llama-3-8B-Ultra-Instruct
- - NousResearch/Meta-Llama-3-8B
- - NousResearch/Meta-Llama-3-8B-Instruct
  library_name: transformers
  tags:
  - mergekit
  - merge
- license: llama3
  ---
  # merge
 
@@ -44,4 +147,17 @@ base_model: NousResearch/Meta-Llama-3-8B
  dtype: bfloat16
  ```

- Test of the salt sprinkle method. The goal is to retain all of L3 Instruct's capabilities while adding better RP, RAG, German, and story-writing capabilities from Ultra Instruct. The model may generate harmful responses; I'm not responsible for what you do with this model.
  ---
+ license: llama3
  library_name: transformers
  tags:
  - mergekit
  - merge
+ base_model:
+ - Dampfinchen/Llama-3-8B-Ultra-Instruct
+ - NousResearch/Meta-Llama-3-8B
+ - NousResearch/Meta-Llama-3-8B-Instruct
+ model-index:
+ - name: Llama-3-8B-Ultra-Instruct-SaltSprinkle
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 61.35
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 77.76
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 67.88
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 52.82
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 74.98
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 70.89
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
+       name: Open LLM Leaderboard
  ---
  # merge
115
  # merge
116
 
 
  dtype: bfloat16
  ```

+ Test of the salt sprinkle method. The goal is to retain all of L3 Instruct's capabilities while adding better RP, RAG, German, and story-writing capabilities from Ultra Instruct. The model may generate harmful responses; I'm not responsible for what you do with this model.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Dampfinchen__Llama-3-8B-Ultra-Instruct-SaltSprinkle).
+
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 67.61 |
+ | AI2 Reasoning Challenge (25-Shot) | 61.35 |
+ | HellaSwag (10-Shot)               | 77.76 |
+ | MMLU (5-Shot)                     | 67.88 |
+ | TruthfulQA (0-shot)               | 52.82 |
+ | Winogrande (5-shot)               | 74.98 |
+ | GSM8k (5-shot)                    | 70.89 |
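The "Avg." row in the leaderboard table is simply the arithmetic mean of the six benchmark scores. A quick sanity check of that figure (a standalone sketch, not part of this repository or the leaderboard code):

```python
# Benchmark scores copied from the leaderboard table in this commit.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 61.35,
    "HellaSwag (10-Shot)": 77.76,
    "MMLU (5-Shot)": 67.88,
    "TruthfulQA (0-shot)": 52.82,
    "Winogrande (5-shot)": 74.98,
    "GSM8k (5-shot)": 70.89,
}

# The reported average is the plain mean, rounded to two decimal places.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # → 67.61, matching the Avg. row
```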