Adding Evaluation Results
#11
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -1,5 +1,7 @@
 ---
-base_model: 01-ai/Yi-34B
+language:
+- en
+license: apache-2.0
 tags:
 - yi
 - instruct
@@ -8,14 +10,12 @@ tags:
 - gpt4
 - synthetic data
 - distillation
+datasets:
+- teknium/OpenHermes-2.5
+base_model: 01-ai/Yi-34B
 model-index:
 - name: Nous-Hermes-2-Yi-34B
   results: []
-license: apache-2.0
-language:
-- en
-datasets:
-- teknium/OpenHermes-2.5
 ---
 
 # Nous Hermes 2 - Yi-34B
@@ -212,3 +212,17 @@ In LM-Studio, simply select the ChatML Prefix on the settings side pane:
 GGUF: https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B-GGUF
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Hermes-2-Yi-34B)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |73.74|
+|AI2 Reasoning Challenge (25-Shot)|66.89|
+|HellaSwag (10-Shot)              |85.49|
+|MMLU (5-Shot)                    |76.70|
+|TruthfulQA (0-shot)              |60.37|
+|Winogrande (5-shot)              |82.95|
+|GSM8k (5-shot)                   |70.05|
+
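For reference, the Avg. row on the Open LLM Leaderboard is the arithmetic mean of the six benchmark scores, which checks out against the table above:

(66.89 + 85.49 + 76.70 + 60.37 + 82.95 + 70.05) / 6 = 442.45 / 6 ≈ 73.74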
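The key reordering in the YAML front matter is a side effect of how card metadata is usually updated: `huggingface_hub` re-serializes the whole front matter in its canonical key order. Below is a minimal sketch of opening a comparable metadata PR with `metadata_update`; this is an illustration of the mechanism, not leaderboard-pr-bot's actual code. The repo id and `model-index` values are taken from this PR's diff.

```python
# Sketch: open a metadata pull request against a model repo with
# huggingface_hub. Requires a token with write access (HF_TOKEN env var
# or `huggingface-cli login`).
from huggingface_hub import metadata_update

# Keys mirror the front matter in this PR's diff.
metadata = {
    "model-index": [
        {
            "name": "Nous-Hermes-2-Yi-34B",
            "results": [],  # per-task results would normally be listed here
        }
    ]
}

# overwrite=True allows existing keys to be rewritten; create_pr=True opens
# a pull request (like this one) instead of committing to main. The helper
# re-serializes the entire front matter, which is what reorders the keys.
pr_url = metadata_update(
    repo_id="NousResearch/Nous-Hermes-2-Yi-34B",
    metadata=metadata,
    repo_type="model",
    overwrite=True,
    create_pr=True,
)
print(pr_url)
```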