Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -229,3 +229,17 @@ NeuralMagic FP8 Quants: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-7
 author={"Teknium", "theemozilla", "Chen Guang", "interstellarninja", "karan4d", "huemin_art"}
 }
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Hermes-3-Llama-3.1-70B)
+
+|Metric             |Value|
+|-------------------|----:|
+|Avg.               |31.79|
+|IFEval (0-Shot)    |30.81|
+|BBH (3-Shot)       |54.24|
+|MATH Lvl 5 (4-Shot)|22.89|
+|GPQA (0-shot)      |17.11|
+|MuSR (0-shot)      |24.35|
+|MMLU-PRO (5-shot)  |41.31|
+
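
As a quick sanity check on the table above, the Avg. row matches the unweighted arithmetic mean of the six benchmark scores. A minimal Python sketch of that check, with the values copied from the diff:

```python
# Benchmark scores copied from the table added in this PR.
scores = {
    "IFEval (0-Shot)": 30.81,
    "BBH (3-Shot)": 54.24,
    "MATH Lvl 5 (4-Shot)": 22.89,
    "GPQA (0-shot)": 17.11,
    "MuSR (0-shot)": 24.35,
    "MMLU-PRO (5-shot)": 41.31,
}

# Unweighted mean over the six benchmarks: 190.71 / 6 = 31.785.
avg = sum(scores.values()) / len(scores)
print(avg)

# Agrees with the Avg. row (31.79) up to two-decimal rounding.
assert abs(avg - 31.79) < 0.01
```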
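The "Detailed results" link points at a per-task results dataset on the Hub. Below is a minimal sketch of pulling it with the `datasets` library, assuming the repository is public and follows the leaderboard's usual one-config-per-task layout; config and split names are discovered at runtime rather than hard-coded.

```python
from datasets import get_dataset_config_names, load_dataset

REPO = "open-llm-leaderboard/details_NousResearch__Hermes-3-Llama-3.1-70B"

# Assumption: the details repo stores each evaluation task as its own config.
configs = get_dataset_config_names(REPO)
print(f"{len(configs)} task configs, e.g. {configs[:3]}")

# Load one task's records; omitting `split` returns a DatasetDict keyed by
# whatever splits that config actually provides.
details = load_dataset(REPO, configs[0])
print(details)
```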