leaderboard-pr-bot committed
Commit: a27af31
Parent(s): c06e1a6

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED

```diff
@@ -17,4 +17,17 @@ A Mistral 7B finetuned model using SlimOrca, Auroboros 3.1 and RiddleSense.
 
 ### Training
 
-Trained for 4 epochs, but released @ epoch 3.
+Trained for 4 epochs, but released @ epoch 3.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__mistral-7b-slimorcaboros)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 54.1  |
+| ARC (25-shot)        | 63.65 |
+| HellaSwag (10-shot)  | 83.7  |
+| MMLU (5-shot)        | 63.46 |
+| TruthfulQA (0-shot)  | 55.81 |
+| Winogrande (5-shot)  | 77.03 |
+| GSM8K (5-shot)       | 23.43 |
+| DROP (3-shot)        | 11.62 |
```