leaderboard-pr-bot committed on
Commit 61383d6 · 1 Parent(s): 24da9e8

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED

@@ -58,4 +58,17 @@ Also thanks to Meta for LLaMA.
 
 Each model and LoRA was hand picked and considered for what it could contribute to this ensemble.
 Thanks to each and every one of you for your incredible work developing some of the best things
-to come out of this community.
+to come out of this community.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CalderaAI__30B-Lazarus)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 53.33 |
+| ARC (25-shot)        | 64.93 |
+| HellaSwag (10-shot)  | 84.27 |
+| MMLU (5-shot)        | 56.47 |
+| TruthfulQA (0-shot)  | 58.65 |
+| Winogrande (5-shot)  | 78.37 |
+| GSM8K (5-shot)       | 7.73  |
+| DROP (3-shot)        | 22.9  |
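For what it's worth, the "Avg." row in the added table is consistent with the unweighted mean of the seven benchmark scores. A minimal sketch (the metric names and values below are copied from the diff; the averaging method is an assumption, though it matches the reported number):

```python
# Hypothetical reconstruction: scores copied from the README diff above.
scores = {
    "ARC (25-shot)": 64.93,
    "HellaSwag (10-shot)": 84.27,
    "MMLU (5-shot)": 56.47,
    "TruthfulQA (0-shot)": 58.65,
    "Winogrande (5-shot)": 78.37,
    "GSM8K (5-shot)": 7.73,
    "DROP (3-shot)": 22.9,
}

# Unweighted mean, rounded to two decimals as in the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 53.33
```

This matches the "Avg. 53.33" row, suggesting the leaderboard average is a simple arithmetic mean over all seven tasks.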