Tags: Text Generation · Transformers · PyTorch · Safetensors · English · llama · text-generation-inference · Inference Endpoints
Commit 01d38bc (parent: f378a4f)

Adding Evaluation Results (#4)


- Adding Evaluation Results (cdd899e53be6c448a8625770739c0408b08714d5)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -116,4 +116,17 @@ Please see the Responsible Use Guide available at https://ai.meta.com/llama/resp
   year={2022},
   url={https://openreview.net/forum?id=nZeVKeeFYf9}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_garage-bAInd__Platypus2-70B)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 64.16 |
+| ARC (25-shot)         | 70.65 |
+| HellaSwag (10-shot)   | 87.15 |
+| MMLU (5-shot)         | 70.08 |
+| TruthfulQA (0-shot)   | 52.37 |
+| Winogrande (5-shot)   | 84.37 |
+| GSM8K (5-shot)        | 33.06 |
+| DROP (3-shot)         | 51.41 |