Text Generation
PEFT
Safetensors
Eval Results
leaderboard-pr-bot committed on
Commit
228cec6
1 Parent(s): 798a7a8

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED

@@ -209,4 +209,17 @@ See attached [Colab Notebook](https://huggingface.co/dfurman/Falcon-40B-Chat-v0.
  - `peft`: 0.4.0.dev0
  - `accelerate`: 0.19.0
  - `bitsandbytes`: 0.39.0
- - `einops`: 0.6.1
+ - `einops`: 0.6.1
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__falcon-40b-openassistant-peft)
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 51.17 |
+ | ARC (25-shot)        | 62.63 |
+ | HellaSwag (10-shot)  | 85.59 |
+ | MMLU (5-shot)        | 57.77 |
+ | TruthfulQA (0-shot)  | 51.02 |
+ | Winogrande (5-shot)  | 81.45 |
+ | GSM8K (5-shot)       | 13.34 |
+ | DROP (3-shot)        | 6.36  |
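As a sanity check on the numbers being added, the leaderboard's reported `Avg.` is simply the arithmetic mean of the seven per-benchmark scores. A minimal sketch (the `scores` dict below just restates the table values; it is not part of the PR):

```python
# Per-benchmark scores from the evaluation results table above
scores = {
    "ARC (25-shot)": 62.63,
    "HellaSwag (10-shot)": 85.59,
    "MMLU (5-shot)": 57.77,
    "TruthfulQA (0-shot)": 51.02,
    "Winogrande (5-shot)": 81.45,
    "GSM8K (5-shot)": 13.34,
    "DROP (3-shot)": 6.36,
}

# Unweighted mean over the seven benchmarks, rounded to two decimals
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 51.17, matching the Avg. row in the table
```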