Adding Evaluation Results #1
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -1,8 +1,8 @@
 ---
 license: apache-2.0
-base_model: princeton-nlp/Sheared-LLaMA-1.3B
 tags:
 - generated_from_trainer
+base_model: princeton-nlp/Sheared-LLaMA-1.3B
 model-index:
 - name: out
   results: []
@@ -96,3 +96,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu117
 - Datasets 2.15.0
 - Tokenizers 0.15.0
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Dans-DiscountModels__ShearedLlama-1.3b-FFT-Test1)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |35.71|
+|AI2 Reasoning Challenge (25-Shot)|32.68|
+|HellaSwag (10-Shot)              |59.99|
+|MMLU (5-Shot)                    |25.69|
+|TruthfulQA (0-shot)              |36.97|
+|Winogrande (5-shot)              |58.72|
+|GSM8k (5-shot)                   | 0.23|
+
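The per-example records behind the table above live in the details dataset linked in the added section. The snippet below is a minimal sketch of reading it with the `datasets` library, assuming the usual Open LLM Leaderboard details layout (one config per evaluated task and a `latest` split); the config name `harness_gsm8k_5` is an assumption and may differ for this repository.

```python
# Minimal sketch, assuming the standard Open LLM Leaderboard details layout:
# one config per evaluated task and a "latest" split. The config name
# "harness_gsm8k_5" is an assumption, not confirmed by this PR.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_Dans-DiscountModels__ShearedLlama-1.3b-FFT-Test1",
    "harness_gsm8k_5",  # assumed per-task config for the GSM8k (5-shot) run
    split="latest",
)
print(details[0])  # one per-example record from the evaluation run
```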