Update README.md #1
by shubhrapandit - opened

README.md CHANGED
@@ -50,17 +50,17 @@ Model evaluation metrics and results.

| Benchmark                                      | Metric        | Llama-2-7b-instruct | Llama-2-7b-pruned50-retrained-instruct |
|------------------------------------------------|---------------|---------------------|----------------------------------------|
-| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 |
-| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |
-| [WinoGrande](https://arxiv.org/abs/1907.10641) |
-| [ARC-c](https://arxiv.org/abs/1911.01547) |
-| [TruthfulQA](https://arxiv.org/abs/2109.07958) |
-| [
-| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
+| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 48.60% | 45.10% |
+| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 79.45% | 78.86% |
+| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 75.69% | 72.61% |
+| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 53.92% | 50.77% |
+| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 0-shot | 43.63% | 44.40% |
+| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 15.92% | 16.38% |
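As a quick sanity check on the table above, the per-benchmark deltas between the dense and pruned columns can be computed directly from the reported scores. This is purely illustrative (values copied from the added rows of the diff), not part of the model card itself:

```python
# Per-benchmark accuracy change (percentage points) between the dense
# Llama-2-7b-instruct and the 50%-pruned retrained variant, using the
# six scores from the table above.
dense  = {"MMLU": 48.60, "HellaSwag": 79.45, "WinoGrande": 75.69,
          "ARC-c": 53.92, "TruthfulQA": 43.63, "GSM8K": 15.92}
pruned = {"MMLU": 45.10, "HellaSwag": 78.86, "WinoGrande": 72.61,
          "ARC-c": 50.77, "TruthfulQA": 44.40, "GSM8K": 16.38}

deltas = {k: round(pruned[k] - dense[k], 2) for k in dense}
avg_delta = sum(deltas.values()) / len(deltas)

print(deltas)
print(f"average delta: {avg_delta:+.2f} points")
```

The pruned model trails on the knowledge-heavy benchmarks (MMLU, WinoGrande, ARC-c) while roughly matching or slightly exceeding the dense baseline on TruthfulQA and GSM8K.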

## Model Training Details

-
+This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned50-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) on a blend of the [Open Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), 10% [Open Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca), and 10% [Dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin) datasets.
+Training was performed for 2 epochs.
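The PR does not spell out the blending step in code. As a rough illustration of the "all of Open Platypus plus 10% of Open Orca and 10% of Dolphin" recipe, here is a minimal sketch of reproducible fractional subsampling, using hypothetical in-memory records in place of the real datasets:

```python
import random

def subsample(records, fraction, seed=42):
    """Return a reproducible random fraction of a dataset's records."""
    rng = random.Random(seed)
    k = int(len(records) * fraction)
    return rng.sample(records, k)

# Hypothetical stand-ins for the three datasets named in the README;
# the real blend would load them from the Hugging Face Hub.
open_platypus = [{"source": "platypus", "id": i} for i in range(1000)]
open_orca     = [{"source": "orca", "id": i} for i in range(1000)]
dolphin       = [{"source": "dolphin", "id": i} for i in range(1000)]

# Blend: all of Open Platypus, 10% of Open Orca, 10% of Dolphin.
blend = open_platypus + subsample(open_orca, 0.10) + subsample(dolphin, 0.10)
print(len(blend))  # 1200 records in this toy example
```

Fixing the random seed keeps the sampled subsets stable across runs, which matters when a training mixture needs to be reproduced exactly.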

## Help