---
license: apache-2.0
tags:
- generated_from_trainer
base_model: princeton-nlp/Sheared-LLaMA-1.3B
model-index:
- name: out
  results: []
---


[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# out

This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5764

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 300
- num_epochs: 1
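
The `total_train_batch_size` of 9 follows directly from the per-device batch size and the gradient accumulation steps. A quick sanity check (assuming a single device, which the card does not state explicitly):

```python
# Effective batch size = per-device batch size x gradient accumulation steps
# (x number of devices; a single device is assumed here).
train_batch_size = 3
gradient_accumulation_steps = 3
num_devices = 1  # assumption; not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 9
```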

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0341        | 0.03  | 30   | 1.7322          |
| 1.7952        | 0.05  | 60   | 1.7122          |
| 1.8476        | 0.08  | 90   | 1.7011          |
| 2.1762        | 0.1   | 120  | 1.6919          |
| 2.0459        | 0.13  | 150  | 1.6838          |
| 1.7417        | 0.16  | 180  | 1.6761          |
| 1.7508        | 0.18  | 210  | 1.6685          |
| 1.8539        | 0.21  | 240  | 1.6612          |
| 1.7672        | 0.24  | 270  | 1.6559          |
| 1.7327        | 0.26  | 300  | 1.6521          |
| 1.9346        | 0.29  | 330  | 1.6458          |
| 1.8972        | 0.31  | 360  | 1.6432          |
| 1.545         | 0.34  | 390  | 1.6394          |
| 1.6737        | 0.37  | 420  | 1.6351          |
| 1.9233        | 0.39  | 450  | 1.6305          |
| 1.6822        | 0.42  | 480  | 1.6274          |
| 1.3781        | 0.44  | 510  | 1.6243          |
| 1.8232        | 0.47  | 540  | 1.6209          |
| 1.6995        | 0.5   | 570  | 1.6178          |
| 1.9164        | 0.52  | 600  | 1.6145          |
| 1.8104        | 0.55  | 630  | 1.6116          |
| 1.9563        | 0.58  | 660  | 1.6098          |
| 1.9536        | 0.6   | 690  | 1.6063          |
| 1.9269        | 0.63  | 720  | 1.6043          |
| 1.6234        | 0.65  | 750  | 1.6026          |
| 1.7635        | 0.68  | 780  | 1.5994          |
| 1.2534        | 0.71  | 810  | 1.5966          |
| 1.8849        | 0.73  | 840  | 1.5951          |
| 1.8618        | 0.76  | 870  | 1.5925          |
| 1.8688        | 0.79  | 900  | 1.5896          |
| 1.9419        | 0.81  | 930  | 1.5877          |
| 1.637         | 0.84  | 960  | 1.5878          |
| 1.4612        | 0.86  | 990  | 1.5848          |
| 1.5509        | 0.89  | 1020 | 1.5832          |
| 1.706         | 0.92  | 1050 | 1.5816          |
| 1.8552        | 0.94  | 1080 | 1.5797          |
| 1.7589        | 0.97  | 1110 | 1.5791          |
| 1.4988        | 0.99  | 1140 | 1.5764          |
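For intuition, the final validation loss of 1.5764 (a per-token cross-entropy) corresponds to a perplexity of roughly exp(1.5764) ≈ 4.84:

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
final_val_loss = 1.5764  # last row of the training results table
perplexity = math.exp(final_val_loss)
print(f"{perplexity:.2f}")  # ≈ 4.84
```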


### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Dans-DiscountModels__ShearedLlama-1.3b-FFT-Test1)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |35.71|
|AI2 Reasoning Challenge (25-Shot)|32.68|
|HellaSwag (10-Shot)              |59.99|
|MMLU (5-Shot)                    |25.69|
|TruthfulQA (0-shot)              |36.97|
|Winogrande (5-shot)              |58.72|
|GSM8k (5-shot)                   | 0.23|
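
The reported Avg. is the unweighted mean of the six benchmark scores, which can be verified directly:

```python
# Leaderboard scores from the table above; Avg. is their unweighted mean.
scores = {
    "ARC (25-shot)": 32.68,
    "HellaSwag (10-shot)": 59.99,
    "MMLU (5-shot)": 25.69,
    "TruthfulQA (0-shot)": 36.97,
    "Winogrande (5-shot)": 58.72,
    "GSM8k (5-shot)": 0.23,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 35.71
```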