---
datasets:
  - sciq
  - metaeval/ScienceQA_text_only
  - GAIR/lima
  - Open-Orca/OpenOrca
  - openbookqa
language:
- en
tags:
- upstage
- llama
- instruct
- instruction
pipeline_tag: text-generation
---
# LLaMa-2-70b-instruct-1024 model card

## Model Details

* **Developed by**: [Upstage](https://en.upstage.ai)
* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints are licensed under the Creative Commons Attribution-NonCommercial 4.0 license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **Where to send comments**: To provide feedback or comments on the model, open an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/Llama-2-70b-instruct-1024/discussions)
* **Contact**: For questions and comments about the model, please email `[email protected]`
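
The checkpoint loads with the Transformers library listed above. A minimal sketch (half precision and `device_map="auto"` are illustrative choices for fitting a 70B checkpoint, not settings prescribed by this card; `device_map="auto"` additionally requires the `accelerate` package):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# repository id, as in the discussion link above
model_id = "upstage/Llama-2-70b-instruct-1024"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~140 GB of weights even at fp16
    device_map="auto",          # shard layers across available GPUs
)
```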

## Dataset Details

### Used Datasets

- [openbookqa](https://huggingface.co/datasets/openbookqa)
- [sciq](https://huggingface.co/datasets/sciq)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
- [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)

> No other data was used except for the datasets mentioned above
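
All five datasets are hosted on the Hugging Face Hub, so they can be inspected with the `datasets` library. A minimal sketch (the split name and field names follow `sciq`'s own layout and differ per dataset):

```python
from datasets import load_dataset

# any of the dataset ids listed above can be substituted here
sciq = load_dataset("sciq", split="train")
print(sciq[0]["question"], "->", sciq[0]["correct_answer"])
```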

### Prompt Template
```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
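
For generation, the template is filled up to the `### Assistant:` header and the model completes the rest. A minimal sketch building on the loading snippet above (the system message and generation settings are illustrative):

```python
prompt = (
    "### System:\n"
    "You are a helpful, factual assistant.\n\n"
    "### User:\n"
    "Explain why the sky is blue in two sentences.\n\n"
    "### Assistant:\n"  # left open so the model fills in the answer
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```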

## Hardware and Software

* **Hardware**: We utilized eight NVIDIA A100 GPUs (A100 × 8) to train our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer); a launch sketch follows below
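
A minimal sketch of what such a DeepSpeed-backed Trainer run can look like. The `ds_zero3.json` config file, the hyperparameters, the use of `sciq` as the rendered dataset, and the Hub mirror `meta-llama/Llama-2-70b-hf` of the backbone are all illustrative assumptions, not the exact recipe behind this checkpoint:

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# TrainingArguments first: with a ZeRO-3 config attached, Transformers can
# shard the 70B weights across ranks while from_pretrained loads them
args = TrainingArguments(
    output_dir="llama2-70b-instruct-ft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    num_train_epochs=1,
    logging_steps=10,
    deepspeed="ds_zero3.json",  # hypothetical ZeRO-3 config file
)

base = "meta-llama/Llama-2-70b-hf"  # LLaMA-2 backbone (gated repository)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# illustrative data prep: render one of the listed datasets into the prompt template
raw = load_dataset("sciq", split="train")

def to_features(ex):
    text = f"### User:\n{ex['question']}\n\n### Assistant:\n{ex['correct_answer']}"
    return tokenizer(text, truncation=True, max_length=1024)

train_dataset = raw.map(to_features, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Saved as, say, `train.py`, this would be launched with `deepspeed --num_gpus=8 train.py`, so each of the eight GPUs holds one shard of the ZeRO-partitioned parameters and optimizer state.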

## Evaluation Results

### Overview
- We evaluated the model on the four benchmark tasks used by the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

### Main Results
| Model                                         | Average | ARC   | HellaSwag | MMLU  | TruthfulQA |
|-----------------------------------------------|---------|-------|-----------|-------|------------|
| Llama-2-70b-instruct-1024 (***Ours***, ***Local Reproduction***) | **72.02** | **70.73** | **87.41** | **69.27** | **60.68** |
| llama-65b-instruct (***Ours***, ***Local Reproduction***) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 |
| llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |

### Scripts
- Prepare the evaluation environment:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change to the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

# install the harness and its dependencies
pip install -e .
```
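
- Run a task. A sketch: the flags follow the harness CLI at the pinned commit, and the 25-shot setting mirrors the Open LLM Leaderboard's `ARC-Challenge` configuration; the other three tasks are run analogously with their own few-shot settings:
```
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=upstage/Llama-2-70b-instruct-1024 \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1
```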

## Ethical Issues

### Ethical Considerations
- We see no benchmark-contamination concerns: neither the benchmark test sets nor their training sets were included in the model's training data.

## Contact Us

### Why Upstage LLM?
- [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 30B model **outperforms all models around the world**, positioning itself as the leading performer. Recognizing the immense potential of bringing private LLMs to real businesses, we invite you to adopt a private LLM easily and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact].

[click here to contact]: mailto:[email protected]