---
license: unknown
library_name: peft
tags:
- llama-2
datasets:
- ehartford/dolphin
- garage-bAInd/Open-Platypus
inference: false
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---
<div align="center">
<img src="./assets/llama.png" width="150px">
</div>
# Llama-2-7B-Instruct-v0.1
This instruction model was built via parameter-efficient QLoRA finetuning of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 2 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
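For context, a minimal sketch of a QLoRA setup along these lines is shown below, using `peft` and 4-bit `bitsandbytes` loading; the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the exact hyperparameters of this run.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 (QLoRA-style), matching the quantization
# config documented under "Training procedure" below
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Illustrative LoRA hyperparameters -- not necessarily those used for this model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```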
## Benchmark metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 46.63 |
| ARC (25-shot) | 51.19 |
| HellaSwag (10-shot) | 78.92 |
| TruthfulQA (0-shot)   | 48.50 |
| Avg. | 56.31 |
We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as Hugging Face's [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
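For illustration, a single leaderboard task can be run from Python roughly as follows. This is a sketch against the harness's `simple_evaluate` API; the leaderboard pins a specific harness version, and the `peft=` model argument for loading the adapter is an assumption about your installed version.
```python
import lm_eval

# Sketch: evaluate the base model plus this adapter on one leaderboard task.
# The leaderboard uses a pinned harness version, so scores may differ slightly.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-hf,peft=dfurman/Llama-2-7B-Instruct-v0.1",
    tasks=["hellaswag"],
    num_fewshot=10,  # matches the 10-shot HellaSwag setting above
)
print(results["results"])
```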
## Helpful links
* Model license: coming
* Basic usage: coming
* Finetuning code: coming
* Loss curves: coming
* Runtime stats: coming
## Loss curve
![loss curve](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/sep_12_23_9_20_00_log_loss_curves_Llama-2-7b-instruct.png)
The above loss curve was generated from the run's private wandb.ai log.
## Limitations and biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
This model can produce factually incorrect output and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## How to use
* [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
```python
!pip install -q -U huggingface_hub peft transformers torch accelerate
```
```python
from huggingface_hub import notebook_login
import torch
from peft import PeftModel, PeftConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

# Log in to access the gated Llama-2 base model weights
notebook_login()
```
```python
peft_model_id = "dfurman/Llama-2-7B-Instruct-v0.1"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 4-bit NF4 with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    use_auth_token=True,  # the Llama-2 base weights are gated
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token  # Llama-2 ships without a pad token

# Attach the finetuned LoRA adapter to the quantized base model
model = PeftModel.from_pretrained(model, peft_model_id)

format_template = "You are a helpful assistant. {query}\n"
```
```python
# First, format the prompt
query = "Tell me a recipe for vegan banana bread."
prompt = format_template.format(query=query)

# Inference can be done using model.generate
print("\n\n*** Generate:")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        input_ids=input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        repetition_penalty=1.2,
    )

print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
```
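The imports above also include `pipeline`, so the same model and tokenizer can alternatively be wrapped in a text-generation pipeline. This is a sketch, assuming a `transformers` version whose pipelines accept PEFT models; the generation settings mirror the illustrative values used with `model.generate` above.
```python
# Alternative: wrap the adapter-equipped model in a text-generation pipeline
# (assumes a transformers version with PEFT support in pipelines)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

output = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    return_full_text=False,  # return only the newly generated continuation
)
print(output[0]["generated_text"])
```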
## Runtime tests
coming
## Acknowledgements
This model was finetuned by Daniel Furman on Sep 10, 2023, and is for research applications only.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## meta-llama/Llama-2-7b-hf citation
```
coming
```
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
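For reference, this corresponds to roughly the following `BitsAndBytesConfig`; the `llm_int8_*` fields above are 8-bit defaults that are inert when loading in 4-bit.
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```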
## Framework versions
- PEFT 0.6.0.dev0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__llama-2-7b-instruct-peft).
| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 44.50 |
| ARC (25-shot)         | 51.19 |
| HellaSwag (10-shot)   | 78.92 |
| MMLU (5-shot)         | 46.63 |
| TruthfulQA (0-shot)   | 48.50 |
| Winogrande (5-shot)   | 74.43 |
| GSM8K (5-shot)        | 5.99  |
| DROP (3-shot)         | 5.82  |