# Shortened LLM Model Card

Shortened LLM is a family of depth-pruned large language models for efficient text generation.

- **Developed by:** [Nota AI](https://www.nota.ai/)
- **License:** Non-commercial license
- **Repository:** https://github.com/Nota-NetsPresso/shortened-llm
- **Paper:** https://arxiv.org/abs/2402.02834

## Compression Method
* After identifying unimportant Transformer blocks, we perform **one-shot pruning** (see the sketch after this list).
* To recover quality, the pruned models are retrained with **continued pretraining (CPT)**, which updates all parameters on a large-scale pretraining corpus.
* After CPT, the CPT⇒LoRA models in this card are further fine-tuned with **low-rank adaptation (LoRA)** on an instruction-tuning dataset.
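The sketch below illustrates the one-shot depth-pruning step conceptually. It is not the authors' code: the base model ID is an example, the importance scores are random placeholders (the paper derives them from criteria such as PPL or Taylor+ on calibration data), and the 20% ratio is just one of the settings listed in the tables below.

```python
# Conceptual sketch of one-shot depth pruning for a LLaMA-style model (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.3")  # assumed base model
layers = model.model.layers
n_layers = len(layers)

# Placeholder block-importance scores; in the paper these come from criteria
# such as perplexity (PPL) or Taylor+, computed on calibration data.
importance = torch.rand(n_layers)

# One-shot pruning: remove the least-important 20% of Transformer blocks.
pruning_ratio = 0.2
n_remove = int(pruning_ratio * n_layers)
remove_idx = set(importance.argsort()[:n_remove].tolist())

model.model.layers = nn.ModuleList(
    [layer for i, layer in enumerate(layers) if i not in remove_idx]
)
model.config.num_hidden_layers = len(model.model.layers)
```

After pruning, the smaller model is retrained (CPT, optionally followed by LoRA) as described in the sections below.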
## Models from Aggressive Pruning & CPT Retraining (arXiv-v2):
| Source<br>Model | Pruning<br>Ratio | Pruning<br>Criterion | Retraining<br>Method | HF Models<br>Link |
|:---:|:---:|:---:|:---:|:---:|
| Vicuna-v1.3-7B | 20% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 45% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl) |
| Vicuna-v1.3-7B | 60% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-2.7b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-2.7b-ppl) |
| Vicuna-v1.3-7B | 80% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-1.5b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-1.5b-ppl) |
| Vicuna-v1.3-7B | 20% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 45% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-3.7b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-3.7b-ppl) |
| Vicuna-v1.3-7B | 60% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-2.7b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-2.7b-ppl) |
| Vicuna-v1.3-7B | 80% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-1.5b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-1.5b-ppl) |
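These checkpoints load like any Hugging Face causal LM. The snippet below is a minimal sketch (assuming `transformers` and `torch` are installed); the prompt and generation settings are illustrative, and Vicuna-style chat formatting may give better results.

```python
# Minimal loading/generation sketch for one of the models listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nota-ai/cpt-lora_st-vicuna-v1.3-5.5b-ppl"  # any model from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain depth pruning of a language model in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```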
<details>
<summary>
Click to see the results:
</summary>

- EleutherAI/lm-evaluation-harness version [3326c54](https://github.com/EleutherAI/lm-evaluation-harness/tree/3326c547a733d598b4377e54be96e194861b964c)

<img alt="results" src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st_llm-cpt_results.png" width="100%">

</details>
#### Experimental Setup for CPT of Pruned Vicuna-7B
* Dataset: [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B)
* Training on 8 NVIDIA H100 GPUs:
  * 5.5B parameters: 37B training tokens (6 days)
  * 3.7B parameters: 74B tokens (8 days)
  * 2.7B parameters: 150B tokens (12 days)
  * 1.5B parameters: 271B tokens (11 days)
* AdamW optimizer with (β1, β2) = (0.9, 0.95), a learning rate of 0.0001, and a weight decay of 0.1 (see the configuration sketch after this list).
* Global batch size of 512 (micro-batch size of 2 × 32 gradient-accumulation steps × 8 GPUs).
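The optimizer and batch settings above translate directly into PyTorch. The sketch below is illustrative only: the checkpoint used to supply parameters is just an example, and the actual training script, scheduler, and data pipeline are not shown.

```python
# Illustrative sketch of the reported CPT optimizer/batch settings (not the authors' training code).
import torch
from transformers import AutoModelForCausalLM

# Any pruned checkpoint from the table above, used here only to supply parameters.
model = AutoModelForCausalLM.from_pretrained("nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,            # learning rate of 0.0001
    betas=(0.9, 0.95),  # (β1, β2) = (0.9, 0.95)
    weight_decay=0.1,
)

# Global batch size 512 = micro-batch 2 × 32 gradient-accumulation steps × 8 GPUs.
micro_batch, grad_accum_steps, n_gpus = 2, 32, 8
assert micro_batch * grad_accum_steps * n_gpus == 512
```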
<details>
<summary>
Click to see the learning curve:
</summary>

**Zero-shot performance over the course of training for models from Vicuna-7B-v1.3 at different pruning ratios.** For each model size, the CPT duration was limited to a two-week period, but additional training could further improve the quality.

<img alt="results" src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st_llm-cpt_learning-curve.png" width="100%">

</details>
#### Experimental Setup for LoRA Instruction Tuning
* Dataset: [Refined Alpaca](https://huggingface.co/datasets/yahma/alpaca-cleaned)
* Training on a single NVIDIA A100 GPU:
  * The retraining cost is low; the entire process runs on one GPU.
  * For example, LoRA retraining of a 20%-pruned model from 7B parameters takes about 2 hours and 22GB of VRAM.
* LoRA rank of 8; AdamW optimizer with a learning rate of 0.0001 (see the PEFT sketch after this list).
* Batch size of 64 over 2 epochs.
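A hedged sketch of this LoRA setup with Hugging Face PEFT is shown below. Only the rank (8) and learning rate (0.0001) come from this card; `lora_alpha`, `lora_dropout`, and the target modules are assumptions for a typical LLaMA-style configuration.

```python
# Hedged sketch of the LoRA instruction-tuning setup (not the authors' exact configuration).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl")

lora_config = LoraConfig(
    r=8,                                  # LoRA rank of 8 (from this card)
    lora_alpha=16,                        # assumption: not specified in this card
    lora_dropout=0.05,                    # assumption: not specified in this card
    target_modules=["q_proj", "v_proj"],  # assumption: typical LLaMA-style attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```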
## Models from Moderate Pruning & LoRA Retraining (arXiv-v1):
| Source<br>Model | Pruning<br>Ratio | Pruning<br>Criterion | HF Models<br>Link |
|:---:|:---:|:---:|:---:|
| LLaMA-1-7B | 20% | PPL | [nota-ai/st-llama-1-5.5b-ppl](https://huggingface.co/nota-ai/st-llama-1-5.5b-ppl) |
| LLaMA-1-7B | 20% | Taylor+ | [nota-ai/st-llama-1-5.5b-taylor](https://huggingface.co/nota-ai/st-llama-1-5.5b-taylor) |
| Vicuna-v1.3-7B | 20% | PPL | [nota-ai/st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 20% | Taylor+ | [nota-ai/st-vicuna-v1.3-5.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-taylor) |
| Vicuna-v1.3-13B | 21% | PPL | [nota-ai/st-vicuna-v1.3-10.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-ppl) |
| Vicuna-v1.3-13B | 21% | Taylor+ | [nota-ai/st-vicuna-v1.3-10.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-taylor) |
<details>
<summary>
Click to see the results:
</summary>

- EleutherAI/lm-evaluation-harness version [3326c54](https://github.com/EleutherAI/lm-evaluation-harness/tree/3326c547a733d598b4377e54be96e194861b964c)

<img alt="results" src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st-llama_zero-shot_scores.png" width="100%">

</details>
## License
- All rights related to this repository and the compressed models are reserved by Nota Inc.
- The intended use is strictly limited to research and non-commercial projects.
## Acknowledgments
- [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) and [Gwangju AICA](http://www.aica-gj.kr/main.php) for generously providing GPU resources.
- [LLM-Pruner](https://github.com/horseee/LLM-Pruner), which utilizes [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness), [PEFT](https://github.com/huggingface/peft), and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). Thanks for the pioneering work on structured pruning of LLMs!
- [LLaMA](https://github.com/facebookresearch/llama), [Vicuna](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md), [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B), and [Alpaca-Cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). Thanks for the open-source LLMs and data!
## Citation
```bibtex
@article{kim2024shortened,
  title={Shortened LLaMA: Depth Pruning for Large Language Models with Comparison of Retraining Methods},
  author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
  journal={arXiv preprint arXiv:2402.02834},
  year={2024},
  url={https://arxiv.org/abs/2402.02834}
}
```
```bibtex
@article{kim2024mefomo,
  title={Shortened LLaMA: A Simple Depth Pruning for Large Language Models},
  author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
  journal={ICLR Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)},
  year={2024},
  url={https://openreview.net/forum?id=18VGxuOdpu}
}
```