Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ tags: []
 
 The LION-series are trained using an **empirically optimized pipeline** that consists of three stages: SFT, DPO, and online preference learning (online DPO). We find that simple techniques such as sequence packing, loss masking in SFT, increasing the preference dataset size in DPO, and online DPO training can significantly improve the performance of language models. Our best models (the LION-series) **exceed the performance of the official instruct models** tuned with closed-source data and algorithms.
 
-For training datasets, code, and evaluation scripts, please refer to our paper and codebase
+For training datasets, code, and evaluation scripts, please refer to our [paper](https://arxiv.org/abs/2407.06542) and [codebase](https://github.com/Columbia-NLP-Lab/LionAlignment).
 
 
 ## Model description
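As background for the SFT techniques named in the paragraph above: sequence packing concatenates multiple tokenized examples into fixed-length blocks so no compute is wasted on padding, and loss masking restricts the cross-entropy loss to assistant tokens. Below is a minimal PyTorch sketch of both, not taken from the LION codebase; it assumes the common Hugging Face convention that positions labeled -100 (PyTorch's default `ignore_index`) are excluded from the loss.

```python
# Illustrative sketch only -- not the LION training code.
from typing import List

import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # PyTorch's default ignore_index for cross_entropy


def pack_sequences(examples: List[List[int]], block_size: int) -> List[List[int]]:
    """Sequence packing: concatenate tokenized examples into fixed-size blocks."""
    stream = [tok for ex in examples for tok in ex]
    # Emit only full blocks; any remainder shorter than block_size is dropped.
    return [stream[i : i + block_size] for i in range(0, len(stream) - block_size + 1, block_size)]


def masked_sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Next-token cross-entropy over assistant tokens only (loss masking).

    Prompt/user positions are expected to already carry IGNORE_INDEX in
    `labels`, so only assistant tokens contribute to the loss.
    """
    logits = logits[:, :-1, :].contiguous()  # positions predicting the next token
    labels = labels[:, 1:].contiguous()      # the tokens being predicted
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=IGNORE_INDEX,
    )
```

When packing and masking are combined, the label mask must be built per segment before packing, so prompt tokens from every packed example stay excluded from the loss.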
@@ -84,15 +84,24 @@ print(prompt)
 
 ### Training details
 
-Please refer to our
+Please refer to our [paper](https://arxiv.org/abs/2407.06542) and [codebase](https://github.com/Columbia-NLP-Lab/LionAlignment).
 
 
-<!--
+## Citation Information
 
 If you find this model useful in your work, please consider citing our paper:
+
+```
+@misc{yu2024lionsempiricallyoptimizedapproach,
+      title={LIONs: An Empirically Optimized Approach to Align Language Models},
+      author={Xiao Yu and Qingyang Wu and Yu Li and Zhou Yu},
+      year={2024},
+      eprint={2407.06542},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2407.06542},
+}
 ```
-@misc{tmp}
-``` -->
 
 ## Acknowledgements
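The second and third pipeline stages both optimize the DPO objective, first on a fixed (and enlarged) preference dataset and then online. For reference, here is a minimal sketch of the standard DPO loss (Rafailov et al., 2023); the function and argument names are illustrative, not from the LION codebase, and the `beta` default is a common choice rather than LION's reported setting.

```python
# Illustrative sketch of the standard DPO objective -- not the LION codebase.
import torch
import torch.nn.functional as F


def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_chosen | x), summed over tokens
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_rejected | x)
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # strength of the implicit KL constraint
) -> torch.Tensor:
    """-log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In the online stage the same loss applies, but preference pairs are built from responses sampled from the current policy rather than a fixed dataset.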