Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ license: apache-2.0
 # LongAlign-13B-64k-base
 
 <p align="center">
-🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/" target="_blank">[LongAlign Paper]</a>
+🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a>
 </p>
 
 **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 pieces of long instruction data of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k tokens in length.
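
The paragraph above names two training strategies without defining them. As a rough, non-authoritative sketch (the real implementations live in the Github repo linked above; the function names, the per-token-loss interface, and the equal-weight-per-sequence formula here are illustrative assumptions), the two ideas look roughly like this in Python:

```python
import torch

def sorted_batches(samples, batch_size):
    """Sorted batching: order samples by length and cut consecutive
    batches, so each batch holds similar-length sequences and little
    compute is wasted on padding. `samples` is a list of token-id lists.
    (Illustrative sketch, not the repo's code.)"""
    ordered = sorted(samples, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

def packed_loss(token_losses, seq_ids):
    """Packing with loss weighting: when several sequences are packed
    into one training example, average the token losses within each
    sequence first, then average across sequences, so a short sample
    is not drowned out by a long one.
    token_losses: (T,) per-token cross-entropy; seq_ids: (T,) index of
    the packed sequence each token came from. (Assumed interface.)"""
    ids = seq_ids.unique()
    per_seq = torch.stack([token_losses[seq_ids == s].mean() for s in ids])
    return per_seq.mean()
```

Whether the released code weights sequences exactly this way is best checked against the repo; the sketch only makes the two terms concrete.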