Update README.md
README.md
CHANGED
@@ -17,6 +17,8 @@ It is suitable for use if you have substantial fine-tuning data to tune it for y
 
 The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese.
 
+For details of this model please read our [paper](https://arxiv.org/abs/).
+
 Practicality-wise:
 - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
 - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
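
The inference-speed claim in the first bullet follows from the expanded vocabulary: Traditional Chinese text is encoded into fewer tokens, so generation needs fewer forward passes. A minimal sketch of how to check this yourself is below; the Hugging Face repo ids `MediaTek-Research/Breeze-7B-Instruct-v1_0` and `mistralai/Mistral-7B-v0.1` are assumptions about the published checkpoints, not taken from this diff.

```python
from transformers import AutoTokenizer

# Assumed repo ids; adjust to the checkpoints you actually use.
breeze_tok = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

text = "今天天氣很好，我們一起去公園散步吧。"  # sample Traditional Chinese sentence

# Fewer tokens per sentence means fewer decoding steps at generation time,
# which is where the speed advantage for Traditional Chinese comes from.
print("Breeze tokens: ", len(breeze_tok.tokenize(text)))
print("Mistral tokens:", len(mistral_tok.tokenize(text)))
```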
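
For the second bullet, a minimal usage sketch for Q&A or multi-round chat through `transformers` is shown below. The repo id, the presence of a chat template on the checkpoint, and the generation settings are assumptions for illustration, not the official usage snippet from the README.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MediaTek-Research/Breeze-7B-Instruct-v1_0"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt with the tokenizer's chat template (assuming the checkpoint ships one);
# append further {"role": ..., "content": ...} turns for multi-round chat.
messages = [{"role": "user", "content": "請用三句話介紹台北。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```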