SeaLLMs - Large Language Models for Southeast Asia
🤗 Tech Memo · 🤗 DEMO · GitHub · Technical Report
SeaLLM-hybrid-7b
This is a 7B pre-trained & SFT hybrid version of SeaLLMs. It supports Vietnamese 🇻🇳, Indonesian 🇮🇩, Thai 🇹🇭, Malay 🇲🇾, Khmer 🇰🇭, Lao 🇱🇦, Tagalog 🇵🇭, and Burmese 🇲🇲. SeaLLM-hybrid-7b is pre-trained from Llama-2 on unlabeled raw text, and then fine-tuned with a mix of English-only SFT data and unlabeled text from the other languages.
This hybrid model should be treated as a base model: it is not expected to follow instructions, and should instead be used with few-shot prompting.
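Since the model is not instruction-tuned, each prompt should embed a few demonstrations of the task before the actual query. A minimal sketch of building such a few-shot prompt (the `Input:`/`Output:` template here is an illustrative assumption, not a format prescribed by the authors):

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) demonstration pairs plus a new query
    as a plain-text few-shot prompt for a base (non-instruct) model."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # The trailing "Output:" invites the model to continue with the answer.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Example: English -> Vietnamese translation via two demonstrations.
demos = [
    ("Hello", "Xin chào"),
    ("Thank you", "Cảm ơn"),
]
prompt = build_few_shot_prompt(demos, "Good morning")
```

The resulting string is fed directly to the model; the demonstrations steer the base model toward the task without any instruction tuning.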
It may have lower capability and performance than the 13B models but it is much more memory-efficient and faster.
Visit our Technical Report and 🤗 Tech Memo for more details.
Terms of Use and License: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our SeaLLMs Terms Of Use.
Disclaimer: We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
The logo was generated by DALL-E 3.
How to Run:
SeaLLM models use the same architecture as Llama-2, so any Llama-2-compatible generation codebase (e.g. Hugging Face transformers) is sufficient to run them.
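As one illustration, the model can be loaded through Hugging Face transformers like any Llama-2 checkpoint. This is a sketch, not the authors' reference code; the repo id `SeaLLMs/SeaLLM-hybrid-7b` is an assumption — use the id shown on this model page:

```python
# Sketch: running the model with Hugging Face transformers, assuming
# the repo id "SeaLLMs/SeaLLM-hybrid-7b" (verify against the model page).

def generate(prompt: str,
             model_id: str = "SeaLLMs/SeaLLM-hybrid-7b",
             max_new_tokens: int = 64) -> str:
    # Imports are local so the helper can be defined without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                do_sample=False)
    # Strip the prompt tokens and decode only the continuation.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# generate("Thủ đô của Việt Nam là")  # uncomment to run (downloads weights)
```

Since this is a base model, pass a few-shot prompt rather than a bare instruction.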
Citation
If you find our project useful, please star our repo and cite our work as follows. Corresponding author: [email protected]
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
  eprint = {arXiv:2312.00738},
}