Update README.md
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLMs-v3-7B-Chat" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>

SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.

This page introduces the **SeaLLMs-v3-7B-Chat** model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.

You may also refer to the [SeaLLMs-v3-1.5B-Chat](https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat) model, which requires much lower computational resources and can be easily loaded locally.

### Get started with `Transformers`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "SeaLLMs/SeaLLMs-v3-7B-Chat",  # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
    torch_dtype=torch.bfloat16,
    device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")

# prepare messages to model
prompt = "Hiii How are you?"
```
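A raw prompt like the one above is normally wrapped into a `messages` list and rendered through the tokenizer's chat template before generation. The sketch below (not part of the original README) shows this with the standard `transformers` chat-template API; the `max_new_tokens` value is illustrative, and the model-dependent calls are left as comments since they require the weights to be downloaded:

```python
# Sketch: wrapping the raw prompt for a chat-tuned model.
prompt = "Hiii How are you?"
messages = [{"role": "user", "content": prompt}]

# With `model` and `tokenizer` loaded as above (requires downloading the weights):
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# model_inputs = tokenizer([text], return_tensors="pt").to(device)
# generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=128)
# response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```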

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "SeaLLMs/SeaLLMs-v3-7B-Chat",  # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
    torch_dtype=torch.bfloat16,
    device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")

# prepare messages to model
messages = [
```
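A minimal sketch, assuming the standard `transformers` streaming API, of how `messages` and a `TextStreamer` are typically wired into generation. The message content and generation settings here are illustrative, not from the original README; the model-dependent calls are left as comments since they require the weights:

```python
# Illustrative messages list for the streaming example.
messages = [{"role": "user", "content": "Hiii How are you?"}]

# With `model` and `tokenizer` loaded as above:
# streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# model_inputs = tokenizer([text], return_tensors="pt").to(device)
# Tokens are printed to stdout as they are generated:
# _ = model.generate(model_inputs.input_ids, max_new_tokens=128, streamer=streamer)
```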