---
license: mit
datasets:
- oscar-corpus/OSCAR-2301
- allenai/nllb
- Helsinki-NLP/opus-100
language:
- hu
- el
- cs
- pl
- lt
- lv
base_model:
- haoranxu/ALMA-13B-Pretrain
- meta-llama/Llama-2-13b-hf
---

X-ALMA builds upon [ALMA-R](https://arxiv.org/pdf/2401.08417) by expanding support from 6 to 50 languages. It utilizes a plug-and-play architecture with language-specific modules, complemented by a carefully designed training recipe. This release includes the **language-specific X-ALMA LoRA module and a merged model that supports the languages in Group 5: English (en), Hungarian (hu), Greek (el), Czech (cs), Polish (pl), Lithuanian (lt), and Latvian (lv)**.

All X-ALMA checkpoints are released on Hugging Face:

| Models | Model Link | Description |
|:-------------:|:---------------:|:---------------:|
| X-ALMA | [haoranxu/X-ALMA](https://huggingface.co/haoranxu/X-ALMA) | X-ALMA model with all its modules |
| X-ALMA-13B-Pretrain | [haoranxu/X-ALMA-13B-Pretrain](https://huggingface.co/haoranxu/X-ALMA-13B-Pretrain) | X-ALMA 13B multilingual pre-trained base model |
| X-ALMA-Group1 | [haoranxu/X-ALMA-13B-Group1](https://huggingface.co/haoranxu/X-ALMA-13B-Group1) | X-ALMA Group 1 language-specific module and the merged model |
| X-ALMA-Group2 | [haoranxu/X-ALMA-13B-Group2](https://huggingface.co/haoranxu/X-ALMA-13B-Group2) | X-ALMA Group 2 language-specific module and the merged model |
| X-ALMA-Group3 | [haoranxu/X-ALMA-13B-Group3](https://huggingface.co/haoranxu/X-ALMA-13B-Group3) | X-ALMA Group 3 language-specific module and the merged model |
| X-ALMA-Group4 | [haoranxu/X-ALMA-13B-Group4](https://huggingface.co/haoranxu/X-ALMA-13B-Group4) | X-ALMA Group 4 language-specific module and the merged model |
| X-ALMA-Group5 | [haoranxu/X-ALMA-13B-Group5](https://huggingface.co/haoranxu/X-ALMA-13B-Group5) | X-ALMA Group 5 language-specific module and the merged model (this release) |
| X-ALMA-Group6 | [haoranxu/X-ALMA-13B-Group6](https://huggingface.co/haoranxu/X-ALMA-13B-Group6) | X-ALMA Group 6 language-specific module and the merged model |
| X-ALMA-Group7 | [haoranxu/X-ALMA-13B-Group7](https://huggingface.co/haoranxu/X-ALMA-13B-Group7) | X-ALMA Group 7 language-specific module and the merged model |
| X-ALMA-Group8 | [haoranxu/X-ALMA-13B-Group8](https://huggingface.co/haoranxu/X-ALMA-13B-Group8) | X-ALMA Group 8 language-specific module and the merged model |

## A quick start
There are three ways to load X-ALMA for translation. The examples below translate "我爱机器翻译。" ("I love machine translation.") into English; X-ALMA should also be able to handle multilingual open-ended QA.

**The first way**: loading the merged model, where the language-specific module has already been merged into the base model **(Recommended)**:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from peft import PeftModel

# Language-group assignment used by X-ALMA; each group has its own module and merged model.
GROUP2LANG = {
    1: ["da", "nl", "de", "is", "no", "sv", "af"],
    2: ["ca", "ro", "gl", "it", "pt", "es"],
    3: ["bg", "mk", "sr", "uk", "ru"],
    4: ["id", "ms", "th", "vi", "mg", "fr"],
    5: ["hu", "el", "cs", "pl", "lt", "lv"],
    6: ["ka", "zh", "ja", "ko", "fi", "et"],
    7: ["gu", "hi", "mr", "ne", "ur"],
    8: ["az", "kk", "ky", "tr", "uz", "ar", "he", "fa"],
}
LANG2GROUP = {lang: str(group) for group, langs in GROUP2LANG.items() for lang in langs}
group_id = LANG2GROUP["zh"]

model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"

# X-ALMA needs the chat template, but ALMA and ALMA-R do not.
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
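Since this card ships the Group 5 module, a minimal adaptation of the example above to one of this group's languages (Czech→English) might look as follows. It reuses the imports and `LANG2GROUP` mapping from the block above; the Czech source sentence and the slicing of `generated_ids` (so that only newly generated tokens are decoded) are illustrative choices, not part of the original card:
```
# Illustrative: translate a Group 5 language (Czech -> English) with this card's merged model.
group_id = LANG2GROUP["cs"]  # "5" -> haoranxu/X-ALMA-13B-Group5, the checkpoint released here
model = AutoModelForCausalLM.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')

prompt = "Translate this from Czech to English:\nCzech: Miluji strojový překlad.\nEnglish:"
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt", padding=True).input_ids.cuda()

with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)

# Decode only the newly generated tokens so the prompt is not echoed in the output.
outputs = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)
print(outputs)
```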

**The second way**: loading the base model and the language-specific module **(Recommended)**:
```
model = AutoModelForCausalLM.from_pretrained("haoranxu/X-ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, f"haoranxu/X-ALMA-13B-Group{group_id}")
tokenizer = AutoTokenizer.from_pretrained(f"haoranxu/X-ALMA-13B-Group{group_id}", padding_side='left')
```
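Once the adapter is attached, generation proceeds exactly as in the first example (chat template → tokenize → `generate`). Optionally, peft's `merge_and_unload` can fold the LoRA weights into the base model, which is effectively what the pre-merged checkpoint in the first path already provides; a minimal sketch, reusing the `input_ids` built above:
```
# Optional: fold the LoRA module into the base model so inference runs without the peft wrapper.
model = model.merge_and_unload()

# Generation is then identical to the merged-model path above.
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```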

**The third way**: loading the base model with all language-specific modules, MoE-style **(requires large GPU memory)**:
```
from modeling_xalma import XALMAForCausalLM
model = XALMAForCausalLM.from_pretrained("haoranxu/X-ALMA", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/X-ALMA", padding_side='left')

# Pass `lang="zh"` to tell the model which language (and hence which group module) to use during generation.
# The `lang` argument is only needed for this third loading method.
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9, lang="zh")
```
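To complete the third path, the `input_ids` passed to `generate` can be built exactly as in the first example; `modeling_xalma.py` is presumably the custom modeling file distributed with the haoranxu/X-ALMA repository and must be importable (e.g. placed in your working directory). A sketch of the missing steps:
```
# Build the prompt as in the first example before calling model.generate(..., lang="zh").
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
chat_style_prompt = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(chat_style_prompt, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt", padding=True).input_ids.cuda()

# ... run the generate call shown above, then decode:
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```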