haoranxu committed on
Commit b19765d
1 Parent(s): a408de8

Update README.md

Files changed (1)
  1. README.md +42 -0
README.md CHANGED
---
license: mit
---

**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model that adopts a new translation-model paradigm: it is first fine-tuned on monolingual data and then further optimized with high-quality parallel data. This two-step fine-tuning process yields strong translation performance.

We release four translation models presented in the paper:
- **ALMA-7B**: full-weight fine-tuning of LLaMA-2-7B on 20B monolingual tokens, followed by **full-weight** fine-tuning on human-written parallel data.
- **ALMA-7B-LoRA**: full-weight fine-tuning of LLaMA-2-7B on 20B monolingual tokens, followed by **LoRA** fine-tuning on human-written parallel data.
- **ALMA-13B**: full-weight fine-tuning of LLaMA-2-13B on 12B monolingual tokens, followed by **full-weight** fine-tuning on human-written parallel data.
- **ALMA-13B-LoRA** (our best system): full-weight fine-tuning of LLaMA-2-13B on 12B monolingual tokens, followed by **LoRA** fine-tuning on human-written parallel data (a sketch of this second stage follows the list).
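
To make the second stage concrete, here is a minimal sketch of attaching a LoRA adapter to the monolingual-fine-tuned base model with `peft` before training on parallel data. The `LoraConfig` values and target modules below are illustrative placeholders, not the settings used in the paper; the actual training recipe lives in the GitHub repository linked at the end.
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Stage 1 output: the released monolingual-fine-tuned checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
)

# Illustrative LoRA configuration (hyperparameters are placeholders, not the paper's).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections for LLaMA-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Stage 2 then fine-tunes `model` on human-written parallel data
# (e.g. with the Hugging Face Trainer); see the GitHub repository for the real recipe.
```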

Model checkpoints are released on Hugging Face:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |

Note that the base models linked for the `*-LoRA` entries are LLaMA-2 checkpoints fine-tuned on monolingual data (20B tokens for the 7B model and 12B tokens for the 13B model); the LoRA weights are loaded on top of them.
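
For the full-weight checkpoints (ALMA-7B and ALMA-13B) no adapter is needed; a minimal sketch of loading one directly, mirroring the LoRA quick start below but without the `PeftModel` step:
```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Full-weight checkpoint: load it directly, no LoRA adapter required.
model = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B", padding_side="left")
```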

A quick start for translating with our best system (ALMA-13B-LoRA). The example below translates "我爱机器翻译。" ("I love machine translation.") into English:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

# Load the base model and attach the LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
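
The decoded output above includes the prompt text. If you only want the generated translation, a small snippet like the following (not part of the original quick start) slices off the prompt tokens before decoding:
```python
# Keep only the newly generated tokens (everything after the prompt).
new_tokens = generated_ids[:, input_ids.shape[1]:]
translation = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0].strip()
print(translation)
```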

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).