atsuki-yamaguchi committed
Commit df6f324
1 Parent(s): 6e2c3a5

Upload README.md with huggingface_hub

Files changed (1): README.md +44 -0
README.md ADDED
---
license: gemma
language:
- si
base_model: google/gemma-2-9b
library_name: transformers
---
# Gemma2 9B for Sinhala: 1000 target vocabulary size + Random target vocabulary initialization + T&B2LS/MTP/512 training

This model is built on top of Gemma2 9B and adapted for Sinhala using 30K target-language sentences sampled from CC-100.
## Model Details

* **Vocabulary**: This model adds 1,000 target-language (Sinhala) tokens to the base vocabulary.
* **Target vocabulary initialization**: The embedding weights for the added target tokens were initialized randomly (see the sketch after this list).
* **Training**: This model was further pre-trained on 30K target-language sentences sampled from CC-100, using the T&B2LS/MTP/512 strategies introduced in the paper.
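To make the random-initialization step concrete, here is a minimal sketch using Hugging Face Transformers. It is an illustration, not the authors' released code: the token list and the 0.02 standard deviation are assumptions for the example (`resize_token_embeddings` already initializes new rows from the model's init distribution by default; the explicit re-initialization just makes the "random" choice visible).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Start from the base model (the real recipe adds 1,000 Sinhala tokens).
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

# Hypothetical new target-language tokens, for illustration only.
new_tokens = ["සිංහල", "වචන"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to cover the enlarged vocabulary.
model.resize_token_embeddings(len(tokenizer))

# Randomly initialize the rows for the newly added tokens.
# Gemma-2 ties input and output embeddings, so this covers both.
with torch.no_grad():
    input_embeddings = model.get_input_embeddings().weight
    input_embeddings[-num_added:].normal_(mean=0.0, std=0.02)
```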
## Model Description

- **Language:** Sinhala
- **License:** Gemma Terms of Use
- **Fine-tuned from model:** google/gemma-2-9b

## Model Sources

- **Repository:** https://github.com/gucci-j/lowres-cve
- **Paper:** https://arxiv.org/abs/2406.11477
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the adapted model and its extended tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-si-30K-1000-rand"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/gemma-2-9b-si-30K-1000-rand"
)
```
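Once loaded, the model behaves like any other causal LM in Transformers. A minimal generation call, continuing from the snippet above, might look like the following; the Sinhala prompt is an arbitrary example, not from the model card:

```python
# Tokenize a short Sinhala prompt and generate a continuation.
inputs = tokenizer("සිංහල", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```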