---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- en
- he
library_name: transformers
---
# Hebrew-Gemma-11B-V2

> **TL;DR:** Continued pretraining of the previous Hebrew base model, and bug fixes.

### Base Models:
- **07.03.2024:** [Hebrew-Gemma-11B](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B)
- **16.03.2024:** [Hebrew-Gemma-11B-V2](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-V2)

### Instruct Models:
- **07.03.2024:** [Hebrew-Gemma-11B-Instruct](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-Instruct)

Hebrew-Gemma-11B is an open-source Hebrew/English pretrained generative large language model (LLM) with 11 billion parameters, based on the Gemma-7B architecture from Google.

It continues the pretraining of gemma-7b, extended to a larger scale and trained on 3B additional tokens of both English and Hebrew text data.

The resulting model, Hebrew-Gemma-11B, is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.

28
+ ### Terms of Use
29
+
30
+ As an extention of Gemma-7B, this model is subject to the original license and terms of use by Google.
31
+
32
+ **Gemma-7B original Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
33
+
### Usage

Below are some code snippets showing how to quickly get started running the model.

First make sure to `pip install -U transformers`, then copy the snippet from the section relevant for your use case.

### Running on CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B")
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Gemma-11B")

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
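
By default, `generate()` returns a short, greedy continuation. For longer or more varied output you can pass standard `transformers` generation parameters; the values below are illustrative, not tuned recommendations for this model.

```python
outputs = model.generate(
    **input_ids,
    max_new_tokens=128,  # cap the length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative values, not tuned for this model
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```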

### Running on GPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B")
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Gemma-11B", device_map="auto")

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
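
An 11B-parameter model takes roughly 44 GB for the weights alone in full fp32 precision, so on most GPUs you will want to load it in half precision instead. A minimal sketch, assuming a GPU with bfloat16 support:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B")
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Gemma-11B",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # ~halves memory vs. fp32; use torch.float16 on older GPUs
)
```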

### Running with 4-Bit precision

4-bit loading additionally requires the `bitsandbytes` and `accelerate` packages (`pip install -U bitsandbytes accelerate`).

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B")
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Gemma-11B",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
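
For interactive use you may prefer to stream tokens to stdout as they are generated instead of waiting for the full output. A minimal sketch using the built-in `TextStreamer`, reusing `model`, `tokenizer`, and `input_ids` from any of the snippets above:

```python
from transformers import TextStreamer

# Prints decoded tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**input_ids, streamer=streamer, max_new_tokens=128)
```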

### Benchmark Results

- Coming Soon!

### Notice

Hebrew-Gemma-11B is a pretrained base model and therefore does not have any moderation mechanisms.

### Authors

- Trained by Yam Peleg.
- In collaboration with Jonathan Rouach and Arjeo, inc.