Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


SystemGemma2-2b-it - GGUF
- Model creator: https://huggingface.co/piotr25691/
- Original model: https://huggingface.co/piotr25691/SystemGemma2-2b-it/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SystemGemma2-2b-it.Q2_K.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q2_K.gguf) | Q2_K | 1.15GB |
| [SystemGemma2-2b-it.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [SystemGemma2-2b-it.IQ3_S.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [SystemGemma2-2b-it.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [SystemGemma2-2b-it.IQ3_M.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [SystemGemma2-2b-it.Q3_K.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q3_K.gguf) | Q3_K | 1.36GB |
| [SystemGemma2-2b-it.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [SystemGemma2-2b-it.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [SystemGemma2-2b-it.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [SystemGemma2-2b-it.Q4_0.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q4_0.gguf) | Q4_0 | 1.52GB |
| [SystemGemma2-2b-it.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [SystemGemma2-2b-it.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [SystemGemma2-2b-it.Q4_K.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q4_K.gguf) | Q4_K | 1.59GB |
| [SystemGemma2-2b-it.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [SystemGemma2-2b-it.Q4_1.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q4_1.gguf) | Q4_1 | 1.64GB |
| [SystemGemma2-2b-it.Q5_0.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q5_0.gguf) | Q5_0 | 1.75GB |
| [SystemGemma2-2b-it.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [SystemGemma2-2b-it.Q5_K.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q5_K.gguf) | Q5_K | 1.79GB |
| [SystemGemma2-2b-it.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [SystemGemma2-2b-it.Q5_1.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q5_1.gguf) | Q5_1 | 1.87GB |
| [SystemGemma2-2b-it.Q6_K.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q6_K.gguf) | Q6_K | 2.0GB |
| [SystemGemma2-2b-it.Q8_0.gguf](https://huggingface.co/RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf/blob/main/SystemGemma2-2b-it.Q8_0.gguf) | Q8_0 | 2.59GB |

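Each entry above is a single-file GGUF quant, so it can be loaded by any llama.cpp-based runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the choice of the Q4_K_M file, the context length, and the prompt are illustrative assumptions, and any file from the table can be substituted:

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# download one quant from this repo (Q4_K_M picked as an example trade-off)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/piotr25691_-_SystemGemma2-2b-it-gguf",
    filename="SystemGemma2-2b-it.Q4_K_M.gguf",
)

# load the GGUF file; n_ctx is an illustrative context size
llm = Llama(model_path=gguf_path, n_ctx=4096)

# the chat handler applies the chat template stored in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write me a poem about Machine Learning."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
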

Original model description:
---
base_model: google/gemma-2-2b-it
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
---

# SystemGemma2 2B model card
This is a version of [Gemma 2 2B](https://huggingface.co/google/gemma-2-2b-it) with system prompts enabled.

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
122
+
123
+ #### Running the model on a single / multi GPU
124
+
125
+ ```python
126
+ # pip install accelerate
127
+ from transformers import AutoTokenizer, AutoModelForCausalLM
128
+ import torch
129
+
130
+ tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
131
+ model = AutoModelForCausalLM.from_pretrained(
132
+ "google/gemma-2-2b-it",
133
+ device_map="auto",
134
+ torch_dtype=torch.bfloat16,
135
+ )
136
+
137
+ input_text = "Write me a poem about Machine Learning."
138
+ input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
139
+
140
+ outputs = model.generate(**input_ids, max_new_tokens=32)
141
+ print(tokenizer.decode(outputs[0]))
142
+ ```
143
+
144
+ You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
145
+ ```python
146
+ messages = [
147
+ {"role": "user", "content": "Write me a poem about Machine Learning."},
148
+ ]
149
+ input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
150
+
151
+ outputs = model.generate(**input_ids, max_new_tokens=256)
152
+ print(tokenizer.decode(outputs[0]))
153
+ ```
154
+
<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision.

You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

#### Running the model through a CLI

The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:

```shell
local-gemma --model 2b --preset speed
```

#### Quantized Versions through `bitsandbytes`

<details>
<summary>
Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

<details>
<summary>
Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

#### Advanced Usage

<details>
<summary>
Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The Gemma 2 model can be run up to 6x faster by leveraging torch compile.

Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model_id = "google/gemma-2-2b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.

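If you do build the prompt by hand, a minimal sketch like the one below reproduces the user/model turn structure printed above (this is only an illustration of the format, not an official helper):

```python
# reproduce the prompt string shown above without apply_chat_template
chat = [
    {"role": "user", "content": "Write a hello world program"},
]

prompt = "<bos>"
for turn in chat:
    prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
prompt += "<start_of_turn>model\n"  # generation prompt so the model answers next

print(prompt)
```
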
After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

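Because this SystemGemma2 variant advertises system prompt support, a `system` turn can be passed through the same chat-template path. The snippet below is a hedged sketch: it assumes the `piotr25691/SystemGemma2-2b-it` tokenizer's template accepts the `system` role (stock Gemma 2 templates reject it), and the prompt contents are only examples:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# assumption: this variant's chat template accepts a "system" role
model_id = "piotr25691/SystemGemma2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

chat = [
    {"role": "system", "content": "You are a terse assistant that answers in one sentence."},
    {"role": "user", "content": "What is a GGUF file?"},
]
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
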
### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.

These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety
  policies including child sexual abuse and exploitation, harassment, violence
  and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
  datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
  the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
  biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.

#### Gemma 2.0

| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model, see the
    [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring
  (using evaluation metrics, human review) and the exploration of de-biasing
  techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509