GGUF models for gemma2.java

Pure .gguf Q4_0 and Q8_0 quantizations of Gemma 2 models, ready to be consumed by gemma2.java.
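
For example, a minimal run could look like the following (a sketch: the launcher class and flags are assumptions modeled on the gemma2.java README conventions; verify them there before use):

```
# Illustrative invocation; consult the gemma2.java README for the exact CLI.
jbang Gemma2.java --model ./Gemma-2-2B-Instruct-Q4_0.gguf \
                  --prompt "Why is the sky blue?"
```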

In the wild, Q8_0 quantizations are generally fine, but Q4_0 quantizations are rarely pure: for example, the output.weight tensor is often quantized with Q6_K instead of Q4_0.
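
To check whether a given .gguf is actually pure, the per-tensor quantization types can be listed with the gguf-dump tool shipped with llama.cpp's gguf-py package (an assumption: the tool name and output format may vary across gguf versions):

```
# Install the gguf Python package, which provides the gguf-dump script.
pip install gguf
# In a pure Q4_0 file, every 2D weight tensor should report Q4_0;
# 1D tensors such as norms are left unquantized (F32).
gguf-dump ./Gemma-2-2B-Instruct-Q4_0.gguf
```
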
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:

```
./llama-quantize --pure ./Gemma-2-2B-Instruct-F32.gguf ./Gemma-2-2B-Instruct-Q4_0.gguf Q4_0
```
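
If a high-precision source .gguf is not at hand, one can typically be produced from the original Hugging Face checkpoint with llama.cpp's converter script (named convert_hf_to_gguf.py in recent checkouts; older releases called it convert-hf-to-gguf.py). The local checkpoint path below is illustrative:

```
# Convert the original checkpoint to an F32 GGUF, then quantize it
# with llama-quantize as shown above.
python convert_hf_to_gguf.py path/to/gemma-2-2b-it \
    --outtype f32 --outfile ./Gemma-2-2B-Instruct-F32.gguf
```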

Gemma Model Card

Model Page: Gemma

This model card corresponds to the 2B instruct version of the Gemma 2 model in GGUF format.

You can also visit the model card of the 2B pretrained v2 model in GGUF format.

Model Information

Summary description and brief definition of inputs and outputs.

Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
