Custom GGUF quants of Google's [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the output tensors are quantized to Q8_0 or kept at F32, while the embeddings are kept at F32. 🧠🔥🚀
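Quants with this layout can be produced with llama.cpp's `llama-quantize` tool, which allows overriding the types of the output tensor and the token embeddings independently of the base quantization. A minimal sketch, assuming a local llama.cpp build and an F32 GGUF of the model (file names and the Q4_K_M base type are placeholders, not the exact recipe used here):

```shell
# Sketch: keep token embeddings at F32 and the output tensor at Q8_0,
# while quantizing the rest of the weights to the chosen base type.
./llama-quantize \
  --token-embedding-type f32 \
  --output-tensor-type q8_0 \
  gemma-2-2b-it-f32.gguf \
  gemma-2-2b-it-custom.gguf \
  Q4_K_M
```

Keeping the embeddings and output tensor at higher precision costs a little file size but tends to preserve quality, since these tensors are especially sensitive to quantization error.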

Notes: A great smol LLM for on-device inference on mobile devices. 😋