Custom GGUF quants of Google’s gemma-2-2b-it, with the output tensors quantized to Q8_0 or kept at F32, and the embeddings kept at F32. 🧠🔥🚀
Notes: A great SMOL LLM for on-device inference on mobile. 😋