Custom GGUF quants of Google's [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), where the output tensors are quantized to Q8_0 while the embeddings are kept at F32. 🧠🔥🚀
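A quant like this can be produced with llama.cpp's `llama-quantize` tool, which supports per-tensor-class overrides via `--output-tensor-type` and `--token-embedding-type`. A minimal sketch (file names and the Q4_K_M base type are placeholders, not the exact recipe used here):

```shell
# Sketch: quantize most weights to a base type (Q4_K_M here as an example),
# while forcing the output tensor to Q8_0 and keeping token embeddings at F32.
# Input/output filenames are illustrative placeholders.
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type f32 \
  gemma-2-2b-it-f32.gguf gemma-2-2b-it-Q4_K_M.gguf Q4_K_M
```

Keeping embeddings at full precision and the output tensor at Q8_0 trades a modest size increase for better fidelity in the layers most sensitive to quantization error.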