---
license: mit
language:
  - en
pipeline_tag: text-generation
---

# gemma-2-2b-it-GGUF

My own (ZeroWw) quantizations. The output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
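
A recipe like this can be reproduced with llama.cpp's `llama-quantize` tool, which supports per-tensor type overrides for the output and token-embedding tensors. A minimal sketch is shown below; the file names are placeholders, not the exact command used for this repo:

```bash
# Start from an f16 GGUF export of the model, then requantize it while
# pinning the output and token-embedding tensors to f16.
# File names are placeholders for your local paths.
./llama-quantize --allow-requantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  gemma-2-2b-it.f16.gguf gemma-2-2b-it.f16.q6.gguf Q6_K
```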

Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, yet they perform as well as the pure f16 model.
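
The resulting files load like any other GGUF, for example with llama.cpp's `llama-cli`; the model file name below is again a placeholder:

```bash
# Run the f16/q6_k variant with llama.cpp's CLI (file name is a placeholder).
./llama-cli -m gemma-2-2b-it.f16.q6.gguf -p "Write a haiku about quantization." -n 128
```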

Updated on: Thu Aug 01, 10:58:32