Gemma-2-27b-it ?
#2
by
VlSav
- opened
Hi! Thanks for the quantization!
Is there any chance you will do the same for the 27B Gemma 2?
@VlSav We uploaded https://huggingface.co/ModelCloud/gemma-2-27b-it-gptq-4bit, please check the inference examples.
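For anyone landing here, a minimal loading sketch (not the repo's official example) assuming the standard Transformers GPTQ path, which requires the GPTQ kernels (e.g. the `optimum`/`auto-gptq` or `gptqmodel` integration) to be installed alongside `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/gemma-2-27b-it-gptq-4bit"

# Transformers detects the GPTQ quantization config stored in the
# checkpoint and dequantizes on the fly via the installed GPTQ backend.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Gemma-2-it is a chat model, so use the tokenizer's chat template.
messages = [{"role": "user", "content": "Briefly explain GPTQ quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The repo's model card may show a different entry point (e.g. loading directly through ModelCloud's own library), so treat the above as one of several possible paths.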
@lrl-modelcloud Thanks a lot! It's working fine so far.
VlSav
changed discussion status to
closed