GGUF versions of the following model: https://huggingface.co/mridul3301/BioMistral-7B-finetuned

Three quantization formats:
- fp8
- fp16
- fp32
Converted the safetensors weights to GGUF for CPU inference using llama_cpp.
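A minimal sketch of loading one of these GGUF files with the `llama_cpp` Python bindings (installed via `pip install llama-cpp-python`). The file names and the prompt are assumptions based on the three formats listed above, not names confirmed by the repository.

```python
# Sketch: run a GGUF file on CPU with llama-cpp-python.
# File names below are assumed from the listed fp8/fp16/fp32 formats.
def gguf_filename(precision: str) -> str:
    """Return the assumed GGUF file name for a given precision label."""
    if precision not in {"fp8", "fp16", "fp32"}:
        raise ValueError(f"unknown precision: {precision}")
    return f"BioMistral-7B-finetuned.{precision}.gguf"

if __name__ == "__main__":
    from llama_cpp import Llama  # provided by llama-cpp-python

    # Load the (assumed) fp16 file; n_ctx sets the context window.
    llm = Llama(model_path=gguf_filename("fp16"), n_ctx=2048)
    out = llm("Question: What is hypertension?\nAnswer:", max_tokens=64)
    print(out["choices"][0]["text"])
```

Smaller formats trade accuracy for lower memory use, so fp16 is a common middle ground for 7B models on CPU.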