Quantized versions of https://huggingface.co/allenai/OLMo-7B-0424-hf
NB: Q8_K is not supported by default llama.cpp; use Q8_0 instead.
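For reference, here is a minimal sketch of pulling one of the GGUF files from this repository and running it with llama-cpp-python (a Python binding for llama.cpp). The quantized filename below is an assumed example; check the repository's file list for the actual names and pick the quantization you want.

```python
# Minimal sketch: download a quantized GGUF from the Hub and run it locally.
# The filename is an assumed example -- verify it against the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="aifoundry-org/OLMo-7B-0424-hf-Quantized",
    filename="OLMo-7B-0424-hf-Q8_0.gguf",  # assumed name; Q8_0 works with stock llama.cpp
)

# Load the model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Open language models are", max_tokens=64)
print(out["choices"][0]["text"])
```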
Bits per weight vs. size plot:
TODO: readme