16-bit GGUF version of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
For quantized versions, see https://huggingface.co/models?search=thebloke/llama-2-7b-chat
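A minimal usage sketch with `huggingface_hub` and `llama-cpp-python`; the exact GGUF filename in this repo is an assumption, so check the repository's file listing and adjust before running.

```python
# Minimal sketch: download the 16-bit GGUF file and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF file from the Hub (the filename below is hypothetical).
model_path = hf_hub_download(
    repo_id="pcuenq/Llama-2-7b-chat-gguf",
    filename="llama-2-7b-chat.gguf",  # adjust to the actual file in the repo
)

# Load the model; 16-bit weights for a 7B model need roughly 13-14 GB of memory.
llm = Llama(model_path=model_path, n_ctx=2048)

# Llama-2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "<s>[INST] Summarize what the GGUF format is used for. [/INST]"
output = llm(prompt, max_tokens=200)
print(output["choices"][0]["text"])
```

If memory is a constraint, one of the quantized conversions linked above is usually the more practical choice.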