Questions about Ollama: Running Hugging Face GGUF models
Hello everyone,
1. Recently, I heard that Hugging Face GGUF models can be run directly with Ollama commands. However, when I check model pages that have GGUF files, the "Use this model" dropdown only shows options like llama.cpp, LM Studio, Jan, and vLLM, but there's no option for Ollama. Why is that? (The command pattern I mean is sketched below.)
I also went to the Hugging Face local apps settings and enabled the checkbox for Ollama, but the issue persists. Can anyone explain why this happens?
2. Additionally, some model pages only provide .safetensors files. Can these still be used with Ollama for running the models directly?
E.g. https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/tree/main
Thank you!
@NCGWRjason Looking at 1.
Which 1?
Sorry, I don’t quite understand your response.
What I meant is, why can't I see the Ollama option in the "Use this model" dropdown list? It doesn't appear for me, unlike in the dropdowns shown in other people's YouTube videos.
Thank you!
Hi @NCGWRjason, "1" refers to the first numbered item in your original post (there's a second question about safetensors files). There's a bug that the team is fixing. Thanks a lot for your report!
Thank you for your reply. So is the absence of the Ollama option in the "Use this model" dropdown just a webpage bug?
Also, some model pages only provide .safetensors files. Can these still be used with Ollama for running the models directly?
For example: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/tree/main
Thanks!
> Also, some model pages only provide .safetensors files. Can these still be used with Ollama for running the models directly?
> For example: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/tree/main
Hi there, no, they cannot, since Ollama doesn't support safetensors files directly. However, you can look for a corresponding GGUF conversion on the Hub via the search, e.g. https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF
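As a side note, once you've found a GGUF repo you can also pick a specific quantization by appending it as a tag (a sketch; the Q4_K_M tag here is an assumption, so check the repo's file list for the quants it actually ships):

```
# Pull and run the default quantization of the GGUF repo
ollama run hf.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF

# Or request a specific quantization as a tag (must match a file in the repo)
ollama run hf.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF:Q4_K_M
```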
@NCGWRjason the bug preventing Ollama from being displayed in the list should be fixed now
Thank you very much! I can see the Ollama icon now.
However, when I try to download this model via CMD, I encounter the following issue.
Could you help me understand why this happens?
I want to download the only GGUF file in this model repo:
https://huggingface.co/taide/TAIDE-LX-7B-Chat-4bit/tree/main
Do I need any special authentication to download GGUF files via CMD?
Thank you!
```
ollama run hf.co/taide/TAIDE-LX-7B-Chat-4bit
pulling manifest
Error: pull model manifest: Get "Authentication%20required?nonce=8DcJxKgCWG_mWK7LUOnfsw&scope=&service=&ts=1732111019": unsupported protocol scheme ""
```
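In case it helps anyone who hits the same error: the TAIDE repo is gated (you have to accept its license on the model page), and the Hugging Face documentation on running GGUF models with Ollama says that gated or private repos require adding your Ollama SSH public key to your Hugging Face account. A sketch of the steps, with the key path assumed from the docs (Linux service installs may keep it under /usr/share/ollama/.ollama/ instead):

```
# Print the Ollama public key so it can be copied
cat ~/.ollama/id_ed25519.pub

# Add the printed key under SSH keys at https://huggingface.co/settings/keys,
# accept the model's license on its page on the Hub, then retry:
ollama run hf.co/taide/TAIDE-LX-7B-Chat-4bit
```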