FIM with Ollama

by danielus

Does this model support FIM when used with Continue.dev in VS Code? I tried both the Base and Instruct versions, but neither seems to work.

FYI, it works, but it currently needs a workaround:
https://huggingface.co/bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/discussions/3
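If you want to sanity-check FIM outside the editor first, here is a minimal sketch using Ollama's `/api/generate` endpoint in raw mode, assuming the documented Qwen2.5-Coder FIM tokens (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); the exact model tag is an assumption, so adjust it to whatever base variant you pulled:

```bash
# Sketch: send a raw FIM-formatted prompt straight to Ollama,
# bypassing the chat template. The model tag below is an assumption.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b-base",
  "prompt": "<|fim_prefix|>def add(a, b):\n    <|fim_suffix|>\n\nprint(add(2, 3))<|fim_middle|>",
  "raw": true,
  "stream": false,
  "options": { "num_predict": 64 }
}'
```

If the setup is working, the response should complete the middle of the snippet (something like `return a + b`); if you get chat-style output instead, the FIM tokens are not being handled.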

Hopefully this will be fixed soon, either through a workaround in the GGUFs or on the llama.cpp side; ideally, upstream Qwen would fix their configs.
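In the meantime, if you're testing from Continue.dev, note that the autocomplete model is configured separately from the chat model. A minimal `config.json` sketch (the model tag is an assumption; use a base variant for FIM rather than Instruct):

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder 7B (base)",
    "provider": "ollama",
    "model": "qwen2.5-coder:7b-base"
  }
}
```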
