This model is shown as a quantization of the base Qwen/Qwen2.5-Coder-32B, not of Qwen/Qwen2.5-Coder-32B-Instruct

#1 opened by EmilPi

Because of this, I couldn't find any exllamaV2 quants of the Instruct model except ones from an unknown uploader.

Whoops.. that's on my script, sorry about that :D
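For anyone hitting the same thing: the quantization listing comes from the `base_model` field in the quant repo's model-card metadata, which the Hub uses to link the repo under the base model's "quantizations" tree. A minimal sketch of a fix with `huggingface_hub` (this is not the actual upload script, and the repo id below is a placeholder):

```python
# Sketch: point the quant repo's model card at the Instruct base model
# so the Hub lists it under that model's quantizations.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="your-username/Qwen2.5-Coder-32B-Instruct-exl2",  # placeholder repo id
    metadata={
        "base_model": "Qwen/Qwen2.5-Coder-32B-Instruct",  # was Qwen/Qwen2.5-Coder-32B
        "base_model_relation": "quantized",
    },
    overwrite=True,  # replace the existing (wrong) base_model entry
)
```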
