Create quant_config.json
#82
opened by nnechm
Beginner here :), but I think this would also make it possible to run a quantized Mistral AI model with AutoAWQ, based on the vLLM docs.
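For context, this is roughly how I'm trying to load the model; the model path is just a placeholder for a locally AWQ-quantized checkpoint, and the call is only a sketch based on the vLLM docs:

```python
from vllm import LLM, SamplingParams

# Placeholder path to a locally AWQ-quantized Mistral checkpoint.
llm = LLM(model="/models/mistral-7b-awq", quantization="awq")

# Quick smoke test to confirm the quantized model serves correctly.
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```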
This is the error I see:
```
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader.py", line 67, in get_model
    quant_config = get_quant_config(model_config.quantization,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/weight_utils.py", line 114, in get_quant_config
    raise ValueError(f"Cannot find the config file for {quantization}")
```
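As far as I can tell, `get_quant_config` fails because there is no quantization config file (e.g. `quant_config.json`) next to the model weights. Below is a sketch of how I'd create one; the key names and values follow what AutoAWQ typically writes, but they are assumptions on my part and the directory path is a placeholder, so please adjust them to match the settings your quantization run actually used:

```python
import json
from pathlib import Path

# Assumed AutoAWQ-style quantization settings; verify against your own run.
quant_config = {
    "zero_point": True,    # asymmetric quantization with zero points
    "q_group_size": 128,   # group size used during AWQ quantization
    "w_bit": 4,            # weight bit-width
    "version": "GEMM",     # AWQ kernel variant
}

# Placeholder path to the quantized model directory vLLM loads from.
model_dir = Path("/models/mistral-7b-awq")
(model_dir / "quant_config.json").write_text(json.dumps(quant_config, indent=2))
```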