Using Quantized Models with the vLLM CPU Backend
Hi, I want to use your quantized models (specifically Meta-Llama-3.1-70B-Instruct-quantized.w8a8) with vLLM, but with the CPU or OpenVINO backend. The original, unquantized models run fine on the same configuration, but with the quantized model the vLLM backend fails to load and throws the error below. I think this is a model-specific problem, but I cannot identify the cause. Could you help me with this, or try to reproduce the problem on your side?
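
For context, here is a minimal sketch of the kind of invocation that triggers the error. The repository id and arguments are illustrative, assuming vLLM was built for CPU (VLLM_TARGET_DEVICE=cpu) and that the checkpoint is the w8a8 quantized model mentioned above:

```python
from vllm import LLM, SamplingParams

# Minimal sketch: load the quantized checkpoint on a CPU-only vLLM build.
# The model id and dtype below are examples, not an exact reproduction recipe.
llm = LLM(
    model="neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w8a8",
    dtype="bfloat16",  # CPU backend typically runs in bfloat16
)

outputs = llm.generate(
    ["Hello, how are you?"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```

Loading fails before generation is reached; the traceback is below.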
```
Exception in worker VllmWorkerProcess while processing method load_model: , Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
    output = executor(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/cpu_worker.py", line 217, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/cpu_model_runner.py", line 125, in load_model
    self.model = get_model(model_config=self.model_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
    return loader.load_model(model_config=model_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 341, in load_model
    model = _initialize_model(model_config, self.load_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 174, in _initialize_model
    quant_config=_get_quantization_config(model_config, load_config),
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 98, in _get_quantization_config
    capability = current_platform.get_device_capability()
  File "/usr/local/lib/python3.10/dist-packages/vllm/platforms/interface.py", line 28, in get_device_capability
    raise NotImplementedError
NotImplementedError
```