runtime error
bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/user/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
adapter_model.safetensors: 100%|██████████| 6.30M/6.30M [00:00<00:00, 49.1MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 32, in <module>
    ai_roles_obj[ai_role_en] = ChatHaruhi(
  File "/home/user/app/ChatHaruhi/ChatHaruhi.py", line 64, in __init__
    self.llm, self.tokenizer = self.get_models(llm)
  File "/home/user/app/ChatHaruhi/ChatHaruhi.py", line 266, in get_models
    return (Qwen118k2GPT(model = "silk-road/" + model_name), Qwen_tokenizer)
  File "/home/user/app/ChatHaruhi/Qwen118k2GPT.py", line 57, in __init__
    self.model, self.tokenizer = initialize_Qwen2LORA(model)
  File "/home/user/app/ChatHaruhi/Qwen118k2GPT.py", line 17, in initialize_Qwen2LORA
    model_qwen = AutoModelForCausalLM.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3787, in from_pretrained
    model.load_adapter(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/integrations/peft.py", line 222, in load_adapter
    self._dispatch_accelerate_model(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/integrations/peft.py", line 471, in _dispatch_accelerate_model
    dispatch_model(
  File "/home/user/.local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 438, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.