runtime error
Downloading shards: 100%|██████████| 9/9 [01:53<00:00, 12.58s/it]
Loading checkpoint shards:   0%|          | 0/9 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 268, in <module>
    main()
  File "/home/user/app/app.py", line 262, in main
    model, tokenizer = _load_model_tokenizer(args)
  File "/home/user/app/app.py", line 60, in _load_model_tokenizer
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 511, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.cache/huggingface/modules/transformers_modules/Qwen/Qwen-Audio-Chat/a52fffad1964a463791eb6a10d354b6de31a069d/modeling_qwen.py", line 1037, in from_pretrained
    return super().from_pretrained(pretrained_model_name_or_path, *model_args, config=config, cache_dir=cache_dir,
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3471, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 736, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/home/user/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
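The traceback ends inside accelerate's set_module_tensor_to_device: the from_pretrained call in _load_model_tokenizer (app.py, line 60) is placing weight shards onto a CUDA device, which triggers torch's lazy CUDA init on a machine with no NVIDIA driver. Below is a minimal, hedged sketch of loading the same checkpoint entirely on CPU instead; it assumes the Space genuinely has no GPU and that CPU inference is acceptable. The model ID and the need for trust_remote_code are taken from the cached modeling_qwen.py path in the traceback; everything else (dtype choice, device_map value) is an illustrative assumption, not the app's actual configuration.

    # Sketch only: keep all shards on CPU so accelerate never calls value.to("cuda")
    # and torch never attempts to initialize a (missing) NVIDIA driver.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen-Audio-Chat"

    # trust_remote_code is needed because the checkpoint ships its own modeling_qwen.py
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="cpu",            # place every shard on CPU, not a CUDA device
        torch_dtype=torch.float32,   # CPU-friendly dtype; avoids half-precision CUDA kernels
        trust_remote_code=True,
    ).eval()

Alternatively, assigning GPU hardware to the Space (so that an NVIDIA driver is actually present) would let the original GPU-targeting load succeed unchanged.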