runtime error
Exit code: 1. Reason:

Initializing Chat
/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading VIT
100%|██████████| 1.89G/1.89G [00:12<00:00, 163MB/s]
Loading VIT Done
Loading Q-Former
Traceback (most recent call last):
  File "/home/user/app/app.py", line 67, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "/home/user/app/video_llama/models/video_llama.py", line 395, in from_config
    model = cls(
  File "/home/user/app/video_llama/models/video_llama.py", line 114, in __init__
    self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False, use_auth_token=os.environ["API_TOKEN"])
  File "/usr/local/lib/python3.10/os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'API_TOKEN'
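The traceback shows the app dying on `os.environ["API_TOKEN"]`, which raises `KeyError` when the variable is absent, i.e. no `API_TOKEN` secret is configured for the Space. A minimal sketch of a more defensive lookup, assuming the fix is simply to fail early with an actionable message (the variable name `API_TOKEN` comes from the traceback; the helper `get_api_token` and its error text are hypothetical):

```python
import os

def get_api_token() -> str:
    """Read the API_TOKEN environment variable, failing with a clear message
    instead of a bare KeyError when it is not set."""
    token = os.environ.get("API_TOKEN")
    if token is None:
        # Hypothetical guidance: on a Hugging Face Space the token is set
        # under Settings -> Variables and secrets.
        raise RuntimeError(
            "API_TOKEN is not set; add it as a secret before starting the app."
        )
    return token
```

The call in `video_llama.py` line 114 could then pass `use_auth_token=get_api_token()` so a missing secret surfaces as a readable startup error rather than a raw `KeyError`.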