Runtime error

start app
Downloading shards: 100%|██████████| 2/2 [00:19<00:00, 9.93s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.41it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 27, in <module>
    pipe = load_pipeline(DEFAULT_MODEL_NAME)
  File "/home/user/app/app.py", line 20, in load_pipeline
    return pipeline(
  File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 994, in pipeline
    tokenizer = AutoTokenizer.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 915, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2275, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'ivrit-ai/whisper-13-v2-e2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'ivrit-ai/whisper-13-v2-e2' is the correct path to a directory containing all relevant files for a WhisperTokenizerFast tokenizer.
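The traceback shows the model weights downloading fine but `AutoTokenizer.from_pretrained` failing, which usually means the `ivrit-ai/whisper-13-v2-e2` repo is missing the tokenizer files. A minimal sketch of a possible workaround, assuming the model is a Whisper fine-tune whose tokenizer matches a base OpenAI checkpoint (the base checkpoint name and the shape of `load_pipeline` are illustrative assumptions, not the space's actual `app.py`):

```python
# Hypothetical workaround sketch: borrow the tokenizer from an assumed
# base checkpoint and pass it to pipeline() explicitly, so AutoTokenizer
# never has to resolve it from the fine-tuned repo.
MODEL_NAME = "ivrit-ai/whisper-13-v2-e2"
# Assumption: the fine-tune is tokenizer-compatible with this base model.
BASE_TOKENIZER = "openai/whisper-large-v2"

def load_pipeline(model_name: str = MODEL_NAME):
    # Imported lazily so the sketch itself stays lightweight to load.
    from transformers import WhisperTokenizerFast, pipeline

    # Load the tokenizer from the base checkpoint instead of the
    # fine-tuned repo, which appears to lack the tokenizer files.
    tokenizer = WhisperTokenizerFast.from_pretrained(BASE_TOKENIZER)
    return pipeline(
        "automatic-speech-recognition",
        model=model_name,
        tokenizer=tokenizer,
    )
```

Alternatively, uploading the tokenizer files (e.g. `tokenizer.json` and the associated config files) to the model repo would let `AutoTokenizer` resolve it without any code change.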
