runtime error

Exit code: 1. Reason:
A new version of the following files was downloaded from https://huggingface.co/DrChamyoung/PartnerAIPRO:
- configuration_phi3_v.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co/DrChamyoung/PartnerAIPRO:
- modeling_phi3_v.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]
Downloading shards:  50%|█████     | 1/2 [00:13<00:13, 13.50s/it]
Downloading shards: 100%|██████████| 2/2 [00:23<00:00, 11.18s/it]
Downloading shards: 100%|██████████| 2/2 [00:23<00:00, 11.53s/it]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    "DrChamyoung/PartnerAIPRO": AutoModelForCausalLM.from_pretrained("DrChamyoung/PartnerAIPRO", trust_remote_code=True, torch_dtype="auto", _attn_implementation="flash_attention_2").cuda().eval()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3769, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1519, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1630, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
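The traceback points at app.py line 10: the model is loaded with _attn_implementation="flash_attention_2", but the flash-attn package is not installed in the Space's environment, so transformers raises the ImportError before the model can be built. Two hedged ways out, assuming a CUDA GPU is available: either install flash-attn per the linked docs (for example by adding it to requirements.txt), or request Flash Attention 2 only when the package is actually importable and otherwise let transformers use its default attention. A minimal sketch of the second option follows; the commented-out revision value is a placeholder, not a real commit from the repo, and is only there to illustrate the "pin a revision" warning from the log.

import importlib.util

from transformers import AutoModelForCausalLM

# Use flash_attention_2 only if the flash-attn package is installed;
# otherwise fall back to the default (eager) attention implementation.
attn_impl = "flash_attention_2" if importlib.util.find_spec("flash_attn") else "eager"

model = AutoModelForCausalLM.from_pretrained(
    "DrChamyoung/PartnerAIPRO",
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation=attn_impl,
    # revision="<commit-hash>",  # pin the remote code files to a known commit (placeholder)
).cuda().eval()

With attn_impl resolving to "eager" on a machine without flash-attn, the same from_pretrained call succeeds and the Space can start; whether Flash Attention 2 is worth installing depends on the GPU the Space runs on.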
