runtime error
wizard-vicuna-13B.ggmlv3.q4_1.bin: 100%|██████████| 8.14G/8.14G [06:27<00:00, 21.0MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--wizard-vicuna-13B-GGML/snapshots/18c48a2979551dbc957dc95638384db5f9f63400/wizard-vicuna-13B.ggmlv3.q4_1.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/tabbed.py", line 25, in <module>
    llm = Llama(model_path=fp, **config["llama_cpp"])
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 365, in __init__
    assert self.model is not None
AssertionError
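The magic number 67676a74 in the log is the ASCII string "ggjt", the header of the legacy GGMLv3 format, while `gguf_init_from_file` expects a file starting with "GGUF". In other words, a GGML-era `.bin` was handed to a GGUF-only build of llama.cpp, so `Llama.__init__` fails its `self.model is not None` assertion. As a minimal sketch (the function name and paths are hypothetical, not from the log), one can check a file's magic bytes before loading:

```python
# Minimal sketch: classify a model file by its 4-byte magic header.
# b"ggjt" (hex 67 67 6a 74) marks legacy GGMLv3; b"GGUF" marks the
# current format that recent llama.cpp / llama-cpp-python builds load.
def detect_model_format(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "gguf"         # loadable by current llama.cpp builds
    if magic == b"ggjt":      # the 67676a74 seen in the log above
        return "ggml-v3"     # legacy; needs an older loader or conversion
    if magic in (b"ggml", b"ggmf"):
        return "ggml-legacy"  # even older pre-v3 GGML variants
    return "unknown"
```

The practical fixes are to download a GGUF quantization of the model instead of the GGML one, or (as an assumption about version history, worth verifying) pin an older llama-cpp-python release from the GGML era that still reads `ggmlv3` files.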