Model Config Error on VLLM?

#3
by rodMetal - opened

Anyone getting this error?

"ValueError: No supported config format found in nvidia/Llama-3.1-Nemotron-70B-Instruct"

I was getting the same error. I switched to this repo instead, and it works fine:

https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF

Not sure what the exact difference between the two repos is, but the -HF one ships the weights in Hugging Face format (safetensors plus a config.json), which is what vLLM looks for; the original repo appears to hold the checkpoint in NVIDIA's NeMo format instead, which vLLM can't parse.
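For anyone hitting this, a minimal sketch of serving the -HF repo with vLLM's CLI (assumes vLLM is installed and you have enough GPU memory for a 70B model; the tensor-parallel value is an example, not something from this thread):

```shell
# Serve the HF-format repo; the original repo lacks the config.json vLLM expects.
# --tensor-parallel-size 4 is an example value: set it to your actual GPU count.
vllm serve nvidia/Llama-3.1-Nemotron-70B-Instruct-HF \
    --tensor-parallel-size 4
```

The same model ID works with the Python API (`LLM(model="nvidia/Llama-3.1-Nemotron-70B-Instruct-HF")`) if you'd rather run offline inference than an API server.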