This fixes the tokenizer config so that the model can be converted using convert-hf-to-gguf.py.

I used this fix to make non-Imatrix quants here: https://huggingface.co/HiroseKoichi/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF
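For anyone reproducing the quants, a minimal sketch of the conversion step with llama.cpp's converter — the model directory path and output filename here are placeholders, and the `--outtype` value is just one example:

```shell
# Clone llama.cpp for the conversion script (path/revision are assumptions)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the patched HF checkout to GGUF; adjust paths to your local copy
python convert-hf-to-gguf.py /path/to/Llama-3-Lumimaid-8B-v0.1-OAS \
    --outfile Llama-3-Lumimaid-8B-v0.1-OAS-f16.gguf \
    --outtype f16
```

Quantized variants can then be produced from the f16 GGUF with llama.cpp's `quantize` tool.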
