Corrected eos token
According to tokenizer_config.json the eos token should be <|im_end|>, and my own testing confirms this: using the wrong token results in infinite generation.
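For anyone who wants to verify this themselves, here is a minimal check with transformers (the repo id is a placeholder for this model):

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual model repo.
tok = AutoTokenizer.from_pretrained("org/model")

# With the corrected tokenizer_config.json this should print
# "<|im_end|> 7"; generation only stops reliably with that id.
print(tok.eos_token, tok.eos_token_id)
```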
GGUFs get broken by this; compare my GGUFs to all the others. I'm guessing transformers overrides the value from special_tokens_map.json and thus functions correctly, whereas llama.cpp's conversion script gets it from config.json.
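To illustrate the discrepancy I mean, here is a rough sketch (placeholder repo id again): transformers resolves the EOS token from the tokenizer files, while, as far as I can tell, the converter takes eos_token_id straight from config.json, so the two can disagree.

```python
import json
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo = "org/model"  # placeholder repo id

# What transformers resolves (from tokenizer_config.json /
# special_tokens_map.json):
tok = AutoTokenizer.from_pretrained(repo)
print("tokenizer eos_token_id:  ", tok.eos_token_id)

# What sits in config.json, which is where (I'm guessing) the
# llama.cpp conversion script picked the value up from:
with open(hf_hub_download(repo, "config.json")) as f:
    cfg = json.load(f)
print("config.json eos_token_id:", cfg.get("eos_token_id"))
```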
Thank you very much for your reply. We updated the tokenizer_config.json file about 6 hours ago; are you still having problems with the model?
Yes, that only addressed the tokenization of <|im_start|>, which was a different issue.
If you use Hugging Face's GGUF viewer you can see that all the other GGUFs have tokenizer.ggml.eos_token_id set to 2 instead of 7, the correct value.
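You can also inspect a local file directly with the gguf Python package; a minimal sketch (the path is a placeholder, and the exact field layout may differ between gguf versions):

```python
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("model.gguf")  # placeholder path
field = reader.get_field("tokenizer.ggml.eos_token_id")

# Scalar metadata lives in the field's memory-mapped parts,
# indexed through field.data.
print(int(field.parts[field.data[0]][0]))  # 2 in the broken files, 7 when fixed
```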
BTW, in case some of the other GGUFs never get updated and someone stumbles upon this PR later: you can use my GGUF Editor to easily set the correct token and download an updated version of the GGUF. :)
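If you'd rather patch it locally instead, here is roughly the same fix sketched with the gguf package (placeholder path; I believe the gguf-set-metadata script that ships with the package does the equivalent from the command line):

```python
from gguf import GGUFReader

# Open read/write: the parts are numpy memmaps, so assigning into
# them patches the file on disk in place.
reader = GGUFReader("model.gguf", "r+")
field = reader.get_field("tokenizer.ggml.eos_token_id")
field.parts[field.data[0]][0] = 7  # the id of <|im_end|>
```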