Update chat template
#1
by CISCai - opened
I know it's a bit of a pain, but could you update the chat template to the latest version now that llama.cpp supports it?
At least you won't have to requantize everything, since I made a handy script that lets you create a new GGUF from the updated tokenizer_config.json file; see the details in the PR. :)
PS: You only have to update the first file in a split GGUF.
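For reference, a sketch of how such a metadata update can be done with llama.cpp's bundled `gguf_new_metadata.py` script (this is an assumption about which script is meant; the exact script and PR are described in the linked discussion, and the file names below are placeholders):

```shell
# Rewrite only the GGUF metadata, copying tensor data as-is.
# --chat-template-config pulls the chat template from a
# tokenizer_config.json (paths here are hypothetical examples).
python gguf-py/scripts/gguf_new_metadata.py \
    --chat-template-config tokenizer_config.json \
    model-00001-of-00002.gguf model-updated-00001-of-00002.gguf
```

Because the chat template lives in the GGUF key-value metadata rather than in the tensor data, only the first file of a split GGUF needs to be rewritten, which is why no requantization is required.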
Finally done, as part of re-quantizing from scratch for the new BPE tokenizer fixes. 🚀
qwp4w3hyb changed discussion status to closed