Fixed tokenizer
#6 opened by bullerwins
Hi!
Would these models need to be re-quantized for the fixed tokenizer?
https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/28/files
Hi @bullerwins, thanks for flagging! In this case there's no need to re-quantize the model weights; we just need to update the tokenizer to make sure we're aligned. I'll do so and close this issue when done! 🤗
Thanks, Álvaro!