Fix eos_token in tokenizer_config.json

#3
by BM-TNG - opened

The eos_token in tokenizer_config.json should likely be "<|endoftext|>" for the base model, to be consistent with config.json and generation_config.json.
Without this fix, text generation does not stop when using fill-in-the-middle prompts.
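
A minimal sketch of how the mismatch can be checked, assuming a hypothetical repo ID (the actual model ID is not named here) and the standard transformers loading API:

```python
# Sketch: compare the tokenizer's eos_token against the eos_token_id that
# config.json / generation_config.json declare for the model.
from transformers import AutoConfig, AutoTokenizer, GenerationConfig

model_id = "org/base-model"  # placeholder; substitute the actual repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)
gen_config = GenerationConfig.from_pretrained(model_id)

# Token string that the model config expects generation to stop on.
expected_eos = tokenizer.convert_ids_to_tokens(config.eos_token_id)

print("tokenizer eos_token:           ", tokenizer.eos_token)
print("config.json eos token:         ", expected_eos)
print("generation_config eos_token_id:", gen_config.eos_token_id)

# If tokenizer.eos_token is not the same token (e.g. not "<|endoftext|>"),
# generate() will keep producing tokens past the intended stopping point.
assert tokenizer.eos_token == expected_eos, "eos_token mismatch between tokenizer and model config"
```

If the assertion fails, updating the "eos_token" entry in tokenizer_config.json to "<|endoftext|>" (as this PR proposes) should bring the three files back into agreement.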

