Tags: Text Generation · Transformers · PyTorch · Safetensors · gpt2 · conversational · text-generation-inference · Inference Endpoints

Add default chat template to tokenizer_config.json

#5 opened by Xenova (HF staff)

[Automated] This PR adds the default chat template to the tokenizer config, allowing the model to be used with the new conversational widget (see PR).

If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
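As a minimal sketch of how to inspect and override the template, the snippet below loads the tokenizer, prints the `chat_template` shipped in `tokenizer_config.json`, and replaces it with a custom Jinja template before saving. The repository id and the template string here are illustrative placeholders, not the actual values from this PR; `apply_chat_template` is the standard Transformers method for rendering a conversation with the template.

```python
from transformers import AutoTokenizer

# Hypothetical repository id; substitute this model repo's id.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# Inspect the template stored in tokenizer_config.json by this PR.
print(tokenizer.chat_template)

# Override it if the default is not appropriate for your model.
# This Jinja template is purely illustrative.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}assistant: {% endif %}"
)

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]

# Render the conversation into a single prompt string using the template.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)

# Persist the custom template back into tokenizer_config.json.
tokenizer.save_pretrained("./my-model-with-template")
```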
Ekgren changed pull request status to merged
