Tags: Text Generation, Transformers, PyTorch, Safetensors, gpt2, conversational, text-generation-inference, Inference Endpoints
Xenova (HF staff) committed
Commit 061c529
1 Parent(s): 5bb7167

Add default chat template to tokenizer_config.json


[Automated] This PR adds the default chat template to the tokenizer config, allowing the model to be used with the new conversational widget (see [PR](https://github.com/huggingface/huggingface.js/pull/457)).

If the default is not appropriate for your model, please set `tokenizer.chat_template` to a suitable template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
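As a rough sketch of that override step (the repository ID, output directory, and template string below are illustrative placeholders, not values taken from this PR), a custom template can be assigned on the tokenizer and written back into tokenizer_config.json:

```python
from transformers import AutoTokenizer

# Placeholder repository ID for the tokenizer whose default template should be overridden.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# Assign a custom Jinja chat template; when the tokenizer is saved, it is stored
# under the "chat_template" key in tokenizer_config.json.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
)

# Persist the change locally; tokenizer.push_to_hub(...) would update a Hub repository instead.
tokenizer.save_pretrained("./model-with-custom-template")
```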

Files changed (1)
  1. tokenizer_config.json (+4 -3)
tokenizer_config.json CHANGED
@@ -1,5 +1,6 @@
 {
-  "model_max_length": 2048,
-  "padding_side": "left",
-  "truncation_side": "left"
+  "model_max_length": 2048,
+  "padding_side": "left",
+  "truncation_side": "left",
+  "chat_template": "{{ eos_token }}{{ bos_token }}{% for message in messages %}{% if message['role'] == 'user' %}{{ 'User: ' + message['content']}}{% else %}{{ 'Bot: ' + message['content']}}{% endif %}{{ message['text'] }}{{ bos_token }}{% endfor %}Bot:"
 }
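With this template in place, transformers can format a conversation through `apply_chat_template`. Below is a minimal usage sketch assuming a placeholder repository ID; note that the template's `{{ message['text'] }}` lookup refers to a key the standard role/content message format does not supply, so that part may simply render as empty depending on how undefined Jinja values are handled.

```python
from transformers import AutoTokenizer

# "your-org/your-model" is a placeholder; substitute the repository this PR targets.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "Doing well, thanks!"},
]

# Render the conversation into a single prompt string using the chat_template
# stored in tokenizer_config.json; tokenize=False returns the text itself.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```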