Tags: Text Generation · Transformers · PyTorch · Safetensors · gpt2 · conversational · text-generation-inference · Inference Endpoints

Add `max_length` to model config

#3 opened by saattrupdan
AI Sweden Model Hub org

This is required by the conversational pipeline.
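Below is a minimal sketch of the kind of change this pull request makes: setting `max_length` on the model configuration so that the conversational pipeline has an explicit generation length to work with. The GPT-2 config class matches the repository's `gpt2` tag, but the value 2048 and the output path are illustrative placeholders, not taken from this PR.

```python
from transformers import GPT2Config

# Build a GPT-2 style configuration (placeholder; the real PR edits the
# model repository's existing config.json rather than creating a new one).
config = GPT2Config()

# Set an explicit generation length. 2048 is an illustrative value only;
# the actual value belongs to the model's maintainers.
config.max_length = 2048

# Saving writes the non-default max_length entry into config.json.
config.save_pretrained("./patched-config")
print(config.max_length)  # 2048
```

On the Hub, the equivalent change is simply adding a `"max_length"` entry to the model's `config.json`.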

Ekgren changed pull request status to merged
