Update config.json

#24
by rcaulk - opened

Llama 2 supports a context length of 4096 tokens, so the value should be increased here accordingly.
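A minimal sketch of the change this PR proposes, assuming the context length lives in the standard Llama field `max_position_embeddings` in `config.json` (other fields omitted):

```json
{
  "model_type": "llama",
  "max_position_embeddings": 4096
}
```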

WizardLM changed pull request status to merged
