The Inference API returns incomplete responses
#8 · by aidan377 · opened
When I use the Inference API, it returns a very short answer. Can you help me figure out the reason? Am I using the wrong code?
The code for the request:
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# print(prompt)
output = query({
    "inputs": prompt,
    "parameters": {
        "max_new_token": 256,
        "temperature": 0.2,
        "do_sample": True,
        "top_k": 50,
        "top_p": 0.95,
        "eos_token_id": 49155,
        "return_full_text": False
    }
})
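(For context: the snippet calls a `query` helper that is not shown. It presumably follows the standard Inference API example from the Hugging Face docs; a minimal sketch, where `API_URL`'s model id and `API_TOKEN` are placeholder assumptions:

```python
import requests

# Placeholder assumptions: substitute the actual model endpoint and your token.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"
API_TOKEN = "hf_..."  # your Hugging Face access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    # POST the prompt and generation parameters to the hosted Inference API
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
```
)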
The response:
[{'generated_text': '\nThere are multiple ways to sort a list in Python. One of the most common ways is to'}]
Try setting "return_full_text": True.
I have tried that; the results are the same. Is there any limitation on API usage?
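(Editor's note: a likely cause is the parameter name. The text-generation API expects `max_new_tokens` (plural); an unrecognized key such as `max_new_token` appears to be silently ignored, so generation falls back to the server default of roughly 20 new tokens, which matches the truncated output above. A minimal corrected call, assuming the same `query` helper sketched earlier:

```python
# Same request with the corrected parameter name; the misspelled
# "max_new_token" key is not read by the API, so the plural form matters.
output = query({
    "inputs": prompt,
    "parameters": {
        "max_new_tokens": 256,  # note the plural: this is the key the API reads
        "temperature": 0.2,
        "do_sample": True,
        "top_k": 50,
        "top_p": 0.95,
        "eos_token_id": 49155,
        "return_full_text": False
    }
})
```
)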