Different results with the raw model vs. the demo
In some Chinese-language cases, the raw model (with advanced parameters, temp=0.7) does not work for me, but in the demo everything is fine.
Is there any difference between the raw model and the demo (such as decoding parameters)?
Hi, if by demo you mean this one, https://huggingface.co/spaces/ethux/Mistral-Pixtral-Demo, then you can see the source code here: https://huggingface.co/spaces/ethux/Mistral-Pixtral-Demo/blob/main/app.py
There is no big difference; the temperature used for the demo is 0.45.
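If you want to rule out decoding differences, you can use the same temperature when sampling from the raw model yourself; a minimal sketch, assuming you load the raw model with vLLM:
"""
# Minimal sketch (assuming vLLM): use the demo's temperature of 0.45
# instead of temp=0.7 when sampling from the raw model.
from vllm.sampling_params import SamplingParams

sampling_params = SamplingParams(temperature=0.45)
"""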
Thanks for your reply.
But my test case fails with a setup like the following (model_name is assumed here to be the official Pixtral checkpoint):
"""
sampling_params = SamplingParams(max_tokens=8192, temperature=0.45)
llm = LLM(model=model_name, tokenizer_mode="mistral")
prompt = "颜色"
image_url = "https://picsum.photos/id/237/200/300"
messages = [
{
"role": "user",
"content": [{"type": "text", "text": prompt}, {"type": "image_url", "image_url": {"url": image_url}}]
},
]
"""
Actually, you can use any image; the call raises the error "[rank0]: OverflowError: Error in model execution: out of range integral type conversion attempted".
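For comparison, the same call can be rerun with an English prompt (a hypothetical control, reusing the objects above) to check whether the failure is tied to the Chinese text:
"""
# Hypothetical control: swap in an English prompt to see whether the
# OverflowError is specific to the Chinese text or happens for any prompt.
messages[0]["content"][0]["text"] = "Describe the colors in this image."
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
"""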
My environment:
vllm 0.6.3
mistral_common 1.4.4
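To confirm what is actually installed in the running environment (a generic check, nothing vLLM-specific):
"""
# Print the installed package versions to make sure the repro environment matches.
from importlib.metadata import version

print(version("vllm"))            # expected: 0.6.3
print(version("mistral_common"))  # expected: 1.4.4
"""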
Any suggestions?