How can I batch my inputs (images, questions) to the model?

#19
by dingbang777 - opened

How can I batch my inputs (image, question pairs) to the model? Is there an API, function, or any other method for this? I need it to speed up my data annotation.

OpenBMB org • edited Aug 26
from PIL import Image

# model and tokenizer are assumed to be loaded already
image1 = Image.open('xx.jpg').convert('RGB')
question1 = 'What is in the image?'
image2 = Image.open('xx.jpg').convert('RGB')
question2 = 'What is in the image?'

# Each element of msgs is one conversation; passing several runs them as a batch
msgs = [
    [{'role': 'user', 'content': [image1, question1]}],
    [{'role': 'user', 'content': [image2, question2]}],
]

res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)
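For annotating a large dataset, one way to use this batched interface is to process the (image, question) pairs in fixed-size chunks. Below is a minimal sketch; the `chunked` helper and the batch size of 8 are my own choices, not part of the model's API, and the commented loop assumes `model` and `tokenizer` are loaded as above.

```python
def chunked(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical annotation loop (model/tokenizer assumed loaded as above):
# pairs = [(image1, question1), (image2, question2), ...]
# for batch in chunked(pairs, 8):
#     msgs = [[{'role': 'user', 'content': [img, q]}] for img, q in batch]
#     res = model.chat(image=None, msgs=msgs, tokenizer=tokenizer)
```

Keeping each chunk small lets you trade throughput against GPU memory, which in practice is what bounds the usable batch size.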

Thanks! Also, is there any limit on the batch size?

The results are slightly different from chatting one by one.
