multi-image inference
#45
by eternal8848 · opened
How can I run inference with multiple images using Llama 3.2? Can you give me an example?
Same question...
In Ollama, if you set it up like this: `messages=[{ 'images': ['1.jpg', '2.jpg'] }]`, it gives: `vision model only supports a single image per message`
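For reference, even with the `images` field correctly written as a list, Ollama still rejects more than one entry per message for vision models. A minimal sketch mirroring that behavior (the helper name and file paths are illustrative, not Ollama's actual API):

```python
# Sketch of the message shape Ollama's chat API expects:
# 'images' is a list on each message, but vision models such as
# Llama 3.2 reject more than one entry per message.
# File paths here are placeholders.

def make_ollama_message(prompt: str, images: list[str]) -> dict:
    """Build one Ollama-style chat message with image attachments."""
    if len(images) > 1:
        # This mirrors the server-side check that produces the error above.
        raise ValueError("vision model only supports a single image per message")
    return {"role": "user", "content": prompt, "images": images}

msg = make_ollama_message("Describe this image.", ["1.jpg"])
```

Passing `["1.jpg", "2.jpg"]` to the helper raises the same error text the server returns, which is the limit being hit in the quote above.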
No, they don't support it.
https://github.com/meta-llama/llama-models/issues/223
It seems vLLM supports multi-image input for Llama 3.2? https://docs.vllm.ai/en/latest/models/supported_models.html
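If vLLM does accept multiple images for this model, a request against its OpenAI-compatible server would look roughly like the sketch below. This is a payload shape only, not a verified end-to-end run: the model name, URLs, and the `--limit-mm-per-prompt image=2` server flag are assumptions based on vLLM's multimodal docs.

```python
# Sketch: multi-image chat payload for a vLLM OpenAI-compatible server,
# assuming a server started along the lines of:
#   vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct --limit-mm-per-prompt image=2
# Model name and image URLs are placeholders.

def build_multi_image_message(text: str, image_urls: list[str]) -> dict:
    """Build one OpenAI-style user message containing several images."""
    content = [
        {"type": "image_url", "image_url": {"url": url}} for url in image_urls
    ]
    content.append({"type": "text", "text": text})
    return {"role": "user", "content": content}

payload = {
    "model": "meta-llama/Llama-3.2-11B-Vision-Instruct",
    "messages": [
        build_multi_image_message(
            "What differs between these two images?",
            ["https://example.com/1.jpg", "https://example.com/2.jpg"],
        )
    ],
}
```

You would POST this to the server's `/v1/chat/completions` endpoint (or pass it through the `openai` client). Whether Llama 3.2 Vision actually handles more than one image well per prompt depends on the vLLM version and the model's own training, so treat this as something to test, not a guarantee.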