[Dataset viewer preview omitted: each row contains an `id` (string, 4–7 characters) and an `image` (276–640 px wide).]
Accelerating the development of large multi-modality models (LMMs) with `lmms-eval`

🏠 Homepage | 📚 Documentation | 🤗 Huggingface Datasets

This is a formatted version of GQA, used in our `lmms-eval` pipeline to enable one-click evaluations of large multi-modality models.
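As a sketch of how such an evaluation is typically launched, the snippet below assumes the `lmms-eval` CLI with its `accelerate` entry point; the model name, checkpoint, and output path are placeholders you would replace with your own.

```shell
# Hypothetical invocation of the lmms-eval pipeline on the GQA task.
# "llava" and the pretrained checkpoint are example values, not fixed choices.
accelerate launch -m lmms_eval \
    --model llava \
    --model_args pretrained=liuhaotian/llava-v1.5-7b \
    --tasks gqa \
    --batch_size 1 \
    --output_path ./logs/
```

The `--tasks` flag selects the dataset-specific evaluation config, so switching benchmarks is a matter of changing that single argument.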
@inproceedings{hudson2019gqa,
  title={{GQA}: A new dataset for real-world visual reasoning and compositional question answering},
  author={Hudson, Drew A and Manning, Christopher D},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6700--6709},
  year={2019}
}