Upload 3 files
- .gitattributes +1 -0
- README.md +62 -0
- SEED-Bench_v2_level1_2_3.json +3 -0
- cc3m-image.zip +3 -0
.gitattributes
CHANGED
@@ -53,3 +53,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+SEED-Bench_v2_level1_2_3.json filter=lfs diff=lfs merge=lfs -text
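This `.gitattributes` addition tells Git LFS to store `SEED-Bench_v2_level1_2_3.json` as a pointer rather than committing its full contents, which is why that file appears below as a three-line pointer. As a rough illustration only, the helper below (hypothetical, not part of the repository) lists which patterns a `.gitattributes` file marks as LFS-tracked:

```python
from pathlib import Path

def lfs_tracked_patterns(gitattributes_path: str = ".gitattributes") -> list[str]:
    """Return the path patterns that .gitattributes marks as Git LFS-tracked."""
    patterns = []
    for line in Path(gitattributes_path).read_text().splitlines():
        parts = line.split()
        # An LFS rule looks like: <pattern> filter=lfs diff=lfs merge=lfs -text
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

if __name__ == "__main__":
    # After this commit the list includes 'SEED-Bench_v2_level1_2_3.json'
    # alongside the image patterns such as '*.jpg' and '*.webp'.
    print(lfs_tracked_patterns())
```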
README.md
ADDED
@@ -0,0 +1,62 @@
+---
+license: cc-by-nc-4.0
+task_categories:
+- visual-question-answering
+language:
+- en
+pretty_name: SEED-Bench-2
+size_categories:
+- 10K<n<100K
+---
+
+
+
+# SEED-Bench Card
+
+## Benchmark details
+
+**Benchmark type:**
+SEED-Bench-2 is a comprehensive, large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs), featuring 24K multiple-choice questions with precise human annotations.
+It spans 27 evaluation dimensions, assessing both text and image generation.
+
+
+**Benchmark date:**
+SEED-Bench-2 was collected in November 2023.
+
+**Paper or resources for more information:**
+https://github.com/AILab-CVC/SEED-Bench
+
+**License:**
+Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Use of the benchmark should also abide by the OpenAI terms of use: https://openai.com/policies/terms-of-use.
+
+
+**Data Sources:**
+- Dimensions 1-9, 23 (In-Context Captioning): Conceptual Captions Dataset (https://ai.google.com/research/ConceptualCaptions/) under its license (https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE). Copyright belongs to the original dataset owner.
+- Dimension 9 (Text Recognition): ICDAR2003 (http://www.imglab.org/db/index.html), ICDAR2013 (https://rrc.cvc.uab.es/?ch=2), IIIT5K (https://cvit.iiit.ac.in/research/projects/cvit-projects/the-iiit-5k-word-dataset), and SVT (http://vision.ucsd.edu/~kai/svt/). Copyright belongs to the original dataset owners.
+- Dimension 10 (Celebrity Recognition): MME (https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) and MMBench (https://github.com/open-compass/MMBench) under the MMBench license (https://github.com/open-compass/MMBench/blob/main/LICENSE). Copyright belongs to the original dataset owners.
+- Dimension 11 (Landmark Recognition): Google Landmark Dataset v2 (https://github.com/cvdfoundation/google-landmark) under CC-BY licenses without ND restrictions.
+- Dimension 12 (Chart Understanding): PlotQA (https://github.com/NiteshMethani/PlotQA) under its license (https://github.com/NiteshMethani/PlotQA/blob/master/LICENSE).
+- Dimension 13 (Visual Referring Expression): VCR (http://visualcommonsense.com) under its license (http://visualcommonsense.com/license/).
+- Dimension 14 (Science Knowledge): ScienceQA (https://github.com/lupantech/ScienceQA) under its license (https://github.com/lupantech/ScienceQA/blob/main/LICENSE-DATA).
+- Dimension 15 (Emotion Recognition): FER2013 (https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/data) under its license (https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/rules#7-competition-data).
+- Dimension 16 (Visual Mathematics): MME (https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) and data from the internet under CC-BY licenses.
+- Dimension 17 (Difference Spotting): MIMIC-IT (https://github.com/Luodian/Otter/blob/main/mimic-it/README.md) under its license (https://github.com/Luodian/Otter/tree/main/mimic-it#eggs).
+- Dimension 18 (Meme Comprehension): Data from the internet under CC-BY licenses.
+- Dimension 19 (Global Video Understanding): Charades (https://prior.allenai.org/projects/charades) under its license (https://prior.allenai.org/projects/data/charades/license.txt). SEED-Bench-2 provides 8 frames per video.
+- Dimensions 20-22 (Action Recognition, Action Prediction, Procedure Understanding): Something-Something v2 (https://developer.qualcomm.com/software/ai-datasets/something-something), Epic-Kitchens 100 (https://epic-kitchens.github.io/2023), and Breakfast (https://serre-lab.clps.brown.edu/resource/breakfast-actions-dataset/). SEED-Bench-2 provides 8 frames per video.
+- Dimension 24 (Interleaved Image-Text Analysis): Data from the internet under CC-BY licenses.
+- Dimension 25 (Text-to-Image Generation): CC-500 (https://github.com/weixi-feng/Structured-Diffusion-Guidance) and ABC-6K (https://github.com/weixi-feng/Structured-Diffusion-Guidance) under their license (https://github.com/weixi-feng/Structured-Diffusion-Guidance/blob/master/LICENSE), with images generated by Stable-Diffusion-XL (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) under its license (https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md).
+- Dimension 26 (Next Image Prediction): Epic-Kitchens 100 (https://epic-kitchens.github.io/2023) under its license (https://creativecommons.org/licenses/by-nc/4.0/).
+- Dimension 27 (Text-Image Creation): Data from the internet under CC-BY licenses.
+
+Please contact us if you believe any data infringes upon your rights, and we will remove it.
+
+**Where to send questions or comments about the benchmark:**
+https://github.com/AILab-CVC/SEED-Bench/issues
+
+## Intended use
+**Primary intended uses:**
+SEED-Bench-2 is primarily designed to evaluate Multimodal Large Language Models in text and image generation tasks.
+
+**Primary intended users:**
+Researchers and enthusiasts in computer vision, natural language processing, machine learning, and artificial intelligence are the main target users of the benchmark.
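The card above describes 24K multiple-choice questions spread over 27 dimensions, all packed into the JSON file added in this commit. A minimal loading sketch follows; the keys `questions` and `data_type` are assumptions carried over from earlier SEED-Bench releases, not a documented schema, so check the actual keys in `SEED-Bench_v2_level1_2_3.json` before relying on them.

```python
import json
from collections import Counter

# Load the annotation file from this commit (fetch it via Git LFS first).
with open("SEED-Bench_v2_level1_2_3.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Assumed layout: either a top-level list of question records, or a dict that
# nests them under a "questions" key -- verify against the actual file.
records = data["questions"] if isinstance(data, dict) and "questions" in data else data

# Count records per evaluation dimension, assuming each record names its
# dimension in a "data_type"-style field (hypothetical key).
dims = Counter(str(r.get("data_type", "unknown")) for r in records)
print(f"{len(records)} multiple-choice questions across {len(dims)} listed dimensions")
for dim, count in dims.most_common():
    print(f"  {dim}: {count}")
```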
SEED-Bench_v2_level1_2_3.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49f03c28e2ec953f5a2743c4d35777159469b8eb7593102ddc3e8d60e1907f0a
+size 18076410
cc3m-image.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87b1e7f6150e14516128563904b65bb0f7a9f2c7c8a9c6a9b215e5449f8384cb
+size 1684302326
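Both large files are committed as Git LFS pointers: three lines giving the spec version, the SHA-256 of the real content, and its size in bytes. After fetching the actual payloads (for example with `git lfs pull`), an integrity check against the pointer is straightforward. The sketch below is illustrative only; the paths are hypothetical and assume the pointer text and the downloaded file are both available locally.

```python
import hashlib
import os

def check_lfs_pointer(pointer_path: str, payload_path: str) -> bool:
    """Verify a downloaded file against its Git LFS pointer (oid sha256 + size)."""
    fields = dict(
        line.split(" ", 1)
        for line in open(pointer_path, encoding="utf-8").read().splitlines()
        if " " in line
    )
    expected_oid = fields["oid"].removeprefix("sha256:").strip()
    expected_size = int(fields["size"])

    # Cheap check first: the pointer records the payload size in bytes.
    if os.path.getsize(payload_path) != expected_size:
        return False

    # Then hash the payload in chunks and compare against the recorded oid.
    sha = hashlib.sha256()
    with open(payload_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest() == expected_oid

# Example (illustrative paths): pointer text as stored in the repo vs. the
# ~1.6 GB archive actually fetched through LFS.
# print(check_lfs_pointer("cc3m-image.zip.pointer", "cc3m-image.zip"))
```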