---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: query
    dtype: string
  - name: relevant
    dtype: int64
  - name: clip_score
    dtype: float64
  - name: inat24_image_id
    dtype: int64
  - name: inat24_file_name
    dtype: string
  - name: supercategory
    dtype: string
  - name: category
    dtype: string
  - name: iconic_group
    dtype: string
  - name: inat24_category_id
    dtype: int64
  - name: inat24_category_name
    dtype: string
  - name: latitude
    dtype: float64
  - name: longitude
    dtype: float64
  - name: location_uncertainty
    dtype: float64
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: rights_holder
    dtype: string
  splits:
  - name: train
    num_bytes: 1633954421
    num_examples: 16100
  download_size: 1507625576
  dataset_size: 1633954421
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
size_categories:
- 10K<n<100K
---

# INQUIRE-Rerank

**Please note that this dataset is preliminary and will be updated soon.**



INQUIRE is a text-to-image retrieval benchmark designed to challenge multimodal models with expert-level queries about the natural world.

This dataset aims to emulate real-world image retrieval and analysis problems faced by scientists working with large-scale image collections.
We hope that INQUIRE will both encourage and track advances in the real scientific utility of AI systems.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630b1e44cd26ad7f60d490e2/CIFPqSwwkSSZo0zMoQOCr.jpeg)

**Dataset Details**

The **INQUIRE-Rerank** task fixes an initial ranking of 100 images per query, obtained with CLIP ViT-H-14 zero-shot retrieval over the entire 5-million-image iNat24 dataset.
This fixed starting point makes reranking evaluation consistent and saves you from running the initial retrieval yourself.
If you're interested in full-dataset retrieval, check out **INQUIRE-Fullrank**.
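
As a quick illustration, the sketch below (not the official evaluation code) loads the dataset with the Hugging Face `datasets` library, groups the fixed 100-image candidate pool for each query, and scores the precomputed CLIP ordering with mean average precision. The Hub repository id is a placeholder you should replace with this dataset's actual path, and note that the full download is roughly 1.5 GB since every row carries its image.

```python
# Minimal sketch of evaluating a reranking baseline on INQUIRE-Rerank.
# The dataset id below is a placeholder; substitute the real Hub path.
from collections import defaultdict

import numpy as np
from datasets import load_dataset

ds = load_dataset("ORG_NAME/INQUIRE-Rerank", split="train")  # placeholder id

# Read metadata columns directly so the images are not decoded.
queries = ds["query"]
clip_scores = ds["clip_score"]
relevant = ds["relevant"]

# Group the fixed pool of 100 candidate images by query text.
pools = defaultdict(list)
for q, s, r in zip(queries, clip_scores, relevant):
    pools[q].append((s, r))


def average_precision(labels_in_ranked_order):
    """AP for one query, given binary relevance labels in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels_in_ranked_order, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0


# Baseline "reranker": keep the original CLIP ordering by sorting on
# clip_score. A real reranker would substitute its own scores here.
aps = []
for q, pool in pools.items():
    pool.sort(key=lambda x: x[0], reverse=True)
    aps.append(average_precision([r for _, r in pool]))

print(f"CLIP-score baseline mAP over {len(aps)} queries: {np.mean(aps):.3f}")
```

To evaluate your own reranker, replace the sort key with your model's score for each (query, image) pair while keeping the per-query candidate pools fixed.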

**Dataset Sources**
- Website: [https://inquire-benchmark.github.io/](https://inquire-benchmark.github.io/)
- Repository: [https://github.com/inquire-benchmark/INQUIRE](https://github.com/inquire-benchmark/INQUIRE)