---
license: gemma
library_name: colpali
base_model: vidore/colpaligemma-3b-pt-448-base
language:
- en
tags:
- vidore
datasets:
- vidore/colpali_train_set
---

Note: This is an FP16 ONNX export of ColPali.

# ColPali: Visual Retriever based on PaliGemma-3B with ColBERT strategy

ColPali is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>

## Version specificity

> [!NOTE]
> This version is similar to [`vidore/colpali-v1.2`](https://huggingface.co/vidore/colpali-v1.2), except that the LoRA adapter was merged into the base model. Thus, loading ColPali from this checkpoint saves you the trouble of merging the pre-trained adapter yourself.
>
> This can be useful if you want to train a new adapter from scratch.

## Model Description

This model is built iteratively starting from an off-the-shelf [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) model.
We finetuned it to create [BiSigLIP](https://huggingface.co/vidore/bisiglip) and fed the patch embeddings output by SigLIP to an LLM, [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448), to create [BiPali](https://huggingface.co/vidore/bipali).

One benefit of inputting image patch embeddings through a language model is that they are natively mapped to a latent space similar to textual input (query).
This enables leveraging the [ColBERT](https://arxiv.org/abs/2004.12832) strategy to compute interactions between text tokens and image patches, which brings a step-change improvement in performance compared to BiPali.

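To make the late-interaction idea concrete, here is a minimal sketch of the ColBERT-style MaxSim scoring between one multi-vector query embedding and one multi-vector page embedding (an illustrative simplification, not the exact implementation used in `colpali-engine`; the embeddings are assumed to be L2-normalized):

```python
import torch

def maxsim_score(query_emb: torch.Tensor, page_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction.

    query_emb: (num_query_tokens, dim), page_emb: (num_image_patches, dim).
    """
    sim = query_emb @ page_emb.T  # (num_query_tokens, num_image_patches)
    # For each query token, keep its best-matching image patch, then sum over query tokens.
    return sim.max(dim=1).values.sum()
```
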
## Model Training

### Dataset
Our training dataset of 127,460 query-page pairs comprises train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model (Gemma-2B) and potentially occurs during PaliGemma-3B's multimodal training.*

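For reference, a minimal sketch of loading the released training set and carving out a 2% validation split with the `datasets` library (the dataset id comes from the frontmatter; the seed and exact split procedure are assumptions, not necessarily those used for the released checkpoints):

```python
from datasets import load_dataset

# Load the openly released ColPali training set (see `datasets` in the frontmatter).
ds = load_dataset("vidore/colpali_train_set", split="train")

# Hold out 2% of the samples as a validation set for hyperparameter tuning.
splits = ds.train_test_split(test_size=0.02, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(f"train: {len(train_ds)}, validation: {len(val_ds)}")
```
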
### Parameters

All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8-GPU setup with data parallelism, a learning rate of 5e-5 with linear decay and 2.5% warmup steps, and a batch size of 32.

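A hedged sketch of what this configuration could look like with `peft` and `transformers` (the target module names and argument spellings are assumptions for illustration; the authors' actual training code lives in the [colpali](https://github.com/illuin-tech/colpali) repository):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA (r=32, alpha=32) on the language-model transformer layers; the exact
# target module names below are an assumption.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    bias="none",
)

# 8-GPU data parallelism with a global batch size of 32 -> 4 samples per device.
training_args = TrainingArguments(
    output_dir="colpali-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,
    bf16=True,
    optim="paged_adamw_8bit",
)
```
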
## Usage

Install [`colpali-engine`](https://github.com/illuin-tech/colpali) along with ONNX Runtime (used to run the exported model):

```bash
pip install "colpali-engine>=0.3.0,<0.4.0" onnxruntime
```

Then run the following code:

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

from colpali_engine.models import ColPaliProcessor

model_name = "vidore/colpali-v1.2-merged"
processor = ColPaliProcessor.from_pretrained(model_name)

# Point this at the local .onnx file downloaded from this repository
# (akshayballal/colpali-v1.2-merged-onnx).
sess = ort.InferenceSession("path/to/colpali-v1.2-merged.onnx")
output_name = sess.get_outputs()[0].name

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "Are Benjamin, Antoine, Merve, and Jo best friends?",
]

# Process the inputs (kept on CPU so the tensors convert to NumPy directly)
batch_images = processor.process_images(images)
batch_queries = processor.process_queries(queries)

# Forward pass for the images
image_embeddings = sess.run(
    [output_name],
    {
        "input_ids": batch_images["input_ids"].numpy(),
        "pixel_values": batch_images["pixel_values"].numpy(),
        "attention_mask": batch_images["attention_mask"].numpy(),
    },
)[0]

# Forward pass for the queries; the exported graph still expects a
# `pixel_values` input, so we feed dummy (all-zero) pixel values.
dummy_pixel_values = np.zeros(
    (batch_queries["input_ids"].shape[0], 3, 448, 448), dtype=np.float32
)
query_embeddings = sess.run(
    [output_name],
    {
        "input_ids": batch_queries["input_ids"].numpy(),
        "pixel_values": dummy_pixel_values,
        "attention_mask": batch_queries["attention_mask"].numpy(),
    },
)[0]
```

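To turn these embeddings into retrieval scores, you can reuse the processor's late-interaction scoring helper (a sketch; it assumes `score_multi_vector` accepts lists of per-item `torch` tensors, as in recent `colpali-engine` releases):

```python
import torch

# ColBERT-style MaxSim scores between every query and every image.
scores = processor.score_multi_vector(
    list(torch.from_numpy(query_embeddings).float()),
    list(torch.from_numpy(image_embeddings).float()),
)
print(scores)  # shape: (num_queries, num_images)
```
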
## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.

## License

ColPali's vision-language backbone model (PaliGemma) is under the `gemma` license, as specified in its [model card](https://huggingface.co/google/paligemma-3b-mix-448).
Because the pre-trained adapter was merged into this model, these weights are also distributed under the `gemma` license.

## Contact

- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]

## Citation

If you use any datasets or models from this organization in your research, please cite the original work as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```