---
license: apache-2.0
language:
- en
base_model:
- openai/clip-vit-large-patch14-336
- allenai/OLMoE-1B-7B-0924
datasets:
- allenai/OLMoE-mix-0924
pipeline_tag: image-text-to-text
tags:
  - multimodal
  - moe
  - olmo
  - olmoe
  - molmo
  - molmoe
---

<img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;">

# MolmoE 1B


Molmo is a family of open vision-language models developed by the Allen Institute for AI. 
Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs. 
They achieve state-of-the-art performance among multimodal models of similar size while being fully open-source. 
You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog).

MolmoE-1B is a multimodal Mixture-of-Experts LLM with 1.5B active and 7.2B total parameters, based on [OLMoE-1B-7B-0924](https://huggingface.co/allenai/OLMoE-1B-7B-0924). 
It nearly matches the performance of GPT-4V on both academic benchmarks and human evaluation, and achieves state-of-the-art performance among similarly sized open multimodal models.
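
You can verify the total parameter count directly from the loaded checkpoint. The sketch below is illustrative and not part of the official quick start; note that the 1.5B active-parameter figure depends on the router's per-token expert selection and is not derived here:

```python
from transformers import AutoModelForCausalLM

# Minimal sketch: load the checkpoint and count parameters. For a
# Mixture-of-Experts model this sums every expert, so the result should
# land near the 7.2B total; the 1.5B "active" figure is the per-token
# routed subset and is not computed by this count.
model = AutoModelForCausalLM.from_pretrained(
    'allenai/MolmoE-1B-0924',
    trust_remote_code=True,
    torch_dtype='auto',
)
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.2f}B")
```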

This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

**[Sign up here](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform)** to be the first to know when artifacts are released.



## Quick Start

To run MolmoE, first install dependencies:

```bash
# uninstall any existing tensorflow packages so they don't conflict with tensorflow-cpu
pip list --format=freeze | grep '^tensorflow' | cut -d= -f1 | xargs -n1 pip uninstall -y

# install CPU-only version of tensorflow; used for image preprocessing
pip install einops tensorflow-cpu torchvision
```
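
Optionally, confirm the dependencies import cleanly before proceeding (a minimal sanity check, not required by the model):

```python
# Quick sanity check: the preprocessing stack should import without errors,
# and the TensorFlow build should be the CPU-only variant installed above.
import einops
import tensorflow as tf
import torchvision

print(tf.__version__)
```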

Then, follow these steps:

```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/MolmoE-1B-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/MolmoE-1B-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)

# >>> This photograph captures an adorable black Labrador puppy sitting on a weathered
#     wooden deck. The deck's planks, which are a mix of light and dark brown with ...
```
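
The quick start above runs in the checkpoint's default precision. On memory-constrained GPUs, a half-precision variant may help; the sketch below assumes the `model`, `processor`, and `inputs` objects from the quick start and casts to bfloat16. This is an illustration, not an officially validated configuration:

```python
import torch

# Cast the weights to bfloat16; cast only floating-point inputs (the
# preprocessed image tensors) and leave integer tensors such as
# input_ids untouched.
model.to(dtype=torch.bfloat16)
inputs = {
    k: v.to(model.device, torch.bfloat16) if v.is_floating_point() else v.to(model.device)
    for k, v in inputs.items()
}

with torch.inference_mode():
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer,
    )

# decode the generated tokens as in the quick start above
```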

## Evaluations

| Model                       | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B                   | 81.2                                    | 1077                        |
| Molmo 7B-D                  | 77.3                                    | 1056                        |
| Molmo 7B-O                  | 74.6                                    | 1051                        |
| **MolmoE 1B (this model)**  | **68.6**                                | **1032**                    |
| GPT-4o                      | 78.5                                    | 1079                        |
| GPT-4V                      | 71.1                                    | 1041                        |
| Gemini 1.5 Pro              | 78.3                                    | 1074                        |
| Gemini 1.5 Flash            | 75.1                                    | 1054                        |
| Claude 3.5 Sonnet           | 76.7                                    | 1069                        |
| Claude 3 Opus               | 66.4                                    |  971                        |
| Claude 3 Haiku              | 65.3                                    |  999                        |
| Qwen VL2 72B                | 79.4                                    | 1037                        |
| Qwen VL2 7B                 | 73.7                                    | 1025                        |
| Intern VL2 LLAMA 76B        | 77.1                                    | 1018                        |
| Intern VL2 8B               | 69.4                                    |  953                        |
| Pixtral 12B                 | 69.5                                    | 1016                        |
| Phi3.5-Vision 4B            | 59.7                                    |  982                        |
| PaliGemma 3B                | 50.0                                    |  937                        |
| LLAVA OneVision 72B         | 76.6                                    | 1051                        |
| LLAVA OneVision 7B          | 72.0                                    | 1024                        |
| Cambrian-1 34B              | 66.8                                    |  953                        |
| Cambrian-1 8B               | 63.4                                    |  952                        |
| xGen - MM - Interleave 4B   | 59.5                                    |  979                        |
| LLAVA-1.5 13B               | 43.9                                    |  960                        |
| LLAVA-1.5 7B                | 40.7                                    |  951                        |

*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).