First push of a custom handler for the BLIP-2 model, to be used with the Inference API
- README.md +163 -0
- config.json +255 -0
- handler.py +46 -0
- preprocessor_config.json +24 -0
README.md
ADDED
@@ -0,0 +1,163 @@
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---

# BLIP-2, OPT-2.7b, pre-trained only

BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.

The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>

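The sketch below makes this data flow concrete. It is a simplified illustration, not the exact `generate()` implementation, and it assumes the submodule names exposed by `Blip2ForConditionalGeneration` in `transformers` (`vision_model`, `query_tokens`, `qformer`, `language_projection`); during generation, the projected query embeddings are prepended to the text embeddings that the language model conditions on.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
pixel_values = processor(images=raw_image, return_tensors="pt").pixel_values

with torch.no_grad():
    # 1. Frozen CLIP-like image encoder -> one embedding per image patch
    image_embeds = model.vision_model(pixel_values=pixel_values).last_hidden_state
    # 2. Q-Former: 32 learned query tokens cross-attend to the image features
    query_tokens = model.query_tokens.expand(image_embeds.shape[0], -1, -1)
    query_output = model.qformer(
        query_embeds=query_tokens,
        encoder_hidden_states=image_embeds,
        encoder_attention_mask=torch.ones(image_embeds.shape[:-1], dtype=torch.long),
    ).last_hidden_state
    # 3. Linear projection into the language model's embedding space; these 32
    #    vectors are prepended to the text embeddings for next-token prediction
    language_model_inputs = model.language_projection(query_output)

print(language_model_inputs.shape)  # torch.Size([1, 32, 2560])
```
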
This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model (see the prompting sketch after this list)
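
As a rough illustration of the chat-like usage, the sketch below conditions generation on an image plus a flattened conversation history. The `Question: ... Answer: ...` prompt format is an assumption carried over from the BLIP-2 paper and LAVIS demos, not something prescribed by this checkpoint.

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Previous turns are flattened into a single prompt that ends with "Answer:"
prompt = "Question: how many dogs are in the picture? Answer: one. Question: what is the dog doing? Answer:"
inputs = processor(raw_image, prompt, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
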

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In 8-bit precision (`int8`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
config.json
ADDED
@@ -0,0 +1,255 @@
{
  "_commit_hash": null,
  "architectures": [
    "Blip2ForConditionalGeneration"
  ],
  "initializer_factor": 1.0,
  "initializer_range": 0.02,
  "model_type": "blip-2",
  "num_query_tokens": 32,
  "qformer_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_probs_dropout_prob": 0.1,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_frequency": 2,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_hidden_size": 1408,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-12,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 512,
    "min_length": 0,
    "model_type": "blip_2_qformer",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 0,
    "position_embedding_type": "absolute",
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.27.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": false,
    "vocab_size": 30522
  },
  "text_config": {
    "_name_or_path": "facebook/opt-2.7b",
    "_remove_final_layer_norm": false,
    "activation_dropout": 0.0,
    "activation_function": "relu",
    "add_cross_attention": false,
    "architectures": [
      "OPTForCausalLM"
    ],
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 2,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_layer_norm_before": true,
    "do_sample": false,
    "dropout": 0.1,
    "early_stopping": false,
    "enable_bias": true,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 50118,
    "exponential_decay_length_penalty": null,
    "ffn_dim": 10240,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_size": 2560,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "init_std": 0.02,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_elementwise_affine": true,
    "layerdrop": 0.0,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 2048,
    "min_length": 0,
    "model_type": "opt",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 32,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 32,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 1,
    "prefix": "</s>",
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "float16",
    "torchscript": false,
    "transformers_version": "4.27.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": false,
    "use_cache": true,
    "vocab_size": 50272,
    "word_embed_proj_dim": 2560
  },
  "torch_dtype": "float32",
  "transformers_version": null,
  "use_decoder_only_language_model": true,
  "vision_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_size": 1408,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 224,
    "initializer_factor": 1.0,
    "initializer_range": 1e-10,
    "intermediate_size": 6144,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "blip_2_vision_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 39,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 512,
    "pruned_heads": {},
    "qkv_bias": true,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.27.0.dev0",
    "typical_p": 1.0,
    "use_bfloat16": false
  }
}
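Since this is a composite configuration (one sub-config each for the Q-Former, the OPT-2.7b text model and the ViT vision encoder), it can be inspected with `Blip2Config`. The snippet below is a minimal sketch; the base checkpoint id is used as a stand-in for this repository.

```python
from transformers import Blip2Config

# Load the composite BLIP-2 config and peek at the three sub-configs.
config = Blip2Config.from_pretrained("Salesforce/blip2-opt-2.7b")

print(config.num_query_tokens)            # 32 learned query tokens
print(config.qformer_config.hidden_size)  # 768  (BERT-like Q-Former)
print(config.text_config.model_type)      # "opt" (facebook/opt-2.7b, hidden_size 2560)
print(config.vision_config.image_size)    # 224  (ViT, patch_size 14, 39 layers)
```
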
handler.py
ADDED
@@ -0,0 +1,46 @@
from typing import Any, Dict

import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class EndpointHandler:
    def __init__(self, path=""):
        # Load the processor and model once at endpoint start-up.
        # device_map="auto" lets accelerate place the weights on the available
        # GPU(s), or on CPU if none is present, so no explicit .to("cuda") is needed.
        self.processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
        self.model = Blip2ForConditionalGeneration.from_pretrained(
            "Salesforce/blip2-opt-2.7b", device_map="auto"
        )
        self.model.eval()

    def __call__(self, data: Any) -> Dict[str, Any]:
        """
        Args:
            data (:obj:`dict`):
                Includes the input image(s) under "inputs" and optional generation
                parameters under "parameters".
        Return:
            A :obj:`dict` with one list, e.g. {"captions": ["A hugging face at the office"]}, containing:
                - "captions": the generated caption string(s), one per input image.
        """
        inputs = data.pop("inputs", data)
        parameters = data.pop("parameters", {})

        raw_images = inputs

        # Preprocess the image(s) and move the tensors to the inference device.
        processed_image = self.processor(images=raw_images, return_tensors="pt").to(device)
        # Forward any extra parameters (e.g. max_new_tokens) to generate().
        processed_image = {**processed_image, **parameters}

        with torch.no_grad():
            out = self.model.generate(
                **processed_image
            )
        # Postprocess the prediction into plain strings.
        captions = self.processor.batch_decode(out, skip_special_tokens=True)
        return {"captions": captions}
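
Before deploying, the handler can be smoke-tested locally. The snippet below is a minimal sketch: it builds by hand the `inputs`/`parameters` payload that the Inference Endpoints toolkit would normally deserialize from the HTTP request body, and the printed caption is only illustrative.

```python
import requests
from PIL import Image

from handler import EndpointHandler

# Instantiate the handler (downloads/loads the BLIP-2 weights).
handler = EndpointHandler()

url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Payload shaped like a deserialized request body.
payload = {"inputs": image, "parameters": {"max_new_tokens": 20}}
print(handler(payload))  # e.g. {"captions": ["a woman sitting on the beach with her dog"]}
```
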
preprocessor_config.json
ADDED
@@ -0,0 +1,24 @@
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "BlipImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "processor_class": "Blip2Processor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 224,
    "width": 224
  }
}
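
In short, this configures CLIP-style preprocessing: convert to RGB, resize to 224x224 with bicubic resampling (`resample: 3`), rescale pixel values by 1/255 (`rescale_factor`), then normalize with the mean and std above. A minimal sketch of applying it through the image processor (the base checkpoint id is used as a stand-in for this repository):

```python
import requests
from PIL import Image
from transformers import BlipImageProcessor

image_processor = BlipImageProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Resize -> rescale -> normalize, exactly as configured above.
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
print(pixel_values.shape)  # torch.Size([1, 3, 224, 224])
```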