Joe99 committed on
Commit ef0b024
1 Parent(s): ffa4196
Files changed (1): README.md (+84 -0)

README.md ADDED
---
tags:
- visual-question-answering
license: apache-2.0
widget:
- text: What's the animal doing?
  src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
- text: What is on top of the building?
  src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
language:
- en
metrics:
- accuracy
library_name: transformers
---

# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2

Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the raw model for visual question answering.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image

# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
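
For quick experiments, the same checkpoint can also be loaded through the `transformers` visual-question-answering pipeline, which wraps the processor and model shown above. The snippet below is a minimal sketch; the exact output format (a list of answer/score dictionaries) and accepted image types may vary slightly between `transformers` versions.

```python
from transformers import pipeline

# load the checkpoint through the high-level VQA pipeline
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

# the pipeline accepts an image URL (or a PIL.Image) together with a question
results = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
    top_k=3,
)
for result in results:
    print(f"{result['answer']}: {result['score']:.3f}")
```

Note that `ViltForQuestionAnswering` treats VQA as classification over a fixed answer vocabulary (the `id2label` mapping in the model config), so it can only return answers from that vocabulary.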

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```