visheratin committed
Commit 35f7174
Parent: 4a0f492

Create README.md

Files changed (1)
  1. README.md +33 -0
README.md ADDED
@@ -0,0 +1,33 @@
---
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
---

The code below shows how to run the model:

```python
import requests
from PIL import Image
from transformers import AutoTokenizer, CLIPProcessor

from modeling_nllb_clip import NLLBCLIPModel  # local file from this repo

# Reuse the CLIP image processor for image preprocessing
# and the NLLB tokenizer for multilingual text.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = processor.image_processor
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Download a sample image and prepare image and text inputs.
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

# Load the NLLB-CLIP model and run a forward pass.
hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
```
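To rank the candidate labels for the image, you can apply a softmax over the image-text similarity scores. The sketch below assumes that `NLLBCLIPModel` follows the standard CLIP output convention and exposes `logits_per_image`; check `modeling_nllb_clip.py` in this repo to confirm the exact output attributes.

```python
# Assumption: the model output mirrors CLIPModel and provides `logits_per_image`
# of shape (num_images, num_texts). Verify in modeling_nllb_clip.py.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, prob in zip(["cat", "dog", "butterfly"], probs[0].tolist()):
    print(f"{label}: {prob:.3f}")
```

Because the text encoder uses the NLLB tokenizer, labels do not have to be in English; you can set `tokenizer.src_lang` to the appropriate NLLB language code (for example `"fra_Latn"`) before tokenizing non-English labels.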