nielsr and srinivasgs committed
Commit
5d62ad2
1 Parent(s): 3686e65

updated the How to use section so that the code actually does what the live demo does (#4)


- updated the How to use section so that the code actually does what the live demo does (199695c904a49dc957f737821c2ada065c8d4517)
- switched to YolosImageProcessor (bbd712b25cecec17f9a487f38096f55db7285a9f)


Co-authored-by: Srinivas Gorur-Shandilya <[email protected]>

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -35,22 +35,34 @@ You can use the raw model for object detection. See the [model hub](https://hugg
 Here is how to use this model:
 
 ```python
-from transformers import YolosFeatureExtractor, YolosForObjectDetection
+from transformers import YolosImageProcessor, YolosForObjectDetection
 from PIL import Image
+import torch
 import requests
 
-url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
 image = Image.open(requests.get(url, stream=True).raw)
 
-feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
 model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
+image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
 
-inputs = feature_extractor(images=image, return_tensors="pt")
+inputs = image_processor(images=image, return_tensors="pt")
 outputs = model(**inputs)
 
 # model predicts bounding boxes and corresponding COCO classes
 logits = outputs.logits
 bboxes = outputs.pred_boxes
+
+
+# print results
+target_sizes = torch.tensor([image.size[::-1]])
+results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
+for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
+    box = [round(i, 2) for i in box.tolist()]
+    print(
+        f"Detected {model.config.id2label[label.item()]} with confidence "
+        f"{round(score.item(), 3)} at location {box}"
+    )
 ```
 
 Currently, both the feature extractor and model support PyTorch.
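
For readers trying the updated snippet locally, here is a minimal sketch (not part of this commit) of how the detections returned by `post_process_object_detection` could be drawn onto the input image with PIL's `ImageDraw`; it assumes `image`, `model`, and `results` from the snippet above, and the output filename is arbitrary.

```python
# Sketch (not part of this commit): overlay the detections on the image.
# Assumes `image`, `model`, and `results` exist from the updated README snippet above.
from PIL import ImageDraw

draw = ImageDraw.Draw(image)
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    # boxes are (xmin, ymin, xmax, ymax) in pixel coordinates of the original image
    x0, y0, x1, y1 = box.tolist()
    draw.rectangle([x0, y0, x1, y1], outline="red", width=2)
    draw.text((x0, y0), f"{model.config.id2label[label.item()]}: {score.item():.2f}", fill="red")

image.save("yolos_tiny_detections.jpg")  # arbitrary output path
```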