# YOLOS (small-sized) model fine-tuned for seal detection
This model is based on hustvl/yolos-small and fine-tuned on our own seal image dataset.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss.
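The DETR loss first matches each predicted box to a ground-truth box via bipartite matching, then computes classification and box losses over the matched pairs. As a minimal sketch of that matching step, here is a brute-force assignment over a toy cost matrix (the function name and the cost values are illustrative; real implementations use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def match(cost):
    """Return the prediction-to-target assignment minimizing total cost.

    cost[i][j] is the matching cost between prediction i and target j.
    Brute force over all permutations; fine for toy sizes only.
    """
    n = len(cost)
    best = min(
        permutations(range(n)),
        key=lambda p: sum(cost[i][p[i]] for i in range(n)),
    )
    return list(best)

# Toy 3x3 cost matrix: prediction 0 is cheapest for target 2,
# prediction 1 for target 0, prediction 2 for target 1.
cost = [[0.9, 0.8, 0.1],
        [0.2, 0.7, 0.6],
        [0.5, 0.1, 0.9]]
print(match(cost))  # → [2, 0, 1]
```

In DETR-style training the cost combines classification probability and box distance; unmatched predictions are supervised toward a "no object" class.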
## How to use
Here is how to use this model:

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image

# load your seal image (replace the placeholder with an actual path)
image = Image.open("xxxxxxxxxxxxx")

feature_extractor = YolosFeatureExtractor.from_pretrained('fantast/yolos-small-finetuned-for-seal')
model = YolosForObjectDetection.from_pretrained('fantast/yolos-small-finetuned-for-seal')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# the model predicts class logits and bounding boxes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
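Note that `outputs.pred_boxes` follows the DETR convention: each box is `(center_x, center_y, width, height)`, normalized to `[0, 1]` relative to the image size. A minimal sketch of converting one such box to absolute `(xmin, ymin, xmax, ymax)` pixel coordinates (the function name is illustrative, not part of the library):

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    """Convert a normalized (cx, cy, w, h) box, as found in
    outputs.pred_boxes, to absolute (xmin, ymin, xmax, ymax) pixels."""
    cx, cy, w, h = box
    xmin = (cx - w / 2) * img_w
    ymin = (cy - h / 2) * img_h
    xmax = (cx + w / 2) * img_w
    ymax = (cy + h / 2) * img_h
    return xmin, ymin, xmax, ymax

# a box centered in a 1000x500 image, 20% wide and 40% tall
print(cxcywh_to_xyxy((0.5, 0.5, 0.2, 0.4), 1000, 500))
# → approximately (400.0, 150.0, 600.0, 350.0)
```

In recent transformers versions, the image processor's `post_process_object_detection` method performs this conversion (plus score thresholding) for a whole batch of outputs.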
## Training data
This model is based on hustvl/yolos-small and was fine-tuned on our own seal image dataset, which consists of 118k annotated images for training and 5k for validation.
## BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
  author     = {Yuxin Fang and
                Bencheng Liao and
                Xinggang Wang and
                Jiemin Fang and
                Jiyang Qi and
                Rui Wu and
                Jianwei Niu and
                Wenyu Liu},
  title      = {You Only Look at One Sequence: Rethinking Transformer in Vision through
                Object Detection},
  journal    = {CoRR},
  volume     = {abs/2106.00666},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.00666},
  eprinttype = {arXiv},
  eprint     = {2106.00666},
  timestamp  = {Fri, 29 Apr 2022 19:49:16 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```