Special tokens: purpose

#59
by tdeboissiere - opened

Hello!

I was curious about the special tokens (e.g. `<od>`, `</od>`, `<ocr>`, `</ocr>`) in the Florence2Processor.

These tokens don't seem to be used anywhere, so what is their purpose?
Related: how was Florence-2 initially trained, say, for object detection? (Were the inputs to the model the image + a text prompt such as "Locate the objects with category name in the image." + the category + the actual location of the objects in the image?)

Those special tokens are for Object detection. They can be used to separate class names in the input prompt.

Wouldn't it make more sense for special tokens like `<od>` and `</od>` to mark the start and end of an object detection task, and similarly for `<ocr>`/`</ocr>` and so on?

So at training time, a data point for object detection would look like this:

  • image tokens, followed by
  • `<od>dog<loc_100><loc_200><loc_200><loc_300>cat<loc_200><loc_400><loc_400><loc_600></od>`

while a data point for captioning would look like this:

  • image tokens, followed by
  • `<cap>my cool caption</cap>`
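To make the guessed format above concrete, here is a minimal sketch of how such a detection target string might be assembled at training time. The token names (`<od>`, `<loc_N>`) and the quantization of pixel coordinates into 1000 bins are assumptions for illustration, not confirmed Florence-2 internals:

```python
# Hypothetical sketch: build an object-detection target sequence in the
# format speculated above. Assumes coordinates are quantized into 1000
# discrete <loc_N> bins; this is a guess, not the actual implementation.

def quantize(coord: float, size: int, bins: int = 1000) -> int:
    """Map a pixel coordinate to a discrete location bin index."""
    return min(bins - 1, int(coord / size * bins))

def od_target(boxes, image_w: int, image_h: int) -> str:
    """boxes: list of (label, x1, y1, x2, y2) tuples in pixels."""
    parts = ["<od>"]
    for label, x1, y1, x2, y2 in boxes:
        locs = [
            quantize(x1, image_w), quantize(y1, image_h),
            quantize(x2, image_w), quantize(y2, image_h),
        ]
        parts.append(label + "".join(f"<loc_{l}>" for l in locs))
    parts.append("</od>")
    return "".join(parts)

print(od_target([("dog", 100, 200, 200, 300)], 1000, 1000))
# → <od>dog<loc_100><loc_200><loc_200><loc_300></od>
```

A captioning target would be even simpler under the same assumption: just `<cap>` + caption text + `</cap>`.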

And if that's what was done at training time, why doesn't processing_florence2.py automatically prepend those special tokens at inference time?

Same doubt. I don't think they're mapping task tags to those special tokens; rather, the model seems to know it should perform object detection from the natural-text prompt corresponding to the tag.
