Is the pretrained model cased or uncased?
#6 by nediaz2 - opened
It seems like the tokenizer is not lowercasing the input string, so I assume the model is cased?
The vocab is indeed cased. LayoutLMv3's tokenizer is based on BPE (byte-pair encoding), similar to RoBERTa's.
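A minimal sketch of how you could verify this yourself, assuming the public `microsoft/layoutlmv3-base` checkpoint (the dummy bounding boxes are just placeholders, since LayoutLMv3's tokenizer expects words together with their boxes):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/layoutlmv3-base")

# LayoutLMv3's tokenizer takes pre-split words plus one box per word
words = ["Hello", "world"]
boxes = [[0, 0, 10, 10], [10, 0, 20, 10]]  # dummy boxes for illustration

cased = tok(words, boxes=boxes)["input_ids"]
lower = tok([w.lower() for w in words], boxes=boxes)["input_ids"]

# A cased BPE vocab maps "Hello" and "hello" to different token ids
print(cased)
print(lower)
print(cased != lower)  # expected: True
```

If the two id sequences differ, the tokenizer preserved the original casing, which is the behavior described above.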