Label for id 517 does not match the original imagenet-1k dataset
In the original imagenet-1k dataset, the label for id 517 is "crane", just as for id 134. In this dataset, I guess to avoid duplicates, label 517 is renamed to `crane2`. Meanwhile, models hosted on the Hub use the original labels, where both classes are named "crane". This results in the `label2id` mapping being wrong, with `"crane": 517`. See for example https://huggingface.co/microsoft/resnet-50/blob/main/config.json .
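For reference, a minimal sketch to reproduce the discrepancy (assuming you are authenticated for the gated `imagenet-1k` dataset and that its label column is named `label`; only metadata is fetched, no images):

```python
from datasets import load_dataset_builder
from transformers import AutoConfig

# Label names as defined by this dataset (metadata only, no image download).
dataset_names = load_dataset_builder("imagenet-1k").info.features["label"].names

# Label mapping shipped with a model trained on the original labels.
config = AutoConfig.from_pretrained("microsoft/resnet-50")

for idx in (134, 517):
    print(idx, "dataset:", dataset_names[idx], "| model config:", config.id2label[idx])

# Because both classes are named "crane" in the model config, label2id collapses them:
print(config.label2id["crane"])  # 517 -- the bird class 134 is unreachable by name
```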
I see two solutions:
- edit this dataset's feature names to use `"crane"` instead of `"crane2"`. This is the simplest solution and faithful to the original dataset, but it is not perfect either, due to the point above.
- do massive PRs on image classification models tagged with `imagenet-1k` to modify their `label2id` and `id2label` (a rough sketch of this follows below). In this case, we should warn on the dataset card that we use a custom label different from the original.
We could also do nothing and just warn about the discrepancy, but I don't think that is ideal.
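For illustration, here is roughly what option 2 would involve for a single model: rebuild `id2label`/`label2id` from this dataset's (deduplicated) label names so the mapping stays bijective, then open a PR with the updated config. The model and dataset names are just the example from above, and actually opening the Hub PR is omitted.

```python
from datasets import load_dataset_builder
from transformers import AutoConfig

# This dataset's label names, with the "crane2" rename that avoids the collision.
names = load_dataset_builder("imagenet-1k").info.features["label"].names

config = AutoConfig.from_pretrained("microsoft/resnet-50")
config.id2label = dict(enumerate(names))                     # 517 -> "crane2"
config.label2id = {name: i for i, name in enumerate(names)}  # bijective again

# Save locally; the resulting config.json would be the content of the PR.
config.save_pretrained("resnet-50-patched-labels")
```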
I don't think "unifying" these labels to align them with a faulty training data preparation is a good idea, as these labels represent different objects (`crane` is a bird and `crane2` is a mechanical crane); they should rather be "fixed" in the original dataset's label mapping, as explained in https://www.adeveloperdiary.com/data-science/computer-vision/how-to-prepare-imagenet-dataset-for-image-classification/.
Perhaps using the synsets for the label names would make more sense (as TensorFlow Datasets does, and as suggested by @rwightman), since the human-readable version has collisions. However, this dataset (script) is quite old, so it is probably best not to introduce this breaking change.
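To illustrate the suggestion: with synset ids as label names, the two classes no longer collide. The synset ids below are the usual ImageNet-1k ones for the two crane classes, quoted from memory, so verify them against the official mapping before relying on them.

```python
# Friendly names collide, synsets do not.
id2synset = {134: "n02012849",   # crane (the bird)
             517: "n03126707"}   # crane (the hoisting machine)

label2id = {synset: idx for idx, synset in id2synset.items()}
assert len(label2id) == len(id2synset)  # bijective, unlike a mapping keyed on two "crane"s
```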
The current use of the `label2id` and `id2label` scheme for image datasets in general is problematic. The only 'labels' that should be used for such mappings must be unique and unambiguous; the language-specific (i.e. English in this case) friendly names are not that. For ImageNet, synsets should be used here. For ImageNet-22k the issue is much worse: if you look at the number of conflicts, you simply can't have a meaningful `label2id` without changing a lot of names.
Three levels of ids/labels/descriptions cover all use cases I'm aware of:
- model classifier indices (`id` in this scheme) - contiguous, 0- or 1-based integers from the classifier in the model (typically only 1-based if there is a 0 'background/empty' class)
- label or class ids - unique and unambiguous alphanumeric class labels (could be another set of integers, as you'd see in say COCO annotation ids, that don't necessarily align with the classifier)
- class descriptions - natural language friendly names and/or longer descriptions of each class
There is a bidirectional mapping between model class indices <-> labels that is specific to each model trained on a given dataset.
There is a unidirectional mapping from labels -> friendly or detailed names/descriptions (could be localized to multiple languages) that is common for all models trained on a given dataset.
For ImageNet, synsets are the labels. For OpenImages, MIDs (from the Freebase / Google knowledge base) are the labels. For COCO, integers are technically the labels (they are non-contiguous)...
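As a concrete sketch of the two mappings just described, again using the two ImageNet crane classes (names and the small helper below are illustrative, not an existing API):

```python
# Per-model, bidirectional: classifier index <-> label (synset).
index2label = {134: "n02012849", 517: "n03126707"}
label2index = {label: idx for idx, label in index2label.items()}

# Per-dataset, one-directional, shared by all models: label -> friendly description.
label2description = {
    "n02012849": "crane (bird)",
    "n03126707": "crane (hoisting machine)",
}

def describe(classifier_index: int) -> str:
    """Map a raw classifier output index to a human-readable class description."""
    return label2description[index2label[classifier_index]]

print(describe(517))  # crane (hoisting machine)
```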
Note that the differing naming across organizations and uses makes the above extremely confusing; see below for just a few variations, and note the overlap based on use. As an org we should settle on a consistent usage.
- classifier indices, class indices, class ids
- labels, class labels, label names, class ids, class names (sigh)
- class descriptions, names