Multilabel binary classification
I am wondering how to best model the scenario where I want a binary classifier whose positive class can carry multiple labels (e.g., this article is about sports OR politics OR science). Should I use one meta-label ("sports or politics or science"), or should I use three separate labels and sum up the probabilities?
It sounds like the snippet below is what you're looking for: score each label independently with multi_label=True and then define a threshold for the probabilities. You could also put everything into one hypothesis/label, which would make inference faster, but it could perform worse and you get less fine-grained information about which of the labels applies (a sketch of that variant follows further down).
#!pip install transformers[sentencepiece]
from transformers import pipeline

text = "Angela Merkel is a politician in Germany and leader of the CDU"

# Each candidate class is inserted into the template to form its own hypothesis
hypothesis_template = "This text is about {}"
classes_verbalized = ["sports", "politics", "science"]

zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0")

# multi_label=True scores each hypothesis independently, so the scores do not sum to 1
output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=True)
print(output)
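If you want a single binary decision on top of these per-label scores, one option is to threshold each score and call the text positive when any label clears the threshold; with multi_label=True the scores are independent entailment probabilities, so summing them is not meaningful. A minimal sketch, assuming a threshold of 0.5 that you would tune on held-out data:

threshold = 0.5
# Keep every label whose independent score clears the threshold
predicted_labels = [
    label for label, score in zip(output["labels"], output["scores"])
    if score >= threshold
]
# Binary decision: positive if at least one of the labels applies
is_positive = len(predicted_labels) > 0
print(predicted_labels, is_positive)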
See also this response: https://huggingface.co/MoritzLaurer/bge-m3-zeroshot-v2.0/discussions/2
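For comparison, the meta-label variant mentioned above would collapse the three classes into a single combined hypothesis. A rough sketch, where the combined wording is just one possible verbalization:

# Hypothetical meta-label variant: one combined hypothesis, one score to threshold
meta_classes = ["sports or politics or science"]
meta_output = zeroshot_classifier(text, meta_classes, hypothesis_template=hypothesis_template, multi_label=True)
print(meta_output)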
Based on some internal experiments, putting everything into one meta-label unfortunately degrades performance significantly. I will keep the labels separated with multi_label=True. Thanks!