
Frederik Gaasdal Jensen • Henry Stoll • Sippo Rossi • Raghava Rao Mukkamala

UNHCR Hate Speech Detection Model

This transformer-based model detects hateful and offensive speech in English text. Its primary use case is detecting hate speech targeted at refugees. The model is based on roberta-uncased and was fine-tuned on 12 abusive-language datasets.

The model was developed in collaboration between UNHCR (the UN Refugee Agency) and Copenhagen Business School.

  • F1-score on test set (10% of the overall dataset): 81%
  • HateCheck score: 90.3%

Labels

{
  0: "Normal",
  1: "Offensive",
  2: "Hate speech",
}
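
Usage

A minimal usage sketch with the transformers library, assuming the model is published on the Hugging Face Hub. The repository id "UNHCR/hatespeech-detection" below is a placeholder; replace it with the actual id of this model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "UNHCR/hatespeech-detection"  # placeholder repository id

# Label mapping from the card above
LABELS = {0: "Normal", 1: "Offensive", 2: "Hate speech"}

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

text = "Refugees are welcome in our community."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(LABELS[predicted_id])  # e.g. "Normal"
```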