Dataset Card for HatemojiCheck
Content Warning
This dataset contains examples of hateful language.
Dataset Description and Details
- Repository: https://github.com/HannahKirk/Hatemoji
- Paper: https://arxiv.org/abs/2108.05921
- Point of Contact: [email protected]
Dataset Summary
HatemojiCheck is a test suite of 3,930 test cases covering seven functionalities of emoji-based hate and six identities. HatemojiCheck contains the text for each test case and its gold-standard label from majority agreement of three annotators. We provide labels by target of hate. HatemojiCheck can be used to evaluate the robustness of hate speech classifiers to constructions of emoji-based hate.
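As a sketch of that evaluation workflow, the snippet below scores every test case with a Hugging Face text-classification pipeline and reports accuracy per functionality. The Hub dataset identifier, model name and label mapping are assumptions; substitute the ones you are actually working with.

```python
# Minimal sketch: probe a classifier with HatemojiCheck, accuracy per functionality.
# The dataset ID, model name and LABEL_MAP are placeholders, not fixed values.
from collections import defaultdict

from datasets import load_dataset
from transformers import pipeline

suite = load_dataset("HannahRoseKirk/HatemojiCheck", split="test")  # assumed Hub ID
clf = pipeline("text-classification", model="your-org/your-hate-speech-model")

# Map your model's output labels onto the gold-label vocabulary.
LABEL_MAP = {"LABEL_0": "non-hateful", "LABEL_1": "hateful"}

scores = defaultdict(lambda: [0, 0])  # functionality -> [n_correct, n_total]
for case in suite:
    pred = LABEL_MAP.get(clf(case["text"])[0]["label"])
    scores[case["functionality"]][0] += int(pred == case["label_gold"])
    scores[case["functionality"]][1] += 1

for func, (n_correct, n_total) in sorted(scores.items()):
    print(f"{func}: {n_correct / n_total:.1%} ({n_total} cases)")
```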
Supported Tasks
Hate Speech Detection
Languages
English
Dataset Structure
Data Instances
3,930 test cases
Data Fields
case_id: The unique ID of the test case (assigned to each of the 3,930 cases generated)
templ_id: The unique ID of the template (original=.0, identity perturbation=.1, polarity perturbation=.2, no-emoji perturbation=.3) from which the test case was generated.
test_grp_id: The ID of the set of templates (original, identity perturbation, polarity perturbation, no-emoji perturbation) from which the test case was generated.
text: The text of the test case.
target: Where applicable, the protected group targeted or referenced by the test case. We cover six protected groups in the test suite: women, trans people, gay people, Black people, disabled people and Muslims.
functionality: The shorthand for the functionality tested by the test case.
set: Whether the test case is an original statement, an identity perturbation, a polarity perturbation or a no-emoji perturbation.
label_gold: The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
unrealistic_flags: The number of annotators (out of three) who flagged the test case as unrealistic.
included_in_test_suite: Indicator for whether the test case is included in the final HatemojiCheck test suite. All 3,930 test cases are included.
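For orientation, here is a minimal loading sketch using the `datasets` library that prints each of the fields above for one record; the Hub identifier is an assumption, so adjust it to the repository you are pulling from.

```python
# Sketch: load HatemojiCheck and inspect the fields listed above.
from datasets import load_dataset

suite = load_dataset("HannahRoseKirk/HatemojiCheck", split="test")  # assumed Hub ID
print(len(suite))  # 3,930 test cases

case = suite[0]
for field in ("case_id", "templ_id", "test_grp_id", "text", "target",
              "functionality", "set", "label_gold",
              "unrealistic_flags", "included_in_test_suite"):
    print(f"{field}: {case[field]}")
```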
Data Splits
All of HatemojiCheck is designated for testing models, so only a test split is provided.
Dataset Creation
Curation Rationale
The purpose of HatemojiCheck is to evaluate the performance of black-box models against varied constructions of emoji-based hate. To construct HatemojiCheck, we hand-crafted 3,930 short-form English-language texts using a template-based method for group identities and slurs. Each test case exemplifies one functionality and is associated with a binary gold-standard label (hateful versus non-hateful). All 3,930 cases were labeled by a trained team of three annotators, who could also flag examples that were unrealistic. Any test cases with multiple disagreements or flags were replaced with alternative templates and re-issued for annotation to improve the quality of examples in the final set of test cases.
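To make the template-based method concrete, here is a hypothetical sketch of how identity placeholders expand into test cases. The templates, placeholder token and labels are invented for illustration and are not the actual HatemojiCheck templates.

```python
# Illustrative template expansion; templates and labels are invented, not the real ones.
from itertools import count

IDENTITIES = ["women", "trans people", "gay people",
              "Black people", "disabled people", "Muslims"]

TEMPLATES = [
    # (template text, gold label shared by all cases from that template)
    ("I really admire [IDENTITY]", "non-hateful"),
    ("[IDENTITY] make me 🤮", "hateful"),
]

case_id = count(1)
cases = [
    {
        "case_id": next(case_id),
        "text": template.replace("[IDENTITY]", identity),
        "target": identity,
        "label_gold": label,
    }
    for template, label in TEMPLATES
    for identity in IDENTITIES
]

print(len(cases), "cases generated; first:", cases[0])
```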
Source Data
Initial Data Collection and Normalization
Based on the literature, we define a list of potentially hateful emoji and words, and use Twitter's Streaming API to search for the Cartesian products of emoji-emoji and emoji-word pairs over a two-week period. To identify different forms of emoji-based hate, we apply a grounded theory approach to a sample of 3,295 tweets, splitting out distinctive categories and recursively selecting sub-categories until all key parts of the data are captured and the framework is "saturated".
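A sketch of the query-construction step is below, assuming placeholder seed lists; the study's actual emoji and keyword lists, and the Streaming API calls themselves, are not reproduced here.

```python
# Build search queries as Cartesian products of emoji-emoji and emoji-word pairs.
# EMOJI and WORDS are placeholder seeds, not the study's actual lists.
from itertools import product

EMOJI = ["🐍", "🤮", "🐷"]
WORDS = ["hate", "disgusting"]

emoji_emoji = [f"{a} {b}" for a, b in product(EMOJI, repeat=2)]
emoji_word = [f"{e} {w}" for e, w in product(EMOJI, WORDS)]

queries = emoji_emoji + emoji_word
print(len(queries), "query strings, e.g.:", queries[:3])
```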
Who are the source language producers?
All test cases were hand-crafted by the lead author, who is a native English-speaking researcher at a UK university with extensive subject matter expertise in online harms. The test cases are in English. This choice was motivated by the researchers' and annotators' expertise, and to maximize HatemojiCheck's applicability to previous hate speech detection studies, which are predominantly conducted on English-language data. We discuss the limitations of restricting HatemojiCheck to one language and suggest that future work should prioritize expanding the test suite to other languages.
Annotations
Annotation process
To validate the gold-standard labels assigned to each test case, we recruited three annotators with prior experience on hate speech projects. Annotators were given extensive guidelines, test tasks and training sessions, which included examining real-world examples of emoji-based hate from Twitter. We followed guidance for protecting annotator well-being. There were two iterative rounds of annotation. In the first round, each annotator labeled all 3,930 test cases as hateful or non-hateful, and had the option to flag unrealistic entries. Test cases with any disagreement or unrealistic flags were reviewed by the study authors (n=289). One-on-one interviews were conducted with annotators to identify dataset issues versus annotator error. From 289 test cases, 119 were identified as ambiguous or unrealistic, replaced with alternatives and re-issued to annotators for labeling. No further issues were raised. We measured inter-annotator agreement using Randolph's Kappa, obtaining a value of 0.85 for the final set of test cases, which indicates "almost perfect agreement".
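For reference, here is a minimal sketch of Randolph's free-marginal kappa for three annotators and two categories (hateful/non-hateful); the toy ratings below are invented, not taken from the dataset.

```python
# Randolph's free-marginal kappa: chance agreement is 1/k for k categories.
from collections import Counter

def randolph_kappa(ratings, n_categories=2):
    """ratings: one list of annotator labels per item."""
    per_item = []
    for item in ratings:
        n = len(item)
        counts = Counter(item)
        # Proportion of agreeing annotator pairs for this item.
        per_item.append(sum(c * (c - 1) for c in counts.values()) / (n * (n - 1)))
    p_obs = sum(per_item) / len(per_item)
    p_chance = 1 / n_categories
    return (p_obs - p_chance) / (1 - p_chance)

toy = [["hateful", "hateful", "hateful"],
       ["hateful", "hateful", "non-hateful"],
       ["non-hateful", "non-hateful", "non-hateful"]]
print(f"{randolph_kappa(toy):.2f}")  # 0.56 on this toy data
```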
Who are the annotators?
We recruited a team of three annotators, who worked for two weeks in May 2021 and were paid £16 per hour. All annotators were female and between 30 and 39 years old. One had an undergraduate degree, one a taught graduate degree and one a post-graduate research degree. There were three nationalities (Argentinian, British and Iraqi), two ethnicities (White and Arab) and three religious affiliations (Catholic, Muslim and none). One annotator was a native English speaker and the others were non-native but fluent. All annotators used emoji and social media more than once per day. All annotators had seen others targeted by abuse online, and one had been targeted personally.
Personal and Sensitive Information
HatemojiCheck contains synthetic statements, so it includes no personal information. It does, however, contain harmful examples of emoji-based hate, which could be disturbing or damaging to view.
Considerations for Using the Data
Social Impact of Dataset
HatemojiCheck contains challenging emoji examples on which commercial solutions and state-of-the-art transformer models have been shown to fail. Malicious actors could take inspiration for bypassing current detection systems on internet platforms, or in principle train a generative hate speech model. However, the test suite also helps to evaluate a model's weaknesses to emoji-based hate, so it can be used to mitigate harm to victims before a model is deployed.
Discussion of Biases
HatemojiCheck only contains test cases targeting six identities: women, trans people, gay people, disabled people, Black people and Muslims. It is thus biased towards evaluating hate directed at these targets. Additionally, HatemojiCheck was motivated by an empirical study of English-language tweets. Emoji usage varies significantly across cultures, countries and demographics, so there may be biases towards Western, English-language use of emoji.
Other Known Limitations
While inspired by real-world instances of emoji-based hate, HatemojiCheck contains synthetic, hand-crafted test cases. These test cases are designed to be a "minimum performance standard" against which to hold models accountable. However, because the test cases are designed to have one "clear, gold-standard label" they may be easier to predict than more nuanced, complex and real-world instances of emoji-based hate.
Additional Information
Dataset Curators
The dataset was created by the lead author (Hannah Rose Kirk), then validated by the other authors and three annotators.
Licensing Information
Creative Commons Attribution 4.0 International Public License. For full detail see: https://github.com/HannahKirk/Hatemoji/blob/main/LICENSE
Citation Information
If you use this dataset, please cite our paper: Kirk, H. R., Vidgen, B., Röttger, P., Thrush, T., & Hale, S. A. (2021). Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.
@article{kirk2021hatemoji,
title={Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate},
author={Kirk, Hannah Rose and Vidgen, Bertram and R{\"o}ttger, Paul and Thrush, Tristan and Hale, Scott A},
journal={arXiv preprint arXiv:2108.05921},
year={2021}
}
Contributions
Thanks to @HannahKirk for adding this dataset.