  - split: multiclass_test
    path: data/multilang_multiclass_dev-*
---

# COVID-19 Infodemic Multilingual Dataset

This repository contains a multilingual dataset related to the COVID-19 infodemic, annotated with fine-grained labels. The dataset is curated to address questions of interest to journalists, fact-checkers, social media platforms, policymakers, and the general public. It includes tweets in Arabic, Bulgarian, Dutch, and English, covering both binary classification (misinformation detection) and multiclass classification (different types of infodemic content).

## Table of Contents
- [Dataset Overview](#dataset-overview)
- [Languages and Splits](#languages-and-splits)
- [File Formats](#file-formats)
- [Annotations](#annotations)
- [Dataset Examples](#dataset-examples)
- [Data Statistics](#data-statistics)
- [License](#license)
- [Citation](#citation)

## Dataset Overview

The dataset consists of tweets related to COVID-19, categorized under two tasks:

1. **Binary Classification**: detecting whether a tweet contains misinformation.
2. **Multiclass Classification**: classifying the tweet into specific infodemic categories such as conspiracy theories, harmful content, or false cures.

### Languages and Splits

The dataset includes the following languages, each with train, development (dev), and test splits:

- Arabic
- Bulgarian
- Dutch
- English

In addition to the individual language datasets, a **multilang** directory contains a multilingual dataset in which tweets from all of the above languages are combined, in both the binary and multiclass formats.

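If you access the dataset through the Hugging Face Hub (as set up in the YAML header above), individual splits can be loaded with the `datasets` library. The sketch below is illustrative only: the repository id is an assumption, and the split name mirrors the config fragment visible above; if the repository defines several configurations, pass the configuration name as the second argument to `load_dataset`.

```python
from datasets import get_dataset_config_names, load_dataset

REPO_ID = "QCRI/COVID-19-disinformation"  # hypothetical Hub id -- check the actual repository

# List the available configurations (e.g. per-language or per-task subsets).
print(get_dataset_config_names(REPO_ID))

# Load one split; "multiclass_test" mirrors a split name from the YAML config above.
# If several configurations exist, name one: load_dataset(REPO_ID, "<config>", split=...).
ds = load_dataset(REPO_ID, split="multiclass_test")
print(ds)     # number of rows and column names
print(ds[0])  # first annotated example
```
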
### File Formats

The dataset is provided in TSV (Tab-Separated Values) format. Each file contains tweet IDs, labels for the seven questions (Q1-Q7), and the binary/multiclass annotations. The actual tweet text and associated metadata are not included, for privacy reasons.

### Directory Structure

- **README.md**: This file
- **arabic/**, **bulgarian/**, **dutch/**, **english/**: Directories containing the language-specific datasets for both binary and multiclass classification.
- **multilang/**: A directory containing the multilingual version of the dataset.

Each language directory and the multilingual directory include three splits:
- `train`
- `dev`
- `test`

The `*_binary_*` files correspond to binary classification, while the `*_multiclass_*` files correspond to multiclass classification.

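The raw TSV files can also be read directly with pandas. The path below is a hypothetical example that follows the naming convention just described; check the repository for the exact file names and column headers.

```python
import pandas as pd

# Hypothetical path following the <lang>/*_binary_* naming convention described above.
df = pd.read_csv("english/covid19_english_binary_train.tsv", sep="\t")

print(df.shape)    # rows x columns (tweet IDs plus Q1-Q7 / task labels)
print(df.columns)  # inspect the actual column names before further processing
```
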
## Annotations

The dataset contains labels for the following seven questions (Q1-Q7), each covering a different aspect of the tweets:

1. **Is the tweet understandable?**
   - Labels: Yes, No, Not sure
   - Evaluates whether the tweet's content is understandable.

2. **Does the tweet contain false information?**
   - Labels: Definitely no, Probably no, Not sure, Probably yes, Definitely yes
   - Assesses the likelihood that the tweet contains false information.

3. **Will the tweet's claim be of interest to the general public?**
   - Labels: Definitely no, Probably no, Not sure, Probably yes, Definitely yes
   - Evaluates whether the tweet's claim is relevant or interesting to the public.

4. **Is the tweet harmful?**
   - Labels: Definitely no, Probably no, Not sure, Probably yes, Definitely yes
   - Assesses whether the tweet might cause harm to individuals, society, or businesses.

5. **Should a professional fact-checker verify the claim?**
   - Labels: No need, Too trivial, Not urgent, Very urgent, Not sure
   - Evaluates whether the tweet should be reviewed by professional fact-checkers.

6. **Why might the tweet be harmful?**
   - Labels: No harm, Panic, Hate speech, Rumor, Conspiracy, etc.
   - Categorizes the nature of the potential harm the tweet might cause.

7. **Should this tweet get the attention of a government entity?**
   - Labels: Not interesting, Calls for action, Blames authorities, etc.
   - Determines whether the tweet should be flagged for government attention.

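For programmatic use, the label inventories above can be collected into a small lookup table, as in the sketch below. The dictionary keys and exact label strings are assumptions (the Q6 and Q7 inventories are abbreviated with "etc." above), so verify them against the released files.

```python
# Illustrative label inventories for the seven annotation questions (Q1-Q7).
# NOTE: key names are hypothetical and the Q6/Q7 lists are truncated ("etc." above);
# check the released files for the full label sets and exact spellings.
LIKERT = ["Definitely no", "Probably no", "Not sure", "Probably yes", "Definitely yes"]

QUESTION_LABELS = {
    "q1": ["Yes", "No", "Not sure"],                                            # understandable?
    "q2": LIKERT,                                                               # false information?
    "q3": LIKERT,                                                               # of interest to the public?
    "q4": LIKERT,                                                               # harmful?
    "q5": ["No need", "Too trivial", "Not urgent", "Very urgent", "Not sure"],  # needs fact-checking?
    "q6": ["No harm", "Panic", "Hate speech", "Rumor", "Conspiracy"],           # why harmful? (truncated)
    "q7": ["Not interesting", "Calls for action", "Blames authorities"],        # government attention? (truncated)
}
```
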
## Dataset Examples

An example from the dataset:

> **Tweet**: "Please don’t take hydroxychloroquine (Plaquenil) plus Azithromycin for #COVID19 UNLESS your doctor prescribes it. Both drugs affect the QT interval of your heart and can lead to arrhythmias and sudden death, especially if you are taking other meds or have a heart condition."

**Labels**:
- Q1: Yes
- Q2: No, probably contains no false information
- Q3: Yes, definitely of interest
- Q4: No, probably not harmful
- Q5: Yes, very urgent
- Q6: No, not harmful
- Q7: Yes, calls for action

## Data Statistics

- **Arabic**: 5,000 binary samples, 4,000 multiclass samples
- **Bulgarian**: 3,000 binary samples, 2,500 multiclass samples
- **Dutch**: 4,000 binary samples, 3,500 multiclass samples
- **English**: 6,000 binary samples, 5,000 multiclass samples
- **Multilang**: Combined data from all languages, provided in both binary and multiclass splits.

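Once a split is loaded (see the loading sketch above), these counts can be cross-checked and label distributions inspected. In the sketch below, the repository id, split name, and the assumption that the last column holds the class label are all hypothetical.

```python
from datasets import load_dataset

REPO_ID = "QCRI/COVID-19-disinformation"  # hypothetical Hub id, as above

ds = load_dataset(REPO_ID, split="multiclass_test")
print(ds.num_rows)  # compare against the per-language counts listed above

# Rough label distribution; assumes the last column holds the (multiclass) label.
df = ds.to_pandas()
print(df.iloc[:, -1].value_counts())
```
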
## License

This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

## Citation

If you use this dataset, please cite it as:

```
@inproceedings{alam-etal-2021-fighting-covid,
    title = "Fighting the {COVID}-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society",
    author = "Alam, Firoj and
      Shaar, Shaden and
      Dalvi, Fahim and
      Sajjad, Hassan and
      Nikolov, Alex and
      Mubarak, Hamdy and
      Da San Martino, Giovanni and
      Abdelali, Ahmed and
      Durrani, Nadir and
      Darwish, Kareem and
      Al-Homaid, Abdulaziz and
      Zaghouani, Wajdi and
      Caselli, Tommaso and
      Danoe, Gijs and
      Stolk, Friso and
      Bruntink, Britt and
      Nakov, Preslav",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.56",
    doi = "10.18653/v1/2021.findings-emnlp.56",
    pages = "611--649"
}

@inproceedings{alam2021fighting,
    title = {Fighting the COVID-19 infodemic in social media: A holistic perspective and a call to arms},
    author = {Alam, Firoj and Dalvi, Fahim and Shaar, Shaden and Durrani, Nadir and Mubarak, Hamdy and Nikolov, Alex and Da San Martino, Giovanni and Abdelali, Ahmed and Sajjad, Hassan and Darwish, Kareem and others},
    booktitle = {Proceedings of the International AAAI Conference on Web and Social Media},
    volume = {15},
    pages = {913--922},
    year = {2021}
}
```