---
language:
- en
- pl
- ko
- de
- es
pretty_name: 'SAMSEMO: New dataset for multilingual and multimodal emotion recognition'
task_categories:
- image-classification
- video-classification
tags:
- video
size_categories:
- 10K<n<100K
---
Dataset Card for SAMSEMO
SAMSEMO: New dataset for multilingual and multimodal emotion recognition
Dataset Details
Dataset Sources
- Repository: https://github.com/samsungnlp/samsemo
- Paper: SAMSEMO: New dataset for multilingual and multimodal emotion recognition
Dataset Structure
SAMSEMO/
├── data - zipped directories for each language with jpg, mp4 and wav files
│   └── pkl_files - files in pkl format (each language directory from the data directory after processing to pkl format)
└── metadata - directory with metadata
    ├── samsemo.tsv - metadata file (described below)
    └── splits - txt files with splits (lists of ids) for each language
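A minimal sketch of how the unpacked files might be accessed, assuming the pkl files use standard Python pickle serialization and the split files contain one scene id per line; the concrete file names (train_en.txt, en.pkl) are placeholders, not guaranteed names:

```python
import pickle
from pathlib import Path

# Hypothetical local path after downloading and unzipping the dataset.
root = Path("SAMSEMO")

# Read the scene ids for one split; a plain txt file with one id per line
# is assumed, and "train_en.txt" is a made-up example file name.
split_ids = (root / "metadata" / "splits" / "train_en.txt").read_text().splitlines()

# Load one processed language file from pkl_files; standard Python pickle
# serialization is assumed, and the exact object layout is not documented here.
with open(root / "data" / "pkl_files" / "en.pkl", "rb") as f:
    scenes = pickle.load(f)

print(f"{len(split_ids)} ids in the split, loaded object of type {type(scenes).__name__}")
```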
Annotations
The SAMSEMO metadata file is a .tsv file containing the following columns (a minimal loading sketch follows the list):
- utterance_id – alphanumerical id of the video scene. It consists of the ID of the source video followed by an underscore and the number indicating the scene (utterance taken from a given movie)
- movie_title – the title of the source video, according to the website it was taken from
- movie_link – the link leading to the source video
- source_scene_start, source_scene_stop – the beginning and ending of the scene as determined in the preliminary annotation. The annotators provided time in hh:mm:ss format, without milliseconds. We cut out the scenes, setting the start at the beginning of the first second (ss.00) and the end at the end of the last second (ss.99). Later on, the scenes were adjusted to eliminate redundant fragments.
- language – the language of the scene: EN = English, DE = German, ES = Spanish, PL = Polish, KO = Korean
- sex – sex of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: male, female, other.
- age – approximate age of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: adolescent, adult, elderly.
- race – race of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: asian, black, hispanic, white, other.
- covered_face – label indicating if speaker’s face is partially covered, e.g. by their hands, scarf, face mask etc. No = the face is not covered, Yes = the face is covered
- multiple_faces – label indicating if there is one person or more shown in the scene. No = one person, Yes = multiple people.
- emotion_1_annotator_1, emotion_2_annotator_1 – emotion labels assigned to the scene by the first annotator.
- emotion_1_annotator_2, emotion_2_annotator_2 – emotion labels assigned to the scene by the second annotator.
- emotion_1_annotator_3, emotion_2_annotator_3 – emotion labels assigned to the scene by the third annotator.
- aggregated_emotions – final emotions assigned to the video scene. If two or three annotators assigned a certain label to the scene, this label is included in the final aggregation and is therefore present in this column.
- annotator_1, annotator_2, annotator_3 – anonymized IDs of the annotators.
- transcript – the text of the utterance from the scene. It is the output of an ASR system, subsequently verified manually.
- translation_de, translation_en, translation_es, translation_ko, translation_pl – the translation of the text into the other languages used in this dataset. Note that this was done by a machine translation engine and has not been manually verified.
- duration – the duration of the scene in the following format: hh:mm:ss.ms
- movie_type – the type of the source video from which the scene was taken. Possible categories: advertisement, debate, documentary, interview, lecture, monologue, movie, news, speech, stand-up, theatrical play, vlog, web or TV show, workout.
- license – the license under which we share the video scene. Note that the metadata are shared under the CC BY-NC-SA 4.0 license (see DISCLAIMER).
- author – the author of the video, identified by us to the best of our knowledge on the basis of the data provided on the websites from which the videos were taken.
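For illustration, a hedged sketch of loading the metadata with pandas and filtering it by the columns described above; the value casing ("EN", "No") follows the descriptions above, while the separator inside aggregated_emotions is an assumption to verify against the actual file:

```python
import pandas as pd

# Load the tab-separated metadata table described above.
meta = pd.read_csv("SAMSEMO/metadata/samsemo.tsv", sep="\t")

# Keep English scenes with a single visible person, using the label values
# documented above ("EN" for language, "No" for multiple_faces).
en_single = meta[(meta["language"] == "EN") & (meta["multiple_faces"] == "No")]

# Split the aggregated emotion labels into a list per scene.
# A comma separator is an assumption; check the file before relying on it.
en_single = en_single.assign(
    emotions=en_single["aggregated_emotions"].str.split(",")
)

print(en_single[["utterance_id", "emotions", "transcript"]].head())
```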
DISCLAIMER
- Please note that the metadata provided for each scene include labels referring to the gender of the speakers. The annotators were asked to provide such labels so that SAMSEMO could be verified in terms of gender representation (males 57.32%, females 42.51%, other 0.17%). The same applies to race information: annotators were asked to label the presumed race of the speakers using a restricted number of labels so that SAMSEMO could be assessed in terms of racial representation (we did not have access to self-reports of the speakers in this regard). We acknowledge that both concepts are shaped by social and cultural circumstances, and the labels provided in SAMSEMO are based on the subjective perceptions and individual experience of the annotators. Thus, the metadata provided should be approached very carefully in future studies.
- The movie license information provided in SAMSEMO has been collected with due diligence. All video material is shared under its original licenses. However, if any video materials included in the SAMSEMO dataset infringe your copyright by any means, please send us a takedown notice containing the movie title(s) and movie link(s). Please also include a statement, made by you under penalty of perjury, that the information in your notice is accurate and that you are the copyright owner or authorized to act on the copyright owner's behalf.
- All SAMSEMO metadata (emotion annotation, transcript and speaker information) are shared under the CC BY-NC-SA 4.0 license.
Citation
@inproceedings{samsemo24_interspeech,
title = {SAMSEMO: New dataset for multilingual and multimodal emotion recognition},
author = {Pawel Bujnowski and Bartlomiej Kuzma and Bartlomiej Paziewski and Jacek Rutkowski and Joanna Marhula and Zuzanna Bordzicka and Piotr Andruszkiewicz},
year = {2024},
booktitle = {Interspeech 2024},
pages = {2925--2929},
doi = {10.21437/Interspeech.2024-212},
}