
This dataset was taken from the creators' GitHub repository and converted for my own studying needs.

Dusha dataset

Dusha is a bi-modal corpus suitable for speech emotion recognition (SER) tasks. The dataset consists of about 300,000 audio recordings of Russian speech, together with their transcripts and emotion labels. The corpus contains approximately 350 hours of data. Four basic emotions that typically appear in a dialogue with a virtual assistant were selected: Happiness (Positive), Sadness, Anger, and Neutral.
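A quick sanity check of the figures above, plus a hypothetical label mapping. This is a sketch only: the label order is an assumption, not confirmed by this card, so inspect the dataset's label feature after loading it with the `datasets` library.

```python
# Back-of-envelope check of the corpus statistics quoted above:
# ~350 hours spread over ~300,000 recordings.
TOTAL_HOURS = 350
NUM_RECORDINGS = 300_000
avg_clip_seconds = TOTAL_HOURS * 3600 / NUM_RECORDINGS  # roughly 4 s per clip

# Hypothetical id-to-emotion mapping for the four classes; the actual
# order is an assumption -- check the dataset's features after loading
# it with datasets.load_dataset("KELONMYOSA/dusha_emotion_audio").
EMOTIONS = ["neutral", "positive", "sad", "angry"]

def id_to_emotion(label_id: int) -> str:
    """Map an integer class id to its assumed emotion name."""
    return EMOTIONS[label_id]

print(f"average clip length: {avg_clip_seconds:.1f} s")
```

The average clip length of about 4 seconds is consistent with short single-turn utterances directed at a virtual assistant.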

License

English Version

Russian Version

Authors

  • Artem Sokolov
  • Fedor Minkin
  • Nikita Savushkin
  • Nikolay Karpov
  • Oleg Kutuzov
  • Vladimir Kondratenko
