---
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: label
    dtype: string
  - name: audio
    dtype: audio
  - name: city
    dtype: string
  - name: location_id
    dtype: string
  splits:
  - name: train
    num_bytes: 11755015136.34
    num_examples: 6122
  - name: test
    num_bytes: 4834872627.026
    num_examples: 2518
  download_size: 15955243030
  dataset_size: 16589887763.366001
---
Dataset Card for "TUT-urban-acoustic-scenes-2018-development-16bit"
Dataset Description
- Homepage: https://zenodo.org/record/1228142
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)
Dataset Summary
The TUT Urban Acoustic Scenes 2018 development dataset consists of 10-second audio segments from 10 acoustic scenes:
Airport - airport
Indoor shopping mall - shopping_mall
Metro station - metro_station
Pedestrian street - street_pedestrian
Public square - public_square
Street with medium level of traffic - street_traffic
Travelling by a tram - tram
Travelling by a bus - bus
Travelling by an underground metro - metro
Urban park - park
Each acoustic scene has 864 segments (144 minutes of audio), for a total of 24 hours of audio. This is the 16-bit version of the original dataset.
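The totals above can be verified with quick arithmetic (10 scenes × 864 segments × 10 seconds each):

```python
SCENES = 10
SEGMENTS_PER_SCENE = 864
SEGMENT_SECONDS = 10

# Audio per scene, in minutes
minutes_per_scene = SEGMENTS_PER_SCENE * SEGMENT_SECONDS / 60
# Total audio across all scenes, in hours
total_hours = SCENES * SEGMENTS_PER_SCENE * SEGMENT_SECONDS / 3600

print(minutes_per_scene, total_hours)  # 144.0 24.0
```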
The dataset was collected in Finland by Tampere University of Technology between 02/2018 and 03/2018. The data collection received funding from the European Research Council under ERC Grant Agreement 637422 EVERYSOUND.
Supported Tasks and Leaderboards
audio-classification
: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name.- The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard
- which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Dataset Structure
Data Instances
{'file_name': 'audio/airport-barcelona-0-0-a.wav',
'label': 'airport',
'audio': {'path': 'airport-barcelona-0-0-a.wav',
'array': array([-2.13623047e-04, -1.37329102e-04, -2.13623047e-04, ...,
3.05175781e-05, -6.10351562e-05, -6.10351562e-05]),
'sampling_rate': 48000},
'city': 'barcelona',
'location_id': '0'}
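An instance like the one above can be consumed as a plain Python mapping. A minimal sketch, using a zero-filled stub array in place of decoded audio (in practice the `audio` field is decoded by the `datasets` library when the dataset is loaded):

```python
import numpy as np

# Stub instance mirroring the structure shown above
example = {
    "file_name": "audio/airport-barcelona-0-0-a.wav",
    "label": "airport",
    "audio": {
        "path": "airport-barcelona-0-0-a.wav",
        "array": np.zeros(480_000, dtype=np.float32),  # 10 s at 48 kHz
        "sampling_rate": 48_000,
    },
    "city": "barcelona",
    "location_id": "0",
}

# Segment duration in seconds, derived from the waveform and sampling rate
duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(example["label"], duration)  # airport 10.0
```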
Data Fields
file_name
: name of the audio file
label
: acoustic scene label from the 10-class set
location_id
: city-location id (e.g. '0')
city
: name of the city where the audio was recorded
Filenames of the dataset have the following pattern:
[scene label]-[city]-[location id]-[segment id]-[device id].wav
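This pattern can be split back into its fields with a few lines of Python. A sketch, assuming none of the individual fields contains a hyphen (which holds for the scene labels and cities listed above, since multi-word labels use underscores):

```python
import os

def parse_filename(file_name):
    """Split a segment filename into its metadata fields.

    Pattern: [scene label]-[city]-[location id]-[segment id]-[device id].wav
    """
    stem = os.path.splitext(os.path.basename(file_name))[0]
    scene, city, location_id, segment_id, device_id = stem.split("-")
    return {
        "scene": scene,
        "city": city,
        "location_id": location_id,
        "segment_id": segment_id,
        "device_id": device_id,
    }

print(parse_filename("audio/airport-barcelona-0-0-a.wav"))
# {'scene': 'airport', 'city': 'barcelona', 'location_id': '0', 'segment_id': '0', 'device_id': 'a'}
```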
Data Splits
A suggested training/test partitioning of the development set is provided so that results reported with this dataset are comparable. The partitioning ensures that all segments recorded at the same location fall into the same subset, either training or test. Within that constraint, it aims for a 70/30 ratio between the number of segments in the training and test subsets, selecting the closest available split.
Scene class | Train / Segments | Train / Locations | Test / Segments | Test / Locations |
---|---|---|---|---|
Airport | 599 | 15 | 265 | 7 |
Bus | 622 | 26 | 242 | 10 |
Metro | 603 | 20 | 261 | 9 |
Metro station | 605 | 28 | 259 | 12 |
Park | 622 | 18 | 242 | 7 |
Public square | 648 | 18 | 216 | 6 |
Shopping mall | 585 | 16 | 279 | 6 |
Street, pedestrian | 617 | 20 | 247 | 8 |
Street, traffic | 618 | 18 | 246 | 7 |
Tram | 603 | 24 | 261 | 11 |
Total | 6122 | 203 | 2518 | 83 |
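The segment counts in the table can be cross-checked against the split sizes in the metadata and the target 70/30 ratio:

```python
# Per-scene (train segments, test segments) from the table above
splits = {
    "airport": (599, 265), "bus": (622, 242), "metro": (603, 261),
    "metro_station": (605, 259), "park": (622, 242),
    "public_square": (648, 216), "shopping_mall": (585, 279),
    "street_pedestrian": (617, 247), "street_traffic": (618, 246),
    "tram": (603, 261),
}

train_total = sum(tr for tr, _ in splits.values())
test_total = sum(te for _, te in splits.values())
ratio = train_total / (train_total + test_total)

print(train_total, test_total, round(ratio, 3))  # 6122 2518 0.709
```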
Dataset Creation
Source Data
Initial Data Collection and Normalization
The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For each acoustic scene, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute audio recordings were captured at a few (2-4) slightly different positions within the selected location. The collected audio material was cut into segments of 10 seconds in length.
The equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder, using a 48 kHz sampling rate and 24-bit resolution. During recording, the microphones were worn in the ears of the recording person, and head movement was kept to a minimum.
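The segmentation step described above can be sketched as follows. This is a minimal illustration with a synthetic stand-in recording, not the curators' actual processing pipeline; it assumes non-overlapping segments and drops any incomplete tail:

```python
import numpy as np

SR = 48_000          # sampling rate used for the recordings
SEG_LEN = 10 * SR    # samples per 10-second segment

def cut_segments(recording):
    """Cut a mono recording into non-overlapping 10 s segments, dropping the tail."""
    n = len(recording) // SEG_LEN
    return [recording[i * SEG_LEN:(i + 1) * SEG_LEN] for i in range(n)]

# A 2.5-minute stand-in recording yields 15 full segments
segments = cut_segments(np.zeros(150 * SR, dtype=np.float32))
print(len(segments), len(segments[0]) / SR)  # 15 10.0
```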
Annotations
Annotation process
Post-processing of the recorded audio addressed the privacy of recorded individuals and possible errors in the recording process. Some interference from mobile phones is audible, but it is considered part of the real-world recording process.
Who are the annotators?
- Ronal Bejarano Rodriguez
- Eemi Fagerlund
- Aino Koskimies
- Toni Heittola
Personal and Sensitive Information
The material was screened for content, and segments containing close microphone conversation were eliminated.
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
- Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)
- Annamaria Mesaros ([email protected], http://www.cs.tut.fi/~mesaros/)
- Tuomas Virtanen ([email protected], http://www.cs.tut.fi/~tuomasv/)
Licensing Information
Copyright (c) 2018 Tampere University of Technology and its licensors. All rights reserved. Permission is hereby granted, without written agreement and without license or royalty fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document and composed of audio and metadata. This grant is only for experimental and non-commercial purposes, provided that the copyright notice in its entirety appears in all copies of this Work, and the original source of this Work (Audio Research Group from Laboratory of Signal Processing at Tampere University of Technology) is acknowledged in any publication that reports research using this Work. Any commercial use of the Work or any part thereof is strictly prohibited. Commercial use includes, but is not limited to:
- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
Citation Information
[More Information Needed]
Contributions
Thanks to @wtdog for adding this dataset.