jnemecek committed on
Commit 312113e
1 Parent(s): 017754a

create dataset card

Files changed (1)
  1. README.md +50 -1
README.md CHANGED
@@ -22,4 +22,53 @@ task_ids:
 - other-other-keyword-spotting
 ---
 
- ![sil-ai logo](https://s3.amazonaws.com/moonup/production/uploads/1661440873726-6108057a823007eaf0c7bd10.png)
+ # Dataset Card for Audio Keyword Spotting
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+
+ ## Dataset Description
+
+ - **Homepage:** https://sil.ai.org
+ - **Point of Contact:** [SIL AI email](mailto:[email protected])
+ - **Data Sources:** [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), [trabina GitHub](https://github.com/wswu/trabina)
+
+ ![sil-ai logo](https://s3.amazonaws.com/moonup/production/uploads/1661440873726-6108057a823007eaf0c7bd10.png)
+
+ ## Dataset Summary
+
+ The initial version of this dataset is a subset of [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), which is derived from Common Voice, designed for easier loading. Specifically, the subset consists of `ml_spoken_words` files filtered by the names and placenames transliterated in Bible translations, as found in [trabina](https://github.com/wswu/trabina). For our initial experiment, we have focused only on English, Spanish, and Indonesian, three languages whose name spellings are frequently used in other translations. We anticipate growing this dataset in the future to include additional keywords and other languages as the experiment progresses.
+
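+ A rough sketch of the filtering described above, using the `datasets` library; the keyword set and the `ml_spoken_words` configuration name below are illustrative assumptions, not the actual build script:
+
+ ```python
+ from datasets import load_dataset
+
+ # Illustrative keyword list; the real subset uses names and placenames from trabina.
+ keywords = {"jerusalem", "david", "maria"}
+
+ # Configuration name "en_opus" is an assumption; see the ml_spoken_words card for actual configs.
+ mswc = load_dataset("MLCommons/ml_spoken_words", "en_opus", split="train")
+
+ # Keep only the samples whose keyword appears in the list.
+ subset = mswc.filter(lambda example: example["keyword"].lower() in keywords)
+ ```
+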
+ ### Data Fields
+
+ * file: string, relative audio path inside the archive
+ * is_valid: whether the sample is valid
+ * language: language of the instance
+ * speaker_id: unique ID of the speaker. Can be "NA" if the instance is invalid
+ * gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
+ * keyword: word spoken in the current sample
+ * audio: a dictionary containing the relative path to the audio file,
+ the decoded audio array, and the sampling rate.
+ Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically
+ decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling
+ a large number of audio files can take a significant amount of time,
+ so always query the sample index before the "audio" column,
+ i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` (see the example below).
+
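+ A minimal loading sketch for the note above; the Hub dataset name `sil-ai/audio-keyword-spotting` and the configuration name `en` are assumptions used for illustration:
+
+ ```python
+ from datasets import load_dataset
+
+ # Repository and configuration names are assumptions for illustration.
+ dataset = load_dataset("sil-ai/audio-keyword-spotting", "en", split="train")
+
+ sample = dataset[0]       # query the sample index first ...
+ audio = sample["audio"]   # ... then access the "audio" column
+ print(sample["keyword"], audio["sampling_rate"], audio["array"].shape)
+ ```
+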
+ ### Data Splits
+
+ The data for each language is split into train / validation / test parts.
+
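+ A short sketch of inspecting the splits, under the same naming assumptions as above:
+
+ ```python
+ from datasets import load_dataset
+
+ # Omitting `split=` returns a DatasetDict with one entry per split.
+ splits = load_dataset("sil-ai/audio-keyword-spotting", "en")
+ print({name: len(ds) for name, ds in splits.items()})  # train / validation / test sizes
+ ```
+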
+ ## Supported Tasks
+
+ Keyword spotting and spoken term search.
+
+ ### Personal and Sensitive Information
+
+ The dataset consists of recordings from people who have donated their voices online.
+ By using this dataset, you agree not to attempt to determine the identity of the speakers.
+
+ ### Licensing Information
+
+ The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
+ research and commercial applications in keyword spotting and spoken term search.
+