wanchichen committed
Commit aa38e97 • 1 Parent(s): 66a15b1
Update README.md
README.md CHANGED
@@ -4055,7 +4055,7 @@ It can be used for language identification, spoken language modelling, or speech
 
 MMS ulab v2 is a reproduced and extended version of the MMS ulab dataset originally proposed in [Scaling Speech Technology to 1000+ Languages](https://arxiv.org/abs/2305.13516), covering more languages and containing more data.
 This dataset includes the raw unsegmented audio in a 16kHz single channel format. It can be segmented into utterances with a voice activity detection (VAD) model such as [this one](https://github.com/wiseman/py-webrtcvad).
-We use 6700 hours of MMS ulab v2 (post-segmentation) to train [XEUS](), a multilingual speech encoder for 4000+ languages.
+We use 6700 hours of MMS ulab v2 (post-segmentation) to train [XEUS](https://huggingface.co/espnet/xeus), a multilingual speech encoder for 4000+ languages.
 
 For more details about the dataset and its usage, please refer to our [paper]().
 
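The README text above points to py-webrtcvad for splitting the raw 16 kHz single-channel audio into utterances. The snippet below is a minimal sketch of that segmentation step, not part of the dataset card: the file name `example_16k_mono.wav`, the VAD aggressiveness of 2, and the "end a segment after ~300 ms of silence" rule are all illustrative assumptions.

```python
# Minimal sketch: segment 16 kHz mono PCM audio into utterances with webrtcvad.
# Assumptions (not from the dataset card): input file name, aggressiveness=2,
# and a ~300 ms silence threshold for closing a segment.
import wave

import webrtcvad  # pip install webrtcvad

SAMPLE_RATE = 16000                                # dataset audio is 16 kHz, single channel
FRAME_MS = 30                                      # webrtcvad accepts 10, 20, or 30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2   # 16-bit PCM -> 2 bytes per sample

vad = webrtcvad.Vad(2)                             # aggressiveness 0 (least) .. 3 (most)

with wave.open("example_16k_mono.wav", "rb") as wf:
    assert wf.getframerate() == SAMPLE_RATE and wf.getnchannels() == 1
    pcm = wf.readframes(wf.getnframes())

# Slice the raw PCM into fixed-size frames and mark each as speech / non-speech.
frames = [pcm[i:i + FRAME_BYTES]
          for i in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES)]
speech_flags = [vad.is_speech(f, SAMPLE_RATE) for f in frames]

# Group consecutive speech frames into utterances; close a segment after
# 10 non-speech frames (about 300 ms of silence) -- an illustrative threshold.
segments, current, silence = [], [], 0
for frame, is_speech in zip(frames, speech_flags):
    if is_speech:
        current.append(frame)
        silence = 0
    elif current:
        silence += 1
        if silence >= 10:
            segments.append(b"".join(current))
            current, silence = [], 0
if current:
    segments.append(b"".join(current))

print(f"{len(segments)} utterances found")
```

The VAD aggressiveness and the silence threshold are the main knobs; the repository linked in the README also includes a fuller example that pads segment boundaries with a ring buffer.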