Update about.md
#8
by martillopartbsc - opened
about.md
CHANGED
@@ -1,16 +1,18 @@
## 📄 About
Natural and efficient TTS in Catalan: using Matcha-TTS with the Catalan language.

-Here you'll be able to find all the information regarding our

## Table of Contents
<details>
<summary>Click to expand</summary>

- [General Model Description](#general-model-description)
-- [Adaptation to Catalan](#adaptation-to-catalan)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [Samples](#samples)
- [Citation](#citation)
- [Additional Information](#additional-information)

@@ -18,44 +20,14 @@ Here you'll be able to find all the information regarding our model, which has b

## General Model Description

-
-The encoder part processes input sequences of phonemes and, together with a phoneme duration predictor, outputs averaged acoustic features. And the decoder,
-which is essentially a U-Net backbone based on the Transfomer architecture, predicts the refined spectrogram.
-The model is trained with optimal-transport conditional flow matching.
-This yields an ODE-based decoder capable of generating high output quality in fewer synthesis steps.

-
-Unlike other typical GAN-based vocoders, Vocos does not model audio samples in the time domain.
-Instead, it generates spectral coefficients, facilitating rapid audio reconstruction through inverse Fourier transform.
-The goal of this model is to provide an alternative to hifi-gan that is faster and compatible with the acoustic output of several TTS models.
-This version is tailored for the Catalan language, as it was trained only on Catalan speech datasets.


-## Adaptation to Catalan
-
-The original Matcha-TTS model excels in English, but to bring its capabilities to Catalan, a multi-step process was undertaken. Firstly, we fine-tuned the model from English to Catalan central, which laid the groundwork for understanding the language's nuances. This first fine-tuning was done using two datasets:
-
-* [Our version of the openslr-slr69 dataset.](https://huggingface.co/datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised)
-
-* A studio-recorded dataset of central catalan, which will soon be published.
-
-This soon to be published dataset also included recordings of three different dialects:
-
-* Valencian
-
-* Occidental
-
-* Balear
-
-With a male and a female speaker for each dialect.
-
-Then, through fine-tuning for these specific Catalan dialects, the model adapted to regional variations in pronunciation and cadence. This meticulous approach ensures that the model reflects the linguistic richness and cultural diversity within the Catalan-speaking community, offering seamless communication in previously underserved dialects.
-
-In addition to training the Matcha-TTS model for Catalan, integrating the eSpeak phonemizer played a crucial role in enhancing the naturalness and accuracy of generated speech. A TTS (Text-to-Speech) system comprises several components, each contributing to the overall quality of synthesized speech. The first component involves text preprocessing, where the input text undergoes normalization and linguistic analysis to identify words, punctuation, and linguistic features. Next, the text is converted into phonemes, the smallest units of sound in a language, through a process called phonemization. This step is where the eSpeak phonemizer shines, as it accurately converts Catalan text into phonetic representations, capturing the subtle nuances of pronunciation specific to Catalan. You can find the espeak version we used [here](https://github.com/projecte-aina/espeak-ng/tree/dev-ca).
-
-After phonemization, the phonemes are passed to the synthesis component, where they are transformed into audible speech. Here, the Matcha-TTS model takes center stage, generating high-quality speech output based on the phonetic input. The model's training, fine-tuning, and adaptation to Catalan ensure that the synthesized speech retains the natural rhythm, intonation, and pronunciation patterns of the language, thereby enhancing the overall user experience.
-
-Finally, the synthesized speech undergoes post-processing, where prosodic features such as pitch, duration, and emphasis are applied to further refine the output and make it sound more natural and expressive. By integrating the eSpeak phonemizer into the TTS pipeline and adapting it for Catalan, alongside training the Matcha-TTS model for the language, we have created a comprehensive and effective system for generating high-quality Catalan speech. This combination of advanced techniques and meticulous attention to linguistic detail is instrumental in bridging language barriers and facilitating communication for Catalan speakers worldwide.

## Intended Uses and Limitations

@@ -66,9 +38,6 @@ its output into a speech waveform.
The quality of the samples can vary depending on the speaker.
This may be due to the sensitivity of the model in learning specific frequencies and also due to the quality of samples for each speaker.

-
-
-
## Samples
* Female samples
<div class="table-wrapper">
@@ -221,10 +190,70 @@ This may be due to the sensitivity of the model in learning specific frequencies
</table>
</div>

## Citation

If this code contributes to your research, please cite the work:

```
@misc{mehta2024matchatts,
title={Matcha-TTS: A fast TTS architecture with conditional flow matching},

## 📄 About
Natural and efficient TTS in Catalan: using Matcha-TTS with the Catalan language.

+Here you'll be able to find all the information regarding our models Matxa 🍵 and alVoCat 🥑, which have been trained using deep learning. If you want specific information on how to train these models, you can find it [here](https://huggingface.co/BSC-LT/matcha-tts-cat-multispeaker) and [here](https://huggingface.co/BSC-LT/vocos-mel-22khz-cat), respectively. The code we've used is also on GitHub [here](https://github.com/langtech-bsc/Matcha-TTS/tree/dev-cat).

## Table of Contents
<details>
<summary>Click to expand</summary>

- [General Model Description](#general-model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [Samples](#samples)
+- [Main components](#main-components)
+- [The model in detail](#the-model-in-detail)
+- [Adaptation to Catalan](#adaptation-to-catalan)
- [Citation](#citation)
- [Additional Information](#additional-information)


## General Model Description

+The significance of open-source text-to-speech (TTS) technologies for minority languages cannot be overstated. These technologies democratize access to TTS solutions by providing a framework for communities to develop and adapt models according to their linguistic needs. This is why we have developed different open-source TTS solutions in Catalan, using an ensemble of technologies.

+Firstly, we created a [TTS model for central Catalan](https://huggingface.co/BSC-LT/matcha-tts-cat-multispeaker) by fine-tuning the Matcha-TTS English model. Matcha-TTS is a state-of-the-art model that employs deep learning, a form of AI, to train models that replicate human speech patterns, allowing it to generate lifelike synthetic voices from written text. After that, we fine-tuned this central Catalan model for three other Catalan dialects:

+* Balear
+* North-Occidental
+* Valencian


## Intended Uses and Limitations

The quality of the samples can vary depending on the speaker.
This may be due to the sensitivity of the model in learning specific frequencies and also due to the quality of samples for each speaker.

## Samples
* Female samples
<div class="table-wrapper">

</table>
</div>

+## Main components
+
+Our text-to-speech model tailored for Catalan employs a multi-step process to convert written text into spoken words with accurate pronunciation. These are the steps:
+
+1. Initially, the model analyzes the input text, breaking it down into smaller linguistic units such as words and sentences while identifying any special characters. It then uses our version of eSpeak, a speech phonemizer, to generate phonemes based on the Catalan language's phonetic rules. For each Catalan accent, certain specifically adapted eSpeak rules apply.
+
+2. The Matcha-TTS model converts these phonemes into a mel spectrogram, a visual representation of the spectrum of frequencies of a sound over time.
+
+3. This spectrogram is then fed into [our adaptation of the Vocos vocoder](https://huggingface.co/BSC-LT/vocos-mel-22khz-cat), which synthesizes the speech waveform.
+
+By employing this series of steps, the TTS model ensures accurate pronunciation and natural-sounding Catalan speech output adapted to the nuances of the language. The computation for these steps was performed on MareNostrum 5 at the Barcelona Supercomputing Center and on Finisterrae III at CESGA.
+
+Together, these technologies form a comprehensive TTS solution tailored to the needs of Catalan speakers, exemplifying the power of open-source initiatives in advancing linguistic diversity and inclusivity.
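To make the three steps above concrete, here is a minimal sketch of the pipeline in Python. It is illustrative only: it assumes the `phonemizer` and `vocos` packages are installed, that our eSpeak NG fork is available as the phonemizer backend, and that the Catalan Vocos checkpoint can be loaded with `Vocos.from_pretrained`; `acoustic_model` and `synthesise_catalan` are hypothetical placeholders, not the actual Matcha-TTS inference code.

```python
# Minimal, illustrative sketch of the text -> phonemes -> mel -> waveform pipeline.
# Assumptions: the `phonemizer` and `vocos` packages are installed, eSpeak NG (our
# dev-ca fork) is the phonemizer backend, and `acoustic_model` stands in for the
# fine-tuned Matcha-TTS checkpoint (its actual loading/inference API may differ).
import torch
from phonemizer import phonemize  # step 1: grapheme-to-phoneme conversion via eSpeak
from vocos import Vocos           # step 3: mel spectrogram -> waveform

def synthesise_catalan(text: str, acoustic_model) -> torch.Tensor:
    # 1) Phonemize the input text using the Catalan eSpeak rules.
    phonemes = phonemize(text, language="ca", backend="espeak", strip=True)

    # 2) Predict a mel spectrogram from the phoneme sequence
    #    (placeholder call; the real Matcha-TTS interface differs).
    mel = acoustic_model(phonemes)

    # 3) Reconstruct the audio waveform from the mel spectrogram.
    vocoder = Vocos.from_pretrained("BSC-LT/vocos-mel-22khz-cat")
    return vocoder.decode(mel)
```

In practice, synthesis is run through the Matcha-TTS repository linked above rather than through this simplified function.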
+
+
+## The model in detail
+
+**Matcha-TTS** is an encoder-decoder architecture designed for fast acoustic modelling in TTS.
+On the one hand, the encoder is based on a text encoder and a phoneme duration predictor. Together, they predict averaged acoustic features.
+On the other hand, the decoder is essentially a U-Net backbone inspired by [Grad-TTS](https://arxiv.org/pdf/2105.06337.pdf), which is based on the Transformer architecture.
+In the latter, replacing 2D CNNs with 1D CNNs yields a large reduction in memory consumption and faster synthesis.
+
+**Matcha-TTS** is a non-autoregressive model trained with optimal-transport conditional flow matching (OT-CFM).
+This yields an ODE-based decoder capable of generating high output quality in fewer synthesis steps than models trained using score matching.
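For readers who want the objective spelled out, the OT-CFM training target can be written as below. This is a sketch in our own notation, following the flow-matching formulation this model builds on: x_1 is a mel-spectrogram data sample, x_0 is Gaussian noise, sigma_min is a small constant, and v_theta is the learned vector field; none of these symbols come from the model card itself.

```latex
% OT-CFM sketch: straight-line conditional paths from noise x_0 ~ N(0, I) to data x_1,
% and a simple regression target for the learned vector field v_theta.
x_t = \bigl(1 - (1 - \sigma_{\min})\,t\bigr)\,x_0 + t\,x_1,
\qquad
u_t(x_t \mid x_1) = x_1 - (1 - \sigma_{\min})\,x_0,
\qquad
\mathcal{L}_{\mathrm{CFM}} =
\mathbb{E}_{t,\,x_0,\,x_1}\,
\bigl\lVert v_\theta(x_t, t) - u_t(x_t \mid x_1) \bigr\rVert^2 .
```

At inference time, the decoder integrates the learned ODE dx/dt = v_theta(x, t) from noise to a mel spectrogram in a small number of steps.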
+
+## Adaptation to Catalan
+
+The original Matcha-TTS model excels in English, but to bring its capabilities to Catalan, a multi-step process was undertaken. Firstly, we fine-tuned the model from English to central Catalan, which laid the groundwork for understanding the language's nuances. This first fine-tuning was done using three datasets:
+
+* [Our version of the openslr-slr69 dataset.](https://huggingface.co/datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised)
+
+* A studio-recorded dataset of central Catalan, which will soon be published.
+
+* [Our version of the Festcat dataset.](https://huggingface.co/datasets/projecte-aina/festcat_trimmed_denoised)
+
+This soon-to-be-published dataset also included recordings of three different dialects:
+
+* Valencian
+
+* Occidental
+
+* Balear
+
+With a male and a female speaker for each dialect.
+
+Then, through fine-tuning for these specific Catalan dialects, the model adapted to regional variations in pronunciation and cadence. This meticulous approach ensures that the model reflects the linguistic richness and cultural diversity within the Catalan-speaking community, offering seamless communication in previously underserved dialects.
+
+In addition to training the Matcha-TTS model for Catalan, integrating the eSpeak phonemizer played a crucial role in enhancing the naturalness and accuracy of generated speech. A TTS (Text-to-Speech) system comprises several components, each contributing to the overall quality of synthesized speech. The first component involves text preprocessing, where the input text undergoes normalization and linguistic analysis to identify words, punctuation, and linguistic features. Next, the text is converted into phonemes, the smallest units of sound in a language, through a process called phonemization. This step is where the eSpeak phonemizer shines, as it accurately converts Catalan text into phonetic representations, capturing the subtle nuances of pronunciation specific to Catalan. You can find the eSpeak version we used [here](https://github.com/projecte-aina/espeak-ng/tree/dev-ca).
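As a quick illustration of this phonemization step in isolation, something like the following could be used (a sketch assuming the `phonemizer` package and our eSpeak NG fork are installed; the sample sentence is ours, not from the model card):

```python
# Hypothetical standalone check of the phonemization step, assuming the `phonemizer`
# package is installed and eSpeak NG (our dev-ca fork) is available as its backend.
from phonemizer import phonemize

ipa = phonemize(
    "Bon dia, com esteu?",  # illustrative Catalan input
    language="ca",          # Catalan voice provided by eSpeak NG
    backend="espeak",
    strip=True,
)
print(ipa)  # phonetic transcription that is fed to the acoustic model
```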
+
+After phonemization, the phonemes are passed to the synthesis component, where they are transformed into audible speech. Here, the Matcha-TTS model takes center stage, generating high-quality speech output based on the phonetic input. The model's training, fine-tuning, and adaptation to Catalan ensure that the synthesized speech retains the natural rhythm, intonation, and pronunciation patterns of the language, thereby enhancing the overall user experience.
+
+Finally, the synthesized speech undergoes post-processing, where prosodic features such as pitch, duration, and emphasis are applied to further refine the output and make it sound more natural and expressive. By integrating the eSpeak phonemizer into the TTS pipeline and adapting it for Catalan, alongside training the Matcha-TTS model for the language, we have created a comprehensive and effective system for generating high-quality Catalan speech. This combination of advanced techniques and meticulous attention to linguistic detail is instrumental in bridging language barriers and facilitating communication for Catalan speakers worldwide.
+
## Citation

If this code contributes to your research, please cite the work:

+```
+@misc{LTU2024,
+title={Natural and efficient TTS in Catalan: using Matcha-TTS with the Catalan language},
+author={The Language Technologies Unit from Barcelona Supercomputing Center},
+year={2024},
+}
+```
```
@misc{mehta2024matchatts,
title={Matcha-TTS: A fast TTS architecture with conditional flow matching},