Update README.md
README.md CHANGED

@@ -5,8 +5,64 @@ tags:
- fairseq
- audio
- text-to-speech
language: fr
datasets:
- common_voice
- css10
---

# tts_transformer-fr-cv7_css10

[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- French
- Single-speaker male voice
- Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10)

## Usage

```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd

# Load the model ensemble, config, and task from the Hugging Face Hub,
# selecting the HiFi-GAN vocoder and full-precision inference.
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/tts_transformer-fr-cv7_css10",
    arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

text = "Bonjour, ceci est un test."

# Convert the text to model input, synthesize, and play the result.
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)

ipd.Audio(wav, rate=rate)
```
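
`ipd.Audio` only renders the result inline in a notebook. To keep the synthesized audio as a file instead, a minimal sketch (assuming the `soundfile` package is installed and that `wav` comes back as a 1-D float waveform tensor on CPU, with `rate` its sample rate):

```python
# Sketch: write the synthesized audio to disk instead of playing it inline.
# Assumes `soundfile` is installed and `wav`/`rate` come from the snippet above.
import soundfile as sf

sf.write("output.wav", wav.cpu().numpy(), rate)
```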

See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md).

## Citation

```bibtex
@inproceedings{wang-etal-2021-fairseq,
    title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
    author = "Wang, Changhan  and
      Hsu, Wei-Ning  and
      Adi, Yossi  and
      Polyak, Adam  and
      Lee, Ann  and
      Chen, Peng-Jen  and
      Gu, Jiatao  and
      Pino, Juan",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-demo.17",
    doi = "10.18653/v1/2021.emnlp-demo.17",
    pages = "143--152",
}
```