legend update
README.md
size_categories:
- n<1K
---

Cloned the GitHub repo for easier viewing and embedding the above table, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525

## Legend

for the above TTS capability table

* Processor ⚡ - Inference done by
  * CPU (CPU**s** = multithreaded) - All models can be run on CPU, so the real-time factor should be below 2.0 to qualify for the CPU tag, though some more leeway can be given if the model supports audio streaming
  * CUDA by *NVIDIA*™
  * ROCm by *AMD*™
* Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
  * [ARPAbet](https://en.wikipedia.org/wiki/ARPABET) - American English focused phonetics
* Insta-clone 👥 - Zero-shot model for quick voice cloning
* Emotion control 🎭 - Able to force an emotional state of the speaker
  * 🎭 <# emotions>
    * 😡 anger
    * 😃 happiness
    * 😭 sadness
    * 😯 surprise
    * 🤫 whispering
    * 😊 friendliness
  * strict insta-clone switch 🎭👥 - cloned on a sample with a specific emotion; may sound different than the normal speaking voice; no ability to go in-between states
  * strict control through prompt 🎭📖 - prompt input parameter
* Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
  * 📖 - Prompt as a separate input parameter
  * 🗣📖 - The prompt itself is also spoken by the TTS; see the [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion)
* Streaming support 🌊 - Can play back audio while it is still being generated
* Speech control 🎚 - Ability to change the pitch, duration etc. of the generated speech
* Voice conversion / Speech-To-Speech support 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
* Longform synthesis - Able to synthesize whole paragraphs
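As an illustrative sketch of the CPU-tag criterion above (not part of the tracker itself; the function names and the streaming-leeway limit of 3.0 are assumptions for the example):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent synthesizing / duration of the audio produced.
    RTF below 1.0 means synthesis is faster than real time."""
    return processing_seconds / audio_seconds


def qualifies_for_cpu_tag(rtf: float, supports_streaming: bool) -> bool:
    # Below 2.0 qualifies outright; streaming models get some leeway
    # (a hypothetical 3.0 cutoff is used here for illustration).
    limit = 3.0 if supports_streaming else 2.0
    return rtf < limit


print(real_time_factor(5.0, 10.0))        # 0.5
print(qualifies_for_cpu_tag(2.5, True))   # True
print(qualifies_for_cpu_tag(2.5, False))  # False
```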
A _null_ value means unfilled/unknown. 🤷‍♂️ Please create pull requests to update the info on the models.
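To illustrate the streaming-support 🌊 entry in the legend: playback can begin while synthesis is still running when the model yields audio in chunks. A minimal sketch with stand-in functions (`synthesize_chunks` and `play` are hypothetical names, not a real TTS API):

```python
import time


def synthesize_chunks(text: str):
    """Stand-in for a streaming TTS model: yields audio chunks as they are ready."""
    for word in text.split():
        time.sleep(0.01)          # pretend each chunk takes some time to synthesize
        yield f"<audio:{word}>"   # a real model would yield PCM bytes here


def play(chunk: str) -> None:
    """Stand-in for an audio sink; a real player would write to the sound device."""
    print("playing", chunk)


# Playback overlaps generation: each chunk is played
# before the next one has even been synthesized.
for chunk in synthesize_chunks("hello streaming world"):
    play(chunk)
```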