Higobeatz committed
Commit 14e1f3b
1 parent: 9b99408

Initial commit

Files changed (1): README.md (+95, −0)
<!-- might put a [width=2000 * height=xxx] img here; this size best fits the git page
<img src="resources/cover.png"> -->
<img src="resources/dreamvoice.png">

# DreamVoice: Text-guided Voice Conversion and Generation

--------------------

## Introduction

## Demo

🎵 Listen to [examples](https://mydemo.page)

## Model Usage

To load the models, first install the required packages:

```bash
pip install -r requirements.txt
```

Then use the models with the following code:

- Plugin mode (DreamVG + ReDiffVC)

```python
from dreamvoice import DreamVoice

# Initialize DreamVoice in plugin mode on a CUDA device
dreamvoice = DreamVoice(mode='plugin', device='cuda')
# Describe the target voice
prompt = 'young female voice, sounds young and cute'
# Provide the path to the content audio and generate the converted audio
gen_audio, sr = dreamvoice.genvc('examples/test1.wav', prompt)
# Save the converted audio
dreamvoice.save_audio('gen1.wav', gen_audio, sr)

# Save the speaker embedding if you like the generated voice
dreamvoice.save_spk_embed('voice_stash1.pt')
# Load the saved speaker embedding
dreamvoice.load_spk_embed('voice_stash1.pt')
# Reuse the saved speaker embedding for another audio sample
gen_audio2, sr = dreamvoice.simplevc('examples/test2.wav', use_spk_cache=True)
dreamvoice.save_audio('gen2.wav', gen_audio2, sr)
```
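The save/load speaker-embedding step above is essentially a small file-based cache: a voice you like is stashed once and recalled later without re-running text-guided generation. A minimal sketch of that pattern in plain Python (the helper names `stash_voice`/`recall_voice` are hypothetical and not part of the dreamvoice package, which stores a `.pt` tensor rather than raw bytes):

```python
from pathlib import Path

def stash_voice(stash_dir: Path, name: str, embed: bytes) -> Path:
    """Store an embedding under a human-readable name (stand-in for save_spk_embed)."""
    stash_dir.mkdir(parents=True, exist_ok=True)
    path = stash_dir / f"{name}.pt"
    path.write_bytes(embed)
    return path

def recall_voice(stash_dir: Path, name: str) -> bytes:
    """Fetch a previously stashed embedding by name (stand-in for load_spk_embed)."""
    return (stash_dir / f"{name}.pt").read_bytes()
```

Keeping the cache keyed by name makes it easy to build up a library of generated voices and reuse any of them with `simplevc`.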
- End-to-end mode (DreamVC)

```python
from dreamvoice import DreamVoice

# Initialize DreamVoice in end-to-end mode on a CUDA device
dreamvoice = DreamVoice(mode='end2end', device='cuda')
# Describe the target voice
prompt = 'young female voice, sounds young and cute'
# Provide the path to the content audio and generate the converted audio
gen_end2end, sr = dreamvoice.genvc('examples/test1.wav', prompt)
# Save the converted audio
dreamvoice.save_audio('gen_end2end.wav', gen_end2end, sr)

# Note: end-to-end mode does not support saving speaker embeddings.
# To reuse a voice generated in end-to-end mode, switch back to plugin mode
# and extract the speaker embedding from the generated audio.
dreamvoice = DreamVoice(mode='plugin', device='cuda')
# Use the previously generated file as the speaker audio
gen_end2end2, sr = dreamvoice.simplevc('examples/test2.wav', speaker_audio='gen_end2end.wav')
# Save the new converted audio
dreamvoice.save_audio('gen_end2end2.wav', gen_end2end2, sr)
```

- One-shot Voice Conversion (ReDiffVC)

```python
from dreamvoice import DreamVoice

# Plugin mode also supports traditional one-shot voice conversion
dreamvoice = DreamVoice(mode='plugin', device='cuda')
# Convert using a reference speaker audio instead of a text prompt
gen_tradition, sr = dreamvoice.simplevc('examples/test1.wav', speaker_audio='examples/speaker.wav')
# Save the converted audio
dreamvoice.save_audio('gen_tradition.wav', gen_tradition, sr)
```

## Reference

If you find this code useful for your research, please consider citing:

```bibtex
@article{hai2024dreamvoice,
  title={DreamVoice: Text-Guided Voice Conversion},
  author={Hai, Jiarui and Thakkar, Karan and Wang, Helin and Qin, Zengyi and Elhilali, Mounya},
  journal={arXiv preprint arXiv:2406.16314},
  year={2024}
}
```