형규 송 committed
Commit e0a78a8 (1 parent: 5e12b9f)

embed youtube video (`231f6ba` in https://bitbucket.org/maum-system/cvpr22-demo-gradio)

Files changed (1)
  1. docs/article.md +7 -0
docs/article.md CHANGED
@@ -1,6 +1,13 @@
 
 ## Why learn a new language, when your model can learn it for you?
 
+<div style="max-width: 720px;max-height: 405px;margin: auto;">
+<div style="float: none;clear: both;position: relative;padding-bottom: 56.25%;height: 0;width: 100%">
+<iframe width="720" height="405" src="https://www.youtube.com/embed/toqdD1F_ZsU" title="YouTube video player" style="position: absolute;top: 0;left: 0;width: 100%;height: 100%;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>
+</iframe>
+</div>
+</div>
+
 ### Abstract
 
 Recent studies in talking face generation have focused on building a train-once-use-everywhere model i.e. a model that will generalize from any source speech to any target identity. A number of works have already claimed this functionality and have added that their models will also generalize to any language. However, we show, using languages from different language families, that these models do not translate well when the training language and the testing language are sufficiently different. We reduce the scope of the problem to building a language-robust talking face generation system on seen identities i.e. the target identity is the same as the training identity. In this work, we introduce a talking face generation system that will generalize to different languages. We evaluate the efficacy of our system using a multilingual text-to-speech system. We also discuss the usage of joint text-to-speech system and the talking face generation system as a neural dubber system.
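
The added wrapper follows the common padding-bottom pattern for a responsive 16:9 embed: 405 / 720 = 0.5625, so `padding-bottom: 56.25%` reserves a box at the video's aspect ratio while the absolutely positioned iframe stretches to fill it at any container width. A minimal sketch of that pattern (the video ID is taken from the diff above; the styling is pared down and illustrative, not the exact markup committed here):

```html
<!-- Responsive 16:9 embed: the outer div reserves height via
     padding-bottom (9 / 16 = 56.25%); the iframe is positioned
     absolutely so it fills that reserved box as the page scales. -->
<div style="position: relative; padding-bottom: 56.25%; height: 0;">
  <iframe
    src="https://www.youtube.com/embed/toqdD1F_ZsU"
    style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"
    frameborder="0"
    allowfullscreen>
  </iframe>
</div>
```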