[1912.10128v1] Learning Singing From Speech
In future work, we will focus on reducing the amount of target speech samples for both target singing synthesis and conversion tasks.
We propose an algorithm that is capable of synthesizing a high-quality target speaker's singing voice given only their normal speech samples. The proposed algorithm first integrates speech and singing synthesis into a unified framework, and learns universal speaker embeddings that are shareable between the speech and singing synthesis tasks. Specifically, the speaker embeddings learned from normal speech via the speech synthesis objective are shared with those learned from singing samples via the singing synthesis objective in the unified training framework. This makes the learned speaker embedding a transferable representation for both speaking and singing. We evaluate the proposed algorithm on the singing voice conversion task, where the content of the original singing is rendered with the timbre of another speaker's voice learned purely from their normal speech samples. Our experiments indicate that the proposed algorithm generates high-quality singing voices that sound highly similar to the target speaker's voice given only their normal speech samples. We believe that the proposed algorithm will open up new opportunities for singing synthesis and conversion for broader users and applications.
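The core idea of the shared-embedding training can be illustrated with a toy sketch: a single speaker-embedding vector receives gradient updates from both a speech-synthesis objective and a singing-synthesis objective, so the representation it converges to must serve both tasks. The names below (`EMB_DIM`, `speech_head`, `singing_head`, the squared-error losses) are illustrative stand-ins, not the paper's actual model components.

```python
import numpy as np

# Hypothetical minimal sketch of a shared speaker embedding trained under
# two objectives. The real model uses full synthesis networks; here each
# task is reduced to a linear "head" and a scalar squared-error loss.
rng = np.random.default_rng(0)
EMB_DIM = 8

# One shared embedding per speaker, updated by BOTH tasks.
speaker_emb = {"spk0": rng.normal(size=EMB_DIM)}

# Stand-ins for the (frozen) speech and singing decoders.
speech_head = rng.normal(size=EMB_DIM)
speech_head /= np.linalg.norm(speech_head)
singing_head = rng.normal(size=EMB_DIM)
singing_head /= np.linalg.norm(singing_head)

def step(emb, head, target, lr=0.1):
    """One gradient step on the toy loss (emb @ head - target)**2."""
    pred = emb @ head
    grad = 2.0 * (pred - target) * head  # d(loss)/d(emb)
    return emb - lr * grad

e = speaker_emb["spk0"]
for _ in range(100):
    e = step(e, speech_head, target=1.0)    # speech-synthesis objective
    e = step(e, singing_head, target=-1.0)  # singing-synthesis objective
speaker_emb["spk0"] = e

# The same vector now satisfies both objectives (predictions near 1 and -1).
print(float(e @ speech_head), float(e @ singing_head))
```

Because the two heads pull on the same vector, the embedding settles into a representation consistent with both objectives, which is the sense in which the learned speaker embedding transfers from speech to singing.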
Fig. 1 (Introduction): Model architecture of DurIAN-4S.
Fig. 2 (Alignment model): Process diagram of training and conversion. The yellow parts are used in the training stage, the green parts in the conversion stage, and the blue parts in both stages. The WaveRNN model is trained separately.