StyleTTS-VC

One-shot voice conversion (VC) aims to convert speech from any source speaker to an arbitrary target speaker given only a single reference utterance from the target speaker. This relies heavily on disentangling speaker identity from speech content, a task that remains challenging. Here, we propose a novel approach to learning disentangled speech representations by transfer learning from style-based text-to-speech (TTS) models. With cycle-consistent and adversarial training, the style-based TTS models can perform transcription-guided one-shot VC with high fidelity and similarity. By training an additional mel-spectrogram encoder through teacher-student knowledge transfer and a novel data augmentation scheme, our approach yields disentangled speech representations without requiring the input text. Subjective evaluations show that our approach significantly outperforms previous state-of-the-art one-shot voice conversion models in both naturalness and similarity.
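To make the teacher-student transfer concrete, below is a minimal PyTorch sketch of the idea: a student mel-spectrogram encoder is trained, on augmented (speaker-perturbed) input, to reproduce content features produced by the frozen TTS text encoder. The architecture, tensor shapes, L1 objective, and `augment` function are all illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MelEncoder(nn.Module):
    """Student encoder mapping a mel-spectrogram to frame-level latent
    features. This architecture is a placeholder, not the paper's."""
    def __init__(self, n_mels: int = 80, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames) -> (batch, hidden, frames)
        return self.net(mel)

def distillation_loss(student, teacher_feats, mel, augment):
    """Teacher-student step: the student sees an augmented mel but must
    reproduce the teacher's text-derived features, which carry speech
    content but no speaker identity."""
    student_feats = student(augment(mel))
    return F.l1_loss(student_feats, teacher_feats)

# Toy usage with random tensors; `augment` stands in for the paper's
# data augmentation (e.g., perturbing speaker characteristics).
if __name__ == "__main__":
    student = MelEncoder()
    mel = torch.randn(4, 80, 120)             # batch of 4 utterances
    teacher_feats = torch.randn(4, 512, 120)  # frame-aligned features from a frozen TTS text encoder
    loss = distillation_loss(student, teacher_feats, mel,
                             augment=lambda m: m + 0.01 * torch.randn_like(m))
    loss.backward()
```

Because the teacher features are derived from text, matching them forces the student encoder to discard speaker information, which is what enables text-free conversion at inference time.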


Any-to-Any Conversion

All of the following audio samples are converted from one speaker to another, both of whom are unseen during training. For a fair comparison with the baseline models, all audio is downsampled to 16 kHz. The input to the VC models was trimmed, so the output length may differ from that of the source.
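The 16 kHz downsampling can be reproduced with standard tooling; here is a minimal sketch using librosa (the exact resampler used for these demos is not specified, and the filenames are placeholders):

```python
import librosa
import soundfile as sf

# Load at the native sampling rate, then resample to 16 kHz.
wav, sr = librosa.load("source_48k.wav", sr=None)
wav_16k = librosa.resample(wav, orig_sr=sr, target_sr=16000)
sf.write("source_16k.wav", wav_16k, 16000)
```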

All utterances are completely unseen during training, and the results are uncurated (NOT cherry-picked) unless otherwise specified.

For more audio samples, please refer to the survey used for our MOS evaluation here. You may have to select some answers at random before proceeding to the next page.

Sample 1 and 2

  Sample 1 (p294 → p326) Sample 2 (p261 → p225)
Source
Target
AGAIN-VC
VQMIVC
YourTTS
StyleTTS-VC (Proposed)

Sample 3 and 4

  Sample 3 (p225 → p245) Sample 4 (p261 → p234)
Source
Target
AGAIN-VC
VQMIVC
YourTTS
StyleTTS-VC (Proposed)

Sample 5 and 6

  Sample 5 (p238 → p347) Sample 6 (p302 → p238)
Source
Target
AGAIN-VC
VQMIVC
YourTTS
StyleTTS-VC (Proposed)

Ablation Study

We present four samples from the ablation study on the VCTK dataset, covering the conditions described in Table 2 of our paper.
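As a point of reference for the "No Cycle Loss" condition, a cycle-consistency term for VC typically has roughly the shape sketched below: convert the source utterance to the target style and back, then penalize deviation from the original mel. The `convert(mel, style)` forward pass and the L1 reconstruction are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch.nn.functional as F

def cycle_consistency_loss(convert, mel_src, style_src, style_tgt):
    """Round-trip conversion: source -> target style -> source style.
    `convert` is a hypothetical VC forward pass taking a mel and a
    style vector and returning a converted mel."""
    mel_fake = convert(mel_src, style_tgt)    # source content, target style
    mel_cycle = convert(mel_fake, style_src)  # convert back to the source style
    return F.l1_loss(mel_cycle, mel_src)
```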

Sample 1 and 2

  Sample 1 Sample 2
Source
Target
Baseline
No Augmentation
No MI Loss
No Cycle Loss
With Latent Loss

Sample 3 and 4

  Sample 3 Sample 4
Source
Target
Baseline
No Augmentation
No MI Loss
No Cycle Loss
With Latent Loss