Automatic Speech Disentanglement for Voice Conversion using Rank Module and Speech Augmentation

This is the demo page for our paper.

Abstract

Voice Conversion (VC) converts the voice of a source speech to that of a target speaker while maintaining the source's linguistic content. Speech can be decomposed mainly into four components: content, timbre, rhythm, and pitch. Unfortunately, most related works take only content and timbre into account, which results in less natural converted speech. Some recent works are able to disentangle speech into several components, but they require laborious bottleneck tuning or various hand-crafted features, each assumed to contain disentangled speech information. In this paper, we propose a VC model that can automatically disentangle speech into four components using only two augmentation functions, without the need for multiple hand-crafted features or laborious bottleneck tuning. The proposed model is straightforward yet efficient, and the empirical results demonstrate that it achieves better performance than the baseline in terms of disentanglement effectiveness and speech naturalness.
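The abstract states that disentanglement is driven by only two augmentation functions, without specifying them here. The sketch below is an illustration only, not the paper's implementation: it shows two common component-perturbing speech augmentations, pitch shifting and time stretching, which alter pitch and rhythm respectively while largely preserving content. The file path, function names, and augmentation ranges are assumptions made for the example; it uses the librosa library.

# Hypothetical sketch of two generic speech augmentation functions of the kind
# used to perturb individual speech components (pitch, rhythm) during training.
# Not the exact functions used in the paper.
import numpy as np
import librosa


def augment_pitch(wav: np.ndarray, sr: int, n_steps: float) -> np.ndarray:
    # Shift pitch by n_steps semitones; rhythm and content are left intact.
    return librosa.effects.pitch_shift(wav, sr=sr, n_steps=n_steps)


def augment_rhythm(wav: np.ndarray, rate: float) -> np.ndarray:
    # Stretch the utterance in time by the given rate; pitch and content are left intact.
    return librosa.effects.time_stretch(wav, rate=rate)


if __name__ == "__main__":
    # "example.wav" is a placeholder path for any utterance resampled to 16 kHz.
    wav, sr = librosa.load("example.wav", sr=16000)
    pitch_aug = augment_pitch(wav, sr, n_steps=np.random.uniform(-4.0, 4.0))
    rhythm_aug = augment_rhythm(wav, rate=np.random.uniform(0.8, 1.25))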

Different Representation Conversion

Each example below pairs one source utterance with one target utterance; for both the baseline and our model, converted audio is provided for every combination of the converted representations (pitch, rhythm, timbre).

Source Audio | Target Audio | Model Type | Converted Audio

Example 1
    Baseline: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre
    Ours: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre

Example 2
    Baseline: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre
    Ours: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre

Example 3
    Baseline: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre
    Ours: Pitch, Rhythm, Timbre, Pitch+Rhythm, Pitch+Timbre, Rhythm+Timbre, Pitch+Rhythm+Timbre