Robust Training of Singing Voice Synthesis Using Prior and Posterior Uncertainty

(ASRU 2025)

Yiwen Zhao, Jiatong Shi, Yuxun Tang, William Chen, Shinji Watanabe.

Abstract

Singing voice synthesis (SVS) has seen remarkable advancements in recent years. However, compared to speech and general audio data, publicly available singing datasets remain limited. In practice, this data scarcity often leads to performance degradation in long-tail scenarios, such as imbalanced pitch distributions or rare singing styles. To mitigate these challenges, we propose uncertainty-based optimization to improve the training process of end-to-end SVS models. First, we introduce differentiable data augmentation into the adversarial training, which operates in a sample-wise manner to increase the prior uncertainty. Second, we incorporate a frame-level uncertainty prediction module that estimates the posterior uncertainty, enabling the model to allocate more learning capacity to low-confidence segments. Empirical results on the Opencpop (Chinese) and Ofuton-P (Japanese) datasets demonstrate that our approach improves performance across multiple evaluation perspectives.
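To illustrate the idea of posterior-uncertainty weighting, below is a minimal sketch of one common formulation (the heteroscedastic loss of Kendall & Gal, 2017): each frame's reconstruction loss is scaled by its predicted log-variance, so high-uncertainty frames are down-weighted while a regularizer keeps the model from inflating uncertainty everywhere. The function name and exact form are illustrative assumptions, not necessarily the loss used in the paper.

```python
import math

def uncertainty_weighted_loss(frame_losses, log_vars):
    """Combine per-frame losses with predicted per-frame log-variances.

    Each frame contributes exp(-s) * l + s, where l is the frame's
    reconstruction loss and s its predicted log-variance: uncertain
    frames are down-weighted, and the additive s term penalizes
    predicting large uncertainty for every frame.
    """
    assert len(frame_losses) == len(log_vars)
    total = sum(math.exp(-s) * l + s
                for l, s in zip(frame_losses, log_vars))
    return total / len(frame_losses)

# With zero predicted uncertainty this reduces to the plain mean loss;
# raising the uncertainty on a hard frame lowers its contribution.
print(uncertainty_weighted_loss([1.0, 3.0], [0.0, 0.0]))  # plain mean: 2.0
print(uncertainty_weighted_loss([1.0, 3.0], [0.0, 1.0]))  # < 2.0
```

In a trained model the log-variances would come from the frame-level uncertainty prediction module rather than being fixed, and the same scalar loss would be backpropagated through both the synthesizer and the uncertainty head.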

Overall workflow


Demo for SVS Experiments on Opencpop.

More Mel-Spectrogram Examples on Opencpop.