ETRI Knowledge Sharing Platform

Details

Journal Paper
Many-to-Many Unsupervised Speech Conversion From Nonparallel Corpora
Cited 2 times in Scopus · Downloaded 38 times
Authors
이윤경, 김현우, 박전규
Publication Date
February 2021
Source
IEEE Access, v.9, pp.27278-27286
ISSN
2169-3536
Publisher
IEEE
DOI
https://dx.doi.org/10.1109/ACCESS.2021.3058382
Project
20ZS1100, Core Technology Research for Self-Improving Integrated Artificial Intelligence System, 송화전
Abstract
We address a nonparallel, data-driven, many-to-many speech modeling and multimodal style conversion method. In this work, we train a speech conversion model for multiple domains rather than for a specific source-target domain pair, and we generate diverse output speech signals from a given source-domain utterance by transferring speech style-related characteristics while preserving its linguistic content. The proposed method comprises a variational autoencoder (VAE)-based many-to-many speech conversion network with a Wasserstein generative adversarial network (WGAN) and a skip-connected autoencoder-based self-supervised learning network. The conversion network is trained by decomposing the spectral features of the input speech into a content factor, which represents domain-invariant information, and a style factor, which represents domain-related information, so that the various speech styles of each domain are estimated automatically; the input speech is then converted to another domain by combining its content factor with the desired target style factor. Diverse, multimodal outputs can be generated by sampling different style factors. We also stabilize model training and improve the quality of the generated outputs by sharing the discriminator between the VAE-based speech conversion network and the self-supervised learning network. We apply the proposed method to speaker conversion and perform perceptual evaluations. Experimental results show that the proposed method achieves high accuracy in the converted spectra, significantly improves the sound quality and speaker similarity of the converted speech, and contributes to stable model training.
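The abstract describes conversion by splitting spectral features into a domain-invariant content factor and a domain-related style factor, then recombining the source content with a style factor sampled for the target domain. The sketch below illustrates only that decompose-and-recombine idea; it is not the authors' implementation. The module names, the frame-level linear layers, and all dimensions (80-dim spectral frames, 128-dim content, 16-dim style) are illustrative assumptions, and the WGAN discriminator and the skip-connected self-supervised network from the paper are omitted.

```python
# Minimal sketch (PyTorch) of content/style factorization for speech conversion.
# All names and sizes are hypothetical, chosen for illustration only.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps spectral features to a domain-invariant content factor."""
    def __init__(self, feat_dim=80, content_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, content_dim),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps spectral features to a Gaussian posterior over the style factor (VAE)."""
    def __init__(self, feat_dim=80, style_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, style_dim)
        self.logvar = nn.Linear(256, style_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Reconstructs spectral features from a (content, style) pair."""
    def __init__(self, feat_dim=80, content_dim=128, style_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, content, style):
        return self.net(torch.cat([content, style], dim=-1))

def reparameterize(mu, logvar):
    """Standard VAE reparameterization: z = mu + sigma * eps."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def convert(src_feats, tgt_feats, content_enc, style_enc, dec):
    """Keep the source content factor, swap in a style factor
    sampled from the target domain's posterior."""
    content = content_enc(src_feats)
    mu, logvar = style_enc(tgt_feats)
    style = reparameterize(mu, logvar)  # resampling yields diverse outputs
    return dec(content, style)
```

Sampling several style vectors in `convert` corresponds to the multimodal, diverse outputs mentioned in the abstract; in the paper this pipeline is additionally trained with a WGAN objective whose discriminator is shared with the self-supervised learning network.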
Keywords
many-to-many SC, non-parallel SC, self-supervised learning, Speech conversion (SC), variational auto-encoder (VAE), Wasserstein generative adversarial network (WGAN)
KSP Suggested Keywords
Auto-Encoder(AE), Conversion method, Data-Driven, High accuracy, Learning network, Linguistic content, Many-to-many, Multiple domains, Source Domain, Speaker conversion, Speaker similarity
This work is available under the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.