ETRI Knowledge Sharing Platform


Personalized Neural Speech Codec
Cited 1 time in Scopus
Authors
Inseon Jang, Haici Yang, Wootaek Lim, Seungkwon Beack, Minje Kim
Issue Date
2024-04
Citation
International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024, pp.991-995
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICASSP48485.2024.10446067
Abstract
In this paper, we propose a personalized neural speech codec, envisioning that personalization can reduce model complexity or improve perceptual speech quality. Although speech codecs are commonly used in settings where only a single talker is involved on each side of the communication, personalizing a codec for a specific user has rarely been explored in the literature. First, we assume speakers can be grouped into smaller subsets based on their perceptual similarity. We further postulate that a group-specific codec can focus on the group's speech characteristics to improve its perceptual quality and computational efficiency. To this end, we first develop a Siamese network that learns speaker embeddings from the LibriSpeech dataset, which are then grouped into underlying speaker clusters. Finally, we retrain the LPCNet-based speech codec baselines on each of the speaker clusters. Subjective listening tests show that the proposed personalization scheme enables model compression while maintaining speech quality; equivalently, at the same model complexity, personalized codecs produce better speech quality.
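To make the pipeline concrete, below is a minimal sketch of the first two stages described in the abstract: learning speaker embeddings with a Siamese objective and grouping them into clusters, each of which would then receive its own retrained codec. The abstract does not specify the architecture, loss, feature dimensions, or number of clusters, so everything here (the small MLP encoder, the contrastive margin loss, the 40-dimensional features, k-means with 4 clusters) is an illustrative assumption, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

class SpeakerEncoder(nn.Module):
    """Maps a frame of speech features to a unit-norm speaker embedding.
    (Hypothetical architecture; the paper does not specify the encoder.)"""
    def __init__(self, feat_dim=40, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

def contrastive_loss(z1, z2, same_speaker, margin=0.5):
    """Siamese-style contrastive loss: pull same-speaker pairs together,
    push different-speaker pairs at least `margin` apart."""
    d = (z1 - z2).pow(2).sum(dim=-1).sqrt()
    pos = same_speaker * d.pow(2)
    neg = (1.0 - same_speaker) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# --- Toy training loop on random stand-in features ---
# (Real input would be feature pairs drawn from LibriSpeech utterances.)
encoder = SpeakerEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(100):
    x1 = torch.randn(32, 40)                    # anchor utterance features
    x2 = torch.randn(32, 40)                    # paired utterance features
    same = torch.randint(0, 2, (32,)).float()   # 1 = same speaker, 0 = different
    loss = contrastive_loss(encoder(x1), encoder(x2), same)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Cluster the learned speaker embeddings ---
with torch.no_grad():
    emb = encoder(torch.randn(200, 40)).numpy()  # one embedding per speaker
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(emb)
# Final stage (not shown): retrain an LPCNet-based codec on each
# cluster's speech, yielding one personalized codec per speaker group.
```

The design intuition, per the abstract, is that a codec trained on a perceptually homogeneous speaker group needs less capacity to model that group's speech than a universal codec needs for all speakers, which is what permits the compression-versus-quality trade-off reported in the listening tests.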
KSP Keywords
Computational efficiency, Model compression, Model complexity, Perceptual quality, Perceptual similarity, Perceptual speech quality, Siamese network, Speaker embeddings, Speech characteristics, Speech codec, Subjective listening tests