ETRI Knowledge Sharing Platform

Cascaded Cross-Module Residual Learning towards Lightweight End-to-End Speech Coding
Cited 36 times in Scopus
Authors
Kai Zhen, Jongmo Sung, Mi Suk Lee, Seungkwon Beack, Minje Kim
Issue Date
2019-09
Citation
International Speech Communication Association (INTERSPEECH) 2019, pp.3396-3400
Publisher
ISCA
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.21437/Interspeech.2019-1816
Abstract
Speech codecs learn compact representations of speech signals to facilitate data transmission. Many recent deep neural network (DNN) based end-to-end speech codecs achieve low bitrates and high perceptual quality at the cost of model complexity. We propose a cross-module residual learning (CMRL) pipeline as a module carrier, with each module reconstructing the residual from its preceding modules. CMRL differs from other DNN-based speech codecs in that, rather than modeling the speech compression problem in a single large neural network, it optimizes a series of less-complicated modules in a two-phase training scheme. The proposed method shows better objective performance than AMR-WB and a state-of-the-art DNN-based speech codec with a similar network architecture. As an end-to-end model, it takes raw PCM signals as input, but it is also compatible with linear predictive coding (LPC), showing better subjective quality at high bitrates than AMR-WB and OPUS. The gain is achieved with only 0.9 million trainable parameters, a significantly less complex architecture than the other DNN-based codecs in the literature.
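
The cascade described in the abstract is easy to sketch in code: each module autoencodes the residual that the preceding modules failed to reconstruct, and the decoded signal is the running sum of all module outputs. The following minimal PyTorch sketch illustrates that idea under simplifying assumptions; the linear autoencoder modules, layer sizes, and single-loss training step are hypothetical placeholders for the paper's 1-D convolutional codec modules, quantization, and two-phase training scheme.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CodingModule(nn.Module):
    # One lightweight autoencoder; the paper's actual modules are 1-D conv
    # codecs with quantization, which this sketch omits.
    def __init__(self, frame_len=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_len, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, frame_len)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class CMRL(nn.Module):
    # Cascade of modules; module i codes the residual left by modules 0..i-1.
    def __init__(self, n_modules=2, frame_len=512, code_dim=64):
        super().__init__()
        self.stages = nn.ModuleList(
            CodingModule(frame_len, code_dim) for _ in range(n_modules))

    def forward(self, x):
        recon = torch.zeros_like(x)
        for stage in self.stages:
            recon = recon + stage(x - recon)  # reconstruct the remaining error
        return recon

model = CMRL()
frames = torch.randn(8, 512)              # batch of raw PCM frames (shape assumed)
loss = F.mse_loss(model(frames), frames)  # phase 1 of the paper trains modules
loss.backward()                           # greedily; phase 2 fine-tunes the cascade

Because each stage only has to model what the earlier stages missed, every module can stay small, which is consistent with the full pipeline fitting in 0.9 million trainable parameters.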
KSP Keywords
AMR-WB, Compact Representation, Complex architecture, Compression problem, Cross-module, Data transmission, Deep neural network(DNN), End to End(E2E), Network Architecture, Perceptual Quality, Speech Signals