ETRI Knowledge Sharing Platform

Harp-Net: Hyper-Autoencoded Reconstruction Propagation for Scalable Neural Audio Coding
Cited 16 times in Scopus
Authors
Darius Petermann, Seungkwon Beack, Minje Kim
Issue Date
2021-10
Citation
Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 2021, pp.1-5
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/WASPAA52581.2021.9632723
Abstract
We propose a novel autoencoder architecture that improves the architectural scalability of general-purpose neural audio coding models. An autoencoder-based codec employs quantization to turn its bottleneck layer activation into bitstrings, a process that hinders information flow between the encoder and decoder parts. To circumvent this issue, we employ additional skip connections between the corresponding pairs of encoder-decoder layers. The assumption is that, in a mirrored autoencoder topology, a decoder layer reconstructs the intermediate feature representation of its corresponding encoder layer. Hence, any additional information propagated directly from the corresponding encoder layer helps the reconstruction. We implement these skip connections as additional autoencoders, each of which is a small codec that compresses the massive data transfer between the paired encoder-decoder layers. We empirically verify that the proposed hyper-autoencoded architecture improves perceptual audio quality compared to an ordinary autoencoder baseline.
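
The abstract describes a mirrored autoencoder whose encoder-decoder layer pairs are bridged by small "hyper" autoencoders acting as compressed, quantized skip connections. The following is a minimal, hypothetical PyTorch sketch of that idea only; the module names (HyperSkip, HarpNetSketch), layer sizes, and the rounding-based straight-through quantizer are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a hyper-autoencoded skip connection, assuming a
# 1-D convolutional codec; not the published Harp-Net architecture.
import torch
import torch.nn as nn


class RoundQuantizer(nn.Module):
    """Toy scalar quantizer: rounds activations, straight-through gradient."""
    def forward(self, x):
        return x + (torch.round(x) - x).detach()


class HyperSkip(nn.Module):
    """Small autoencoder that compresses one encoder layer's feature map
    before propagating it to the paired decoder layer."""
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.enc = nn.Conv1d(channels, bottleneck, kernel_size=1)
        self.quant = RoundQuantizer()
        self.dec = nn.Conv1d(bottleneck, channels, kernel_size=1)

    def forward(self, h):
        return self.dec(self.quant(self.enc(h)))


class HarpNetSketch(nn.Module):
    """Mirrored autoencoder with a quantized bottleneck and one hyper-autoencoded skip."""
    def __init__(self, ch=(1, 16, 32), skip_bottleneck=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(ch[0], ch[1], 9, stride=2, padding=4), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(ch[1], ch[2], 9, stride=2, padding=4), nn.ReLU())
        self.quant = RoundQuantizer()                   # main bitstream bottleneck
        self.skip1 = HyperSkip(ch[1], skip_bottleneck)  # compressed skip for layer pair 1
        self.dec2 = nn.Sequential(
            nn.ConvTranspose1d(ch[2], ch[1], 9, stride=2, padding=4, output_padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose1d(ch[1], ch[0], 9, stride=2, padding=4, output_padding=1)

    def forward(self, x):
        h1 = self.enc1(x)            # encoder layer 1 features
        h2 = self.enc2(h1)           # encoder layer 2 features (bottleneck)
        code = self.quant(h2)        # quantized main code
        d2 = self.dec2(code)         # decoder layer mirroring enc2
        d2 = d2 + self.skip1(h1)     # add compressed skip from the paired encoder layer
        return self.dec1(d2)         # waveform reconstruction


if __name__ == "__main__":
    model = HarpNetSketch()
    x = torch.randn(1, 1, 1024)      # (batch, channels, samples) dummy audio frame
    print(model(x).shape)            # torch.Size([1, 1, 1024])
```

In this sketch the skip path itself passes through a quantizer, so the propagated encoder features also form a (small) bitstream rather than an uncompressed shortcut, which is the property the abstract attributes to the hyper-autoencoded skip connections.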
KSP Keywords
Audio coding, Audio quality, Data transfer, Encoder and Decoder, Feature Representation, Information Flow, Massive Data, additional information, coding models, skip connections