ETRI Knowledge Sharing Platform

Conference Paper
Harp-Net: Hyper-Autoencoded Reconstruction Propagation for Scalable Neural Audio Coding
Cited 4 times in Scopus
Authors
Darius Petermann, 백승권, 김민제
Publication Date
October 2021
Source
Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 2021, pp. 1-5
DOI
https://dx.doi.org/10.1109/WASPAA52581.2021.9632723
Abstract
We propose a novel autoencoder architecture that improves the architectural scalability of general-purpose neural audio coding models. An autoencoder-based codec employs quantization to turn its bottleneck layer activation into bitstrings, a process that hinders information flow between the encoder and decoder parts. To circumvent this issue, we employ additional skip connections between the corresponding pairs of encoder-decoder layers. The assumption is that, in a mirrored autoencoder topology, a decoder layer reconstructs the intermediate feature representation of its corresponding encoder layer. Hence, any additional information directly propagated from the corresponding encoder layer helps the reconstruction. We implement these skip connections in the form of additional autoencoders, each of which is a small codec that compresses the massive data transfer between the paired encoder-decoder layers. We empirically verify that the proposed hyper-autoencoded architecture improves perceptual audio quality compared to an ordinary autoencoder baseline.
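
The architecture described in the abstract can be sketched in a few lines of PyTorch. The sketch below is an illustration based only on this abstract: the class names (HyperSkip, HarpStyleCodec), channel sizes, convolution settings, and the omission of the actual quantization steps are assumptions for readability, not the authors' implementation.

import torch
import torch.nn as nn


class HyperSkip(nn.Module):
    # A small autoencoder that compresses one encoder layer's feature map
    # before propagating it to the mirrored decoder layer (quantization of
    # this compressed representation is omitted in this sketch).
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.enc = nn.Conv1d(channels, bottleneck, kernel_size=1)
        self.dec = nn.Conv1d(bottleneck, channels, kernel_size=1)

    def forward(self, x):
        return self.dec(self.enc(x))


class HarpStyleCodec(nn.Module):
    # Mirrored encoder/decoder whose paired layers are linked by HyperSkip
    # modules, so each decoder layer also receives a compressed copy of the
    # corresponding encoder layer's intermediate feature representation.
    def __init__(self, channels=(1, 16, 32, 64), skip_bottleneck=4):
        super().__init__()
        pairs = list(zip(channels[:-1], channels[1:]))
        self.enc_layers = nn.ModuleList(
            [nn.Sequential(nn.Conv1d(cin, cout, 9, stride=2, padding=4), nn.PReLU())
             for cin, cout in pairs])
        self.dec_layers = nn.ModuleList(
            [nn.Sequential(nn.ConvTranspose1d(cout, cin, 9, stride=2, padding=4,
                                              output_padding=1), nn.PReLU())
             for cin, cout in pairs])
        self.skips = nn.ModuleList(
            [HyperSkip(cout, skip_bottleneck) for _, cout in pairs])

    def forward(self, x):
        side_info = []
        for enc, skip in zip(self.enc_layers, self.skips):
            x = enc(x)
            side_info.append(skip(x))        # compressed skip connection
        # x is now the main bottleneck; its quantization is also omitted here.
        for dec, s in zip(reversed(self.dec_layers), reversed(side_info)):
            x = dec(x + s)                   # decoder layer helped by the skip
        return x


# Usage: round-trip one second of mono audio at 16 kHz.
codec = HarpStyleCodec()
wave = torch.randn(1, 1, 16000)
print(codec(wave).shape)                     # torch.Size([1, 1, 16000])

In this reading, each HyperSkip acts as a tiny per-layer codec, which is how the abstract describes compressing the otherwise massive data transfer between paired encoder-decoder layers.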
KSP Suggested Keywords
Audio coding, Audio quality, Data transfer, Encoder and Decoder, Feature representation, Massive Data, additional information, coding models, information flow, skip connections