ETRI Knowledge Sharing Platform


Detailed Information

Conference Paper: A Dual-Staged Context Aggregation Method towards Efficient End-to-End Speech Enhancement
Cited 4 times in Scopus, downloaded 1 time
Authors
Kai Zhen, Mi Suk Lee, Minje Kim
Publication Date
May 2020
Source
International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020, pp.366-370
DOI
https://dx.doi.org/10.1109/ICASSP40776.2020.9054499
Project
19HR2500, [Integrated Project] Development of core technologies for AV coding and LF media for ultra-realistic tera-media, Choi Jin Soo
Abstract
In speech enhancement, an end-to-end deep neural network converts a noisy speech signal directly to clean speech in the time domain, without time-frequency transformation or mask estimation. However, aggregating contextual information from a high-resolution time-domain signal at an affordable model complexity remains challenging. In this paper, we propose a densely connected convolutional and recurrent network (DCCRN), a hybrid architecture, to enable dual-staged temporal context aggregation. With its dense connectivity and cross-component identical shortcut, DCCRN consistently outperforms competing convolutional baselines, with an average STOI improvement of 0.23 and PESQ improvement of 1.38 at three SNR levels. The proposed method is computationally efficient, with only 1.38 million parameters. Its generalization to unseen noise types is still decent considering the low complexity, although it is relatively weaker than Wave-U-Net, which has 7.25 times more parameters.
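The abstract describes a two-stage design: densely connected 1-D convolutions aggregate local context directly from the waveform, and a recurrent component then aggregates longer-range context, with a shortcut joining the two parts. The PyTorch sketch below is only a minimal illustration of that general idea; the class names, layer counts, channel widths, and kernel sizes are assumptions chosen for readability and are not the hyperparameters or implementation of the paper.

```python
# Minimal illustrative sketch (not the authors' implementation) of a hybrid
# densely connected convolutional + recurrent network for time-domain speech
# enhancement. All sizes below are placeholder assumptions.
import torch
import torch.nn as nn


class DenseConvBlock(nn.Module):
    """Stage 1: 1-D conv layers with dense (concatenative) connectivity,
    aggregating local context directly from the waveform."""

    def __init__(self, in_ch: int, growth: int = 16, layers: int = 3, kernel: int = 9):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(
                nn.Sequential(
                    nn.Conv1d(ch, growth, kernel, padding=kernel // 2),
                    nn.PReLU(),
                )
            )
            ch += growth  # dense connectivity: later layers see all earlier outputs
        self.out_channels = ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class HybridEnhancer(nn.Module):
    """Stage 2: a GRU aggregates longer-range temporal context on top of the
    dense conv features, with an identity shortcut across the two components."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.dense = DenseConvBlock(in_ch=1)
        self.bottleneck = nn.Conv1d(self.dense.out_channels, hidden, kernel_size=1)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # noisy: (batch, 1, samples) raw waveform
        conv_out = self.bottleneck(self.dense(noisy))      # (B, H, T)
        rnn_out, _ = self.rnn(conv_out.transpose(1, 2))    # (B, T, H)
        rnn_out = rnn_out.transpose(1, 2) + conv_out       # cross-component shortcut
        return self.proj(rnn_out)                          # enhanced waveform (B, 1, T)


if __name__ == "__main__":
    model = HybridEnhancer()
    x = torch.randn(2, 1, 16000)   # two 1-second clips at 16 kHz
    print(model(x).shape)          # torch.Size([2, 1, 16000])
```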
KSP Proposed Keywords
Clean speech, Computationally Efficient, Contextual information, Deep neural network(DNN), End to End(E2E), High-resolution, Hybrid architecture, Noisy speech signal, Recurrent network, Resolution time, Time-Frequency Transformation