ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Convolutional Recurrent Neural Networks for Urban Sound Classification using Raw Waveforms
Cited 23 times in Scopus
Authors
상종희, 박수명, 이준우
Publication Date
September 2018
Source
European Signal Processing Conference (EUSIPCO) 2018, pp.2444-2448
DOI
https://dx.doi.org/10.23919/EUSIPCO.2018.8553247
Project
18HS3100, Digital Content In-House R&D, 박수명
Abstract
Recent studies have demonstrated that deep learning approaches operating directly on raw data can be applied successfully to images and text. This approach has also been applied to audio signals, but it has not yet been fully explored. In this work, we propose a convolutional recurrent neural network that takes time-domain waveforms directly as input for urban sound classification. The convolutional recurrent neural network is a combined model in which convolutional neural networks extract sound features and recurrent neural networks temporally aggregate the extracted features. The method was evaluated on the UrbanSound8K dataset, the largest public dataset of urban environmental sounds available for research. The results show that the convolutional recurrent neural network with raw waveforms improves accuracy in urban sound classification, and demonstrate the effectiveness of its structure with respect to the number of parameters.
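The architecture described in the abstract — a convolutional front end extracting features from the raw waveform, followed by a recurrent network aggregating those features over time — can be sketched as a forward pass in plain NumPy. This is a minimal illustration of the data flow only, not the paper's actual model: the layer counts, kernel size, stride, hidden size, and random weights are all assumptions made for the sketch; UrbanSound8K's 10 output classes are the only detail taken from the dataset itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b, stride):
    """Valid 1D convolution. x: (in_ch, T), w: (out_ch, in_ch, k)."""
    out_ch, in_ch, k = w.shape
    t_out = (x.shape[1] - k) // stride + 1
    # Gather strided windows of the signal: (t_out, in_ch, k).
    windows = np.stack([x[:, t * stride:t * stride + k] for t in range(t_out)])
    return np.einsum('tik,oik->ot', windows, w) + b[:, None]

def relu(x):
    return np.maximum(x, 0)

def rnn_last_state(frames, wx, wh, bh):
    """Simple tanh RNN over time-major frames (T, feat); returns final hidden state."""
    h = np.zeros(wh.shape[0])
    for f in frames:
        h = np.tanh(wx @ f + wh @ h + bh)
    return h

def crnn_forward(wave, p):
    x = wave[None, :]                                            # (1, T) raw waveform
    x = relu(conv1d(x, p['w1'], p['b1'], stride=16))             # CNN feature extraction
    frames = x.T                                                 # (T', channels) time-major
    h = rnn_last_state(frames, p['wx'], p['wh'], p['bh'])        # temporal aggregation
    logits = p['wo'] @ h + p['bo']
    e = np.exp(logits - logits.max())
    return e / e.sum()                                           # softmax over classes

n_classes, ch, hid = 10, 16, 32          # 10 classes as in UrbanSound8K
params = {
    'w1': rng.normal(0, 0.1, (ch, 1, 64)), 'b1': np.zeros(ch),
    'wx': rng.normal(0, 0.1, (hid, ch)),   'wh': rng.normal(0, 0.1, (hid, hid)),
    'bh': np.zeros(hid),
    'wo': rng.normal(0, 0.1, (n_classes, hid)), 'bo': np.zeros(n_classes),
}

wave = rng.normal(0, 1, 8000)            # stand-in for 0.5 s of audio at 16 kHz
probs = crnn_forward(wave, params)
print(probs.shape)                       # (10,) class-probability distribution
```

In a trained model the convolutional layers would learn filterbank-like features directly from the waveform, replacing hand-crafted spectrogram inputs; here the weights are random, so only the shapes and the CNN-then-RNN composition are meaningful.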
KSP Suggested Keywords
Audio signal, Convolution neural network(CNN), Environmental sound, Learning approach, Public Datasets, Recurrent Neural Network(RNN), Sound feature, Sound source, Temporal aggregation, Urban sound classification, combined model