ETRI Knowledge Sharing Platform


Details

Conference Paper: Sound Event Localization and Detection using Spatial Feature Fusion
Cited 0 times in Scopus · Downloaded 6 times
Authors
조수화, 정치윤, 김무섭
Publication Date
October 2022
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2022, pp.1849-1851
DOI
https://dx.doi.org/10.1109/ICTC55196.2022.9952784
Project
22PS1900, Development and Construction of the Operation System for the Autonomous Ship Performance Demonstration Center, 김무섭
Abstract
Sound event localization and detection (SELD) identifies both the category and the location of a sound event, providing valuable information for many applications. Existing methods primarily use convolutional recurrent neural networks as the network model and Log-Mel spectrograms to classify sound events. However, no single spatial feature is dominant for identifying the direction of sound events, and fusing multiple spatial features can improve performance on the SELD task. In this study, we propose an optimal feature fusion obtained by systematically analyzing various combinations of spatial features. We used the TAU-NIGENS Spatial Sound Events 2021 dataset to evaluate the SELD performance of each combination. We found that the combination of interaural phase difference (IPD) and sinIPD outperformed the other features and combinations. Finally, we confirmed that the proposed features outperformed state-of-the-art methods.
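The IPD and sinIPD features named in the abstract can be sketched as follows: compute the short-time Fourier transform of each microphone channel, take the inter-channel phase difference per time-frequency bin, and optionally map it through a sine to obtain a smooth, wrap-free representation. This is an illustrative sketch of those standard features, not the authors' exact implementation; the frame length, hop size, and window choice below are assumptions.

```python
import numpy as np

def spatial_features(x, n_fft=512, hop=256):
    """Compute IPD and sinIPD features from a 2-channel signal.

    x: array of shape (2, n_samples), a two-microphone recording.
    Returns an array of shape (2, n_frames, n_fft // 2 + 1):
    index 0 = IPD (wrapped to [-pi, pi]), index 1 = sin(IPD).
    Note: illustrative sketch only, not the paper's exact pipeline.
    """
    win = np.hanning(n_fft)
    n_frames = 1 + (x.shape[1] - n_fft) // hop
    stft = np.empty((2, n_frames, n_fft // 2 + 1), dtype=complex)
    for ch in range(2):
        for t in range(n_frames):
            frame = x[ch, t * hop : t * hop + n_fft] * win
            stft[ch, t] = np.fft.rfft(frame)
    # Phase of the cross-spectrum = inter-channel phase difference,
    # automatically wrapped to [-pi, pi] by np.angle.
    ipd = np.angle(stft[0] * np.conj(stft[1]))
    return np.stack([ipd, np.sin(ipd)])

# Example: a 1 kHz tone, delayed by 5 samples in the second channel,
# which induces a constant phase difference of 2*pi*1000*5/16000 rad.
fs, delay = 16000, 5
t = np.arange(fs) / fs
x = np.stack([np.sin(2 * np.pi * 1000 * t),
              np.sin(2 * np.pi * 1000 * (t - delay / fs))])
feats = spatial_features(x)
print(feats.shape)  # (2, 61, 257)
```

In a fusion setup such as the one the abstract describes, these two feature maps would be stacked along the channel axis with the Log-Mel spectrogram before being fed to the network.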
KSP Suggested Keywords
Event Localization, Feature Fusion, Network Model, Optimal Feature, Phase Difference, Recurrent Neural Network (RNN), Spatial Sound, Sound Events, Spatial Feature, State-of-the-Art, Task Performance