ETRI Knowledge Sharing Platform


Conference Paper
Global-Local Three-Stream Network for Acoustic Scene Classification
Cited 3 times in Scopus, Downloaded 16 times
Authors
Jo Suhwa, Jeong Chi Yoon, Moon Kyeong Deok, 김채규
Issue Date
October 2021
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2021, pp.1567-1571
DOI
https://dx.doi.org/10.1109/ICTC52510.2021.9621159
Project Code
21ZS1200, Core Technology Research for Human-Centered Autonomous Intelligent Systems, Jeong Dan Choi
Abstract
Acoustic scene classification can provide important information to many applications by identifying the location and environment in which a given audio signal was recorded. Therefore, significant research has been conducted on the topic. Recently, with the success of deep learning in the field of computer vision, many studies on classifying acoustic scenes have adopted the deep learning framework. Existing methods focus on processing multiple sources of local information by dividing the input spectrogram and then using an ensemble model to combine the classification results from all the local information, improving the overall classification accuracy. However, multiple pieces of local information cannot provide the global information that is important for identifying the place in which the audio occurs. Moreover, the ensemble model has the drawback of increasing the complexity of the classification model. Therefore, in this paper, we propose a global-local three-stream network for classifying the acoustic scene. The proposed method provides global and local information simultaneously by using the entire input spectrogram together with its sub-spectrograms, separated by frequency band. The input streams, comprising a global stream, a high-frequency stream, and a low-frequency stream, were encoded by a pre-activated ResNet, and a late fusion scheme was then used to classify the acoustic scene from the encoded features. We evaluated the performance of the proposed method on a public dataset and compared it with that of the conventional method. The experimental results show that the proposed method improves classification accuracy compared with the existing method that uses only multiple pieces of local information.
Additionally, the experimental results confirmed that the proposed method can reduce the number of model parameters by 49.7% without a loss of accuracy, compared with the ensemble model that fuses the results of three models.
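The three-stream input described in the abstract can be illustrated with a short sketch: the full spectrogram forms the global stream, and low- and high-frequency sub-spectrograms form the two local streams, with per-stream class scores combined by late fusion. This is a minimal NumPy sketch under stated assumptions: the split point (half of the frequency bins), the score averaging as the late-fusion operator, and the random stand-in for the pre-activated ResNet encoders are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def split_streams(spec):
    """Split a (freq_bins, time) spectrogram into the three input streams:
    the entire spectrogram (global) plus low- and high-frequency
    sub-spectrograms. Splitting at the midpoint is an assumption."""
    mid = spec.shape[0] // 2
    return {
        "global": spec,        # entire input spectrogram
        "low": spec[:mid, :],  # low-frequency sub-spectrogram
        "high": spec[mid:, :], # high-frequency sub-spectrogram
    }

def late_fusion(stream_scores):
    """Late fusion of per-stream class scores; averaging is one common
    choice (the paper's exact fusion operator may differ)."""
    return np.mean(np.stack(stream_scores), axis=0)

# Toy input: 128 mel bins, 431 time frames (illustrative sizes).
spec = np.random.rand(128, 431)
streams = split_streams(spec)

# Stand-in for the three pre-activated ResNet encoders: each stream
# produces a 10-class score vector (10 scenes is an assumption).
scores = [np.random.rand(10) for _ in streams]
fused = late_fusion(scores)
print(fused.shape)  # (10,)
```

Because the three encoders feed a single fused classifier rather than three independently trained models, this layout is what allows the parameter reduction reported above relative to a three-model ensemble.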
KSP Keywords
Acoustic Scene Classification, Audio signal, Classification models, Computer Vision (CV), Conventional methods, Deep learning framework, Ensemble models, Global and local, High Frequency (HF), Local information, Model parameter