ETRI Knowledge Sharing Platform

MarginNCE: Robust Sound Localization with a Negative Margin
Cited 10 times in Scopus
Authors
Sooyoung Park, Arda Senocak, Joon Son Chung
Issue Date
2023-06
Citation
International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023, pp.1-5
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICASSP49357.2023.10097234
Abstract
The goal of this work is to localize sound sources in visual scenes with a self-supervised approach. Contrastive learning in the context of sound source localization leverages the natural correspondence between audio and visual signals: audio-visual pairs from the same source are assumed to be positive, while randomly selected pairs are treated as negatives. However, this approach introduces noisy correspondences; for example, a positive audio-visual pair whose signals are in fact unrelated to each other, or negative pairs that contain samples semantically similar to the positive one. Our key contribution in this work is to show that using a less strict decision boundary in contrastive learning can alleviate the effect of noisy correspondences in sound source localization. We propose a simple yet effective approach by slightly modifying the contrastive loss with a negative margin. Extensive experimental results show that our approach gives on-par or better performance than the state-of-the-art methods. Furthermore, we demonstrate that introducing a negative margin to existing methods results in a consistent improvement in performance.
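
To make the idea in the abstract concrete, below is a minimal PyTorch-style sketch of an InfoNCE-style audio-visual contrastive loss with a margin applied to the positive pairs. The function name, the sign convention (the margin is subtracted from the positive similarity, so a negative margin relaxes the boundary), and the default values for `margin` and `temperature` are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def margin_nce_loss(audio_emb, visual_emb, margin=-0.2, temperature=0.07):
    """InfoNCE-style audio-visual contrastive loss with a margin on positives.

    audio_emb, visual_emb: (batch, dim) embeddings; row i of each tensor is a
    corresponding (positive) pair, and all other in-batch rows act as negatives.
    margin < 0 relaxes the decision boundary, as described in the abstract;
    the default hyperparameter values here are assumptions, not the paper's.
    """
    # Cosine similarity via L2-normalised dot products.
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    sim = a @ v.t()                                   # (batch, batch)

    # Subtract the margin from the positive (diagonal) similarities only.
    # With margin < 0 this raises the positive logits, so a positive pair
    # no longer has to exceed the negatives by as much -- a softer boundary
    # that is more forgiving of noisy audio-visual correspondences.
    eye = torch.eye(sim.size(0), device=sim.device)
    logits = (sim - margin * eye) / temperature

    # Symmetric cross-entropy: audio-to-visual and visual-to-audio matching.
    targets = torch.arange(sim.size(0), device=sim.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

The in-batch negatives and the symmetric cross-entropy follow common audio-visual contrastive practice; the only departure from plain InfoNCE in this sketch is the margin term on the diagonal.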
KSP Keywords
Audio-visual, Similar samples, Sound localization, Visual scenes, Visual signals, Decision boundary, Sound source localization, State-of-the-art, Self-supervised approach