ETRI Knowledge Sharing Platform

Can CLIP Help Sound Source Localization?
Authors
Sooyoung Park, Arda Senocak, Joon Son Chung
Issue Date
2024-01
Citation
Winter Conference on Applications of Computer Vision (WACV) 2024, pp.5711-5720
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/WACV57701.2024.00561
Abstract
Large-scale pre-trained image-text models demonstrate remarkable versatility across diverse tasks, benefiting from their robust representational capabilities and effective multimodal alignment. We extend the application of these models, specifically CLIP, to the domain of sound source localization. Unlike conventional approaches, we employ the pre-trained CLIP model without explicit text input, relying solely on the audio-visual correspondence. To this end, we introduce a framework that translates audio signals into tokens compatible with CLIP’s text encoder, yielding audio-driven embeddings. By directly using these embeddings, our method generates audio-grounded masks for the provided audio, extracts audio-grounded image features from the highlighted regions, and aligns them with the audio-driven embeddings using the audio-visual correspondence objective. Our findings suggest that utilizing pre-trained image-text models enables our model to generate more complete and compact localization maps for the sounding objects. Extensive experiments show that our method outperforms state-of-the-art approaches by a significant margin.
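The sketch below illustrates, in simplified PyTorch, the pipeline the abstract describes: audio features are projected into pseudo text tokens for a frozen CLIP text encoder, the resulting audio-driven embedding is compared against image patch features to form an audio-grounded mask, and the masked image feature is aligned with the audio-driven embedding through a contrastive objective. The module names, dimensions, sigmoid masking, token pooling, and InfoNCE form are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioToTokens(nn.Module):
    # Projects a clip-level audio feature into a short sequence of pseudo
    # text tokens sized for a frozen CLIP text encoder (hypothetical design).
    def __init__(self, audio_dim=512, token_dim=512, num_tokens=8):
        super().__init__()
        self.num_tokens = num_tokens
        self.token_dim = token_dim
        self.proj = nn.Linear(audio_dim, num_tokens * token_dim)

    def forward(self, audio_feat):                      # (B, audio_dim)
        tokens = self.proj(audio_feat)                  # (B, num_tokens * token_dim)
        return tokens.view(-1, self.num_tokens, self.token_dim)

def audio_grounded_mask(patch_feats, audio_embed):
    # Soft localization mask: cosine similarity between each image patch
    # feature and the audio-driven embedding, squashed to [0, 1].
    patch_feats = F.normalize(patch_feats, dim=-1)      # (B, HW, D)
    audio_embed = F.normalize(audio_embed, dim=-1)      # (B, D)
    sim = torch.einsum('bpd,bd->bp', patch_feats, audio_embed)
    return sim.sigmoid()                                # (B, HW)

def audio_visual_nce(img_embed, audio_embed, temperature=0.07):
    # Symmetric InfoNCE pairing each audio-grounded image feature with its
    # audio-driven embedding; a standard audio-visual correspondence loss.
    img_embed = F.normalize(img_embed, dim=-1)
    audio_embed = F.normalize(audio_embed, dim=-1)
    logits = img_embed @ audio_embed.t() / temperature
    labels = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Toy usage with random tensors standing in for real CLIP / audio features.
B, HW, D = 2, 196, 512                                   # e.g. a 14x14 ViT patch grid
audio_feat  = torch.randn(B, 512)
patch_feats = torch.randn(B, HW, D)
tokens      = AudioToTokens()(audio_feat)                # pseudo tokens for the text encoder
audio_embed = tokens.mean(dim=1)                         # stand-in for the text-encoder output
mask        = audio_grounded_mask(patch_feats, audio_embed)
img_embed   = (patch_feats * mask.unsqueeze(-1)).sum(dim=1)  # audio-grounded image feature
loss        = audio_visual_nce(img_embed, audio_embed)
print(mask.shape, loss.item())                           # torch.Size([2, 196]) and a scalar loss

In the actual method, the text-encoder output (not a simple token mean) would supply the audio-driven embedding, and the mask would be computed over the encoder's spatial feature map; the toy tensors above only demonstrate the data flow.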
KSP Keywords
Audio signal, Audio-visual, Image feature, Image-text, Multimodal alignment, Text input, Large-scale, Sound source localization, State-of-the-art