ETRI Knowledge Sharing Platform

Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection
Authors
Kwanyong Park, Kuniaki Saito, Donghyun Kim
Issue Date
2024-10
Citation
European Conference on Computer Vision (ECCV) 2024, pp.1-19
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1007/978-3-031-73337-6_1
Abstract
Vision-language (VL) models often exhibit a limited understanding of complex expressions of visual objects (e.g., attributes, shapes, and their relations) when given complex and diverse language queries. Traditional approaches attempt to improve VL models using hard negative synthetic text, but their effectiveness is limited. In this paper, we harness the exceptional compositional understanding capabilities of generative foundational models. We introduce a novel method for structured synthetic data generation aimed at enhancing the compositional understanding of VL models in language-based object detection. Our framework generates densely paired positive and negative triplets (image, text descriptions, and bounding boxes) in both the image and text domains. By leveraging these synthetic triplets, we transform ‘weaker’ VL models into ‘stronger’ models in terms of compositional understanding, a process we call “Weak-to-Strong Compositional Learning” (WSCL). To achieve this, we propose a new compositional contrastive learning formulation that discovers semantics and structures in complex descriptions from synthetic triplets. As a result, VL models trained with our synthetic data generation exhibit a significant performance boost on the Omnilabel benchmark of up to +5 AP and on the D3 benchmark of +6.9 AP over existing baselines.
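The abstract only sketches the compositional contrastive formulation, so the snippet below is a minimal illustrative reconstruction rather than the paper's actual loss. It assumes region and text embeddings have already been extracted, and shows a generic InfoNCE-style objective that pulls each region embedding toward its positive description while pushing it away from the hard-negative descriptions supplied by the synthetic triplets. All function and variable names here are hypothetical.

```python
# Illustrative sketch only: the paper's exact WSCL loss is not given in the
# abstract. This implements a generic contrastive objective over synthetic
# (region, positive text, hard-negative texts) triplets.
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(region_emb, pos_text_emb, neg_text_emb, tau=0.07):
    """
    region_emb:   (B, D) embeddings of detected object regions
    pos_text_emb: (B, D) embeddings of the matching (positive) descriptions
    neg_text_emb: (B, K, D) embeddings of K hard-negative descriptions per region
    tau:          temperature for the softmax over similarities
    """
    region_emb = F.normalize(region_emb, dim=-1)
    pos_text_emb = F.normalize(pos_text_emb, dim=-1)
    neg_text_emb = F.normalize(neg_text_emb, dim=-1)

    # Cosine similarity of each region to its positive description: (B,)
    pos_sim = (region_emb * pos_text_emb).sum(dim=-1)
    # Cosine similarity of each region to its K hard negatives: (B, K)
    neg_sim = torch.einsum('bd,bkd->bk', region_emb, neg_text_emb)

    # InfoNCE-style loss: the positive logit (index 0) vs. hard-negative logits
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim], dim=1) / tau  # (B, 1+K)
    labels = torch.zeros(region_emb.size(0), dtype=torch.long,
                         device=logits.device)
    return F.cross_entropy(logits, labels)
```

In the paper's setting, the negatives would come from the generated hard-negative descriptions (and, symmetrically, from hard-negative images paired with a shared description); the same cross-entropy structure applies in either direction.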
KSP Keywords
Bounding box, Generative models, Object detection, Positive and negative triplets, Synthetic data generation