ETRI Knowledge Sharing Platform

LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation
Cited 9 times in Scopus
Authors
Kibum Kim, Kanghoon Yoon, Jaehyeong Jeon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park
Issue Date
2024-06
Citation
Conference on Computer Vision and Pattern Recognition (CVPR) 2024, pp.28306-28316
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/CVPR52733.2024.02674
Abstract
Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations. In this regard, studies on WSSGG have utilized image captions to obtain unlocalized triplets while primarily focusing on grounding the unlocalized triplets over image regions. However, they have overlooked the two issues involved in the triplet formation process from the captions: 1) Semantic over-simplification issue arises when extracting triplets from captions, where fine-grained predicates in captions are undesirably converted into coarse-grained predicates, resulting in a long-tailed predicate distribution, and 2) Low-density scene graph issue arises when aligning the triplets in the caption with entity/predicate classes of interest, where many triplets are discarded and not used in training, leading to insufficient supervision. To tackle the two issues, we propose a new approach, i.e., Large Language Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two issues by leveraging the LLM's in-depth understanding of language and reasoning ability during the extraction of triplets from captions and alignment of entity/predicate classes with target data. To further engage the LLM in these processes, we adopt the idea of Chain-of-Thought and the in-context few-shot learning strategy. To validate the effectiveness of LLM4SGG, we conduct extensive experiments on Visual Genome and GQA datasets, showing significant improvements in both Recall@K and mean Recall@K compared to the state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is data-efficient, enabling effective model training with a small amount of training images. Our code is available on https://github.com/rlqja1107/torch-LLM4SGG
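
The pipeline the abstract describes is prompt-driven: an LLM first extracts (subject, predicate, object) triplets from each caption, using Chain-of-Thought reasoning and in-context few-shot examples so that fine-grained predicates are preserved, and then aligns the extracted entities and predicates with the target dataset's classes rather than discarding unmatched triplets. Below is a minimal sketch of these two calls, assuming an OpenAI-style chat API; the model name, prompt wording, and function names are illustrative placeholders, not the authors' actual implementation (see the linked repository for that).

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = """From the image caption, extract triplets of the form
(subject, predicate, object). Let's think step by step, and keep the
fine-grained predicate used in the caption (e.g. 'lying on' rather than 'on').

Caption: "a cat is lying on a wooden table"
Triplets: [("cat", "lying on", "table")]

Caption: "{caption}"
Triplets:"""

ALIGNMENT_PROMPT = """Align each entity and predicate below with its closest
semantic match in the target vocabulary instead of discarding it; answer None
only when no class is a reasonable paraphrase.

Entity classes: {entity_classes}
Predicate classes: {predicate_classes}
Triplets: {triplets}
Aligned triplets:"""

def ask_llm(prompt: str) -> str:
    """Single deterministic chat-completion call."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return resp.choices[0].message.content

def caption_to_aligned_triplets(caption, entity_classes, predicate_classes):
    # Step 1: triplet extraction (targets the semantic over-simplification
    # issue by asking the LLM to preserve fine-grained predicates).
    triplets = ask_llm(EXTRACTION_PROMPT.format(caption=caption))
    # Step 2: vocabulary alignment (targets the low-density issue by
    # mapping, rather than discarding, out-of-vocabulary triplets).
    return ask_llm(ALIGNMENT_PROMPT.format(
        entity_classes=entity_classes,
        predicate_classes=predicate_classes,
        triplets=triplets,
    ))

The resulting aligned triplets, still unlocalized, would then be grounded over image regions by a downstream WSSGG model, which is the part of the pipeline prior work already focuses on.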
KSP Keywords
Formation process, Language Model, Learning Strategy, New approach, Reasoning ability, Scene graph, Target data, Weakly supervised, coarse-grained, effective model, graph generation