ETRI Knowledge Sharing Platform



Details

Journal Image Classification and Captioning Model Considering a CAM-based Disagreement Loss
Cited 4 times in Scopus; downloaded 23 times
Authors
윤여찬, 박소영, 박수명, 임희석
Publication Date
February 2020
Source
ETRI Journal, v.42 no.1, pp.67-77
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
DOI
https://dx.doi.org/10.4218/etrij.2018-0621
Funded Project
18HS3100, Digital Content In-House R&D, 박수명
Abstract
Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas only a few have exploited visual descriptions for image classification. This study demonstrates that good performance can be achieved on both description generation and image classification through an end-to-end joint learning approach with a loss function that encourages the two tasks to reach a consensus. Given images and visual descriptions, the proposed model learns a multimodal intermediate embedding that represents both the textual and visual characteristics of an object; sharing this multimodal embedding improves performance on both tasks. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, a higher score is achieved when the captioning and classification models agree on the key parts of the object. Using the proposed model, we obtained substantially improved performance on each task on the UCSD Birds and Oxford Flowers datasets.
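The consensus idea in the abstract can be illustrated with a small sketch: compute a class activation map (CAM) as a class-weighted sum of convolutional feature maps, and penalize disagreement between the maps produced by the classification and captioning branches. This is a minimal NumPy illustration under assumed shapes; the function names and the squared-difference form of the penalty are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def cam(features, class_weights):
    # features: (C, H, W) conv feature maps for one image.
    # class_weights: (C,) weights for the target class from the
    # classifier's final linear layer (standard CAM construction).
    m = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    m = np.maximum(m, 0.0)            # keep positive evidence only
    return m / (m.sum() + 1e-8)       # normalize to a spatial distribution

def disagreement_loss(cam_cls, cam_cap):
    # Hypothetical consensus penalty: squared difference between the
    # normalized attention maps of the two tasks. It is zero when both
    # tasks attend to the same image regions.
    return float(((cam_cls - cam_cap) ** 2).sum())
```

In joint training, this penalty would be added to the captioning and classification losses so that gradients push both branches toward the same discriminative regions.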
Keywords
deep learning, image captioning, image classification
KSP Suggested Keywords
Activation mapping, Classification models, End to End(E2E), Image captioning, Image classification, Joint Learning, Key parts, Learning approach, Multimodal embedding, Proposed model, deep learning(DL)
This work may be used under the Korea Open Government License (KOGL) Type 4: Attribution + No Commercial Use + No Derivatives.