ETRI Knowledge Sharing Platform

Details

Conference Paper: A Study of Evaluation Metrics and Datasets for Video Captioning
Cited 11 times in Scopus, downloaded 2 times
Authors
박재휘, 송치본, 한지형
Publication Date
November 2017
Source
International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS) 2017, pp.172-175
DOI
https://dx.doi.org/10.1109/ICIIBMS.2017.8279760
Funded Project
17HS2400, Research on Core Technologies for High-Level Information Extraction Based on Intelligent Analysis of Multi-Source Data, 유장희
Abstract
With the fast-growing interest in deep learning, various applications and machine learning tasks have emerged in recent years. Video captioning in particular is gaining a great deal of attention from both the computer vision and natural language processing fields. Captions are usually generated by jointly learning from different data modalities that share common themes in the video. Learning joint representations of different modalities is very challenging due to the inherent heterogeneity residing in the mixed information of visual scenes, speech dialogs, music, and sounds. Consequently, it is hard to evaluate the quality of video captioning results. In this paper, we introduce well-known metrics and datasets for the evaluation of video captioning. We compare the existing metrics and datasets to derive a new research proposal for the evaluation of video descriptions.
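The well-known caption metrics this kind of survey covers (BLEU, METEOR, ROUGE-L, CIDEr) typically score a candidate caption against one or more human reference captions by n-gram overlap. The sketch below is a minimal, self-contained illustration of BLEU-style clipped n-gram precision with a brevity penalty; the function names and example captions are hypothetical and not taken from the paper, and real evaluations would use a reference implementation such as the MS COCO caption evaluation toolkit.

```python
# Minimal sketch of BLEU-style caption scoring (illustrative only).
from collections import Counter
import math


def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def modified_precision(candidate, references, n):
    """Clipped n-gram precision of a candidate against multiple references."""
    cand_counts = ngrams(candidate, n)
    if not cand_counts:
        return 0.0
    # Clip each candidate n-gram count by its maximum count in any reference.
    max_ref = Counter()
    for ref in references:
        for gram, cnt in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], cnt)
    clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
    return clipped / sum(cand_counts.values())


def bleu_like_score(candidate, references, max_n=4):
    """Geometric mean of 1..max_n precisions, times a brevity penalty."""
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty against the reference whose length is closest.
    closest_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) >= closest_len else math.exp(1 - closest_len / len(candidate))
    return bp * math.exp(log_avg)


if __name__ == "__main__":
    # Hypothetical machine caption and two human reference captions.
    cand = "a man is playing a guitar on stage".split()
    refs = ["a man is playing a guitar in front of a crowd".split(),
            "a person plays guitar on a stage".split()]
    print(f"BLEU-like score: {bleu_like_score(cand, refs):.3f}")
```

Consensus-based metrics such as CIDEr extend this idea by TF-IDF-weighting the n-grams across the reference set, which is why video captioning benchmarks usually collect many references per clip.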
KSP Suggested Keywords
Computer Vision(CV), Inherent heterogeneity, Natural Language Processing, Research proposal, Video Captioning, Visual scenes, deep learning(DL), evaluation metrics, machine Learning