ETRI Knowledge Sharing Platform

Details

Conference Paper: Video Scene Analysis of Interactions between Humans and Vehicles Using Event Context
Cited 5 times in Scopus
Authors
M. S. Ryoo, Jong Taek Lee, J. K. Aggarwal
Publication Date
July 2010
Source
International Conference on Image and Video Retrieval (CIVR) 2010, pp.462-469
DOI
https://dx.doi.org/10.1145/1816041.1816109
Funded Project
10MC4600, Development of Low-Cost Road-Based Autonomous Driving Technology Robust to Outdoor Environments, Wonpil Yu
Abstract
We present a methodology to estimate a detailed state of a video scene involving multiple humans and vehicles. In order to automatically annotate and retrieve videos containing activities of humans and vehicles, the system must correctly identify their trajectories and relationships even in a complex dynamic environment. Our methodology constructs various joint 3-D models describing possible configurations of humans and vehicles in each image frame and performs maximum-a-posteriori tracking to obtain a sequence of scene states that matches the video. Reliable and view-independent scene state analysis is performed by taking advantage of event context. We focus on the fact that events occurring in a video must contextually coincide with scene states of humans and vehicles. Our experimental results verify that our system using event context is able to analyze and track 3-D scene states of complex human-vehicle interactions more reliably and accurately than previous systems. Copyright © 2010 ACM.
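The abstract describes a maximum-a-posteriori search for the scene-state sequence that best matches the video, with event context constraining which states are plausible at each frame. A minimal Viterbi-style sketch of such a MAP dynamic program is shown below; the two-state space, transition scores, and event-context boost are toy illustrative assumptions, not the authors' actual joint 3-D human-vehicle models.

```python
def map_state_sequence(states, log_prior, log_trans, log_obs, log_event_ctx):
    """Viterbi-style MAP search: find the state sequence maximizing the sum
    of prior, transition, observation, and event-context log-scores."""
    T = len(log_obs)
    # best[t][s]: score of the best sequence ending in state s at frame t
    best = [{s: log_prior[s] + log_obs[0][s] + log_event_ctx[0][s]
             for s in states}]
    back = []  # back[t][s]: predecessor of s on that best sequence
    for t in range(1, T):
        col, ptr = {}, {}
        for s in states:
            prev, score = max(((p, best[t - 1][p] + log_trans[p][s])
                               for p in states), key=lambda x: x[1])
            col[s] = score + log_obs[t][s] + log_event_ctx[t][s]
            ptr[s] = prev
        best.append(col)
        back.append(ptr)
    # Backtrack from the best-scoring final state.
    seq = [max(states, key=lambda s: best[-1][s])]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

# Toy example: two hypothetical scene states for one person relative to a
# vehicle. The event-context term boosts "inside" at frame 1, where a
# "person entering vehicle" event is assumed to have been detected.
states = ["near", "inside"]
log_prior = {"near": 0.0, "inside": -5.0}
log_trans = {"near": {"near": -0.1, "inside": -1.0},
             "inside": {"near": -2.0, "inside": -0.1}}
log_obs = [{"near": 0.0, "inside": -3.0},
           {"near": -2.0, "inside": 0.0},
           {"near": -3.0, "inside": 0.0}]
log_event_ctx = [{"near": 0.0, "inside": 0.0},
                 {"near": 0.0, "inside": 1.0},
                 {"near": 0.0, "inside": 0.0}]
print(map_state_sequence(states, log_prior, log_trans, log_obs, log_event_ctx))
# → ['near', 'inside', 'inside']
```

The event-context scores act exactly as the abstract suggests: an event detected in the video raises the posterior of the scene states that contextually coincide with it, steering the MAP sequence toward a consistent interpretation.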
Keywords
Dynamic scene analysis, Event context, Scene state tracking
KSP Suggested Keywords
Dynamic scene analysis, State analysis, State tracking, Three dimensional(3D), View-independent, complex dynamic environment, maximum a posteriori