ETRI Knowledge Sharing Platform

Details

Conference Paper: Skeleton-based Action Recognition of People Handling Objects
Cited 16 times in Scopus
Authors
김선오, 윤기민, 박종열, 최진영
Publication Date
January 2019
Source
Winter Conference on Applications of Computer Vision (WACV) 2019, pp.61-70
DOI
https://dx.doi.org/10.1109/WACV.2019.00014
Project
18HS4600, (DeepView, Subtask 1) Development of a High-Performance Visual Discovery Platform for Real-Time Understanding and Prediction of Large-Scale Video Data, 박종열
Abstract
In visual surveillance systems, it is necessary to recognize the behavior of people handling objects such as a phone, a cup, or a plastic bag. In this paper, to address this problem, we propose a new framework for recognizing object-related human actions by graph convolutional networks using human and object poses. In this framework, we construct skeletal graphs of reliable human poses by selectively sampling the informative frames in a video, which include human joints with high confidence scores obtained in pose estimation. The skeletal graphs generated from the sampled frames represent human poses related to the object position in both the spatial and temporal domains, and these graphs are used as inputs to the graph convolutional networks. Through experiments on an open benchmark and our own datasets, we verify the validity of our framework, showing that our method outperforms the state-of-the-art method for skeleton-based action recognition.
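The abstract describes two preprocessing steps before the graph convolutional network: sampling only frames whose estimated joints carry high confidence scores, and building a skeletal graph that relates the human pose to the object position. The following is a minimal sketch of those two steps, not the authors' code; the toy three-joint skeleton, the 0.5 confidence threshold, and all function names are illustrative assumptions.

```python
# Sketch (illustrative, not from the paper) of:
# (1) confidence-based sampling of informative frames,
# (2) building a skeletal graph with an extra object node.

def sample_informative_frames(frames, min_mean_conf=0.5):
    """Keep indices of frames whose mean joint confidence meets the threshold.

    `frames` is a list of poses; each pose is a list of (x, y, confidence).
    The 0.5 default threshold is an assumed value, not taken from the paper.
    """
    selected = []
    for t, joints in enumerate(frames):
        mean_conf = sum(c for _, _, c in joints) / len(joints)
        if mean_conf >= min_mean_conf:
            selected.append(t)
    return selected

def build_skeletal_graph(joints, object_pos, bones):
    """Return (nodes, edges): human joints plus one object node.

    Edges are the skeleton bones plus one edge from every joint to the
    object node, encoding the pose's spatial relation to the object.
    """
    nodes = [(x, y) for x, y, _ in joints] + [object_pos]
    obj_idx = len(nodes) - 1
    edges = list(bones)
    edges += [(i, obj_idx) for i in range(len(joints))]
    return nodes, edges

# Toy example: 3 frames, 3 joints each, as (x, y, confidence).
frames = [
    [(0, 0, 0.9), (1, 0, 0.8), (1, 1, 0.7)],  # confident pose
    [(0, 0, 0.2), (1, 0, 0.1), (1, 1, 0.3)],  # occluded / unreliable
    [(0, 1, 0.9), (1, 1, 0.9), (2, 1, 0.8)],  # confident pose
]
bones = [(0, 1), (1, 2)]                       # toy skeleton topology
kept = sample_informative_frames(frames)       # low-confidence frame dropped
nodes, edges = build_skeletal_graph(frames[kept[0]], (2, 2), bones)
print(kept, len(nodes), len(edges))            # [0, 2] 4 5
```

In a full pipeline, the node coordinates and edge list produced per sampled frame would be stacked across time and fed to a spatio-temporal graph convolutional network as the abstract describes.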
KSP Suggested Keywords
Action recognition, Convolutional networks, Datasets, Human action, Pose estimation, Spatial and temporal, Surveillance system, Human joint, Object position, Object-related, State-of-the-art