ETRI Knowledge Sharing Platform


Details

Conference Paper: Nonparametric Gesture Labeling from Multi-modal Data
Cited 8 times in Scopus
Author
Ju Yong Chang
Publication Date
September 2014
Source
European Conference on Computer Vision (ECCV) 2014 (LNCS 8925), v.8925, pp.503-517
DOI
https://dx.doi.org/10.1007/978-3-319-16178-5_35
Research Project
14MS3500, Development of High-Precision Mobile and Panoramic 360-Degree Multi-User Motion Recognition Technology for Interaction with Interactive Content, Jiyoung Park
Abstract
We present a new gesture recognition method that uses multi-modal data. Our approach solves a labeling problem, meaning that gesture categories and their temporal ranges are determined simultaneously. For that purpose, a generative probabilistic model is formulated and constructed by nonparametrically estimating multi-modal densities from a training dataset. In addition to conventional skeletal-joint-based features, appearance information near the active hand in the RGB image is exploited to capture the detailed motion of the fingers. The estimated log-likelihood function is used as the unary term of our Markov random field (MRF) model, and a smoothness term is incorporated to enforce temporal coherence. The labeling results can then be obtained by an efficient dynamic programming technique. Experimental results demonstrate that our method provides effective gesture labeling results on a large-scale gesture dataset: it achieves a mean Jaccard index of 0.8268 and ranked 3rd in the gesture recognition track of the ChaLearn Looking at People (LAP) Challenge 2014.
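The abstract combines two standard ingredients: per-frame unary costs from nonparametric (kernel) density estimates, and dynamic programming over a chain-structured MRF whose smoothness term penalizes label switches between frames. The following is a minimal sketch of that combination only, not the paper's implementation; the Potts-style switch_cost, the gaussian_kde choice with its default bandwidth, and all function names are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_densities(features_by_class):
    # One kernel density estimate per gesture class, fit on that
    # class's training feature vectors; each array has shape (n_k, d).
    # (gaussian_kde expects data as (d, n), hence the transpose.)
    return [gaussian_kde(f.T) for f in features_by_class]

def unary_costs(densities, features):
    # Negative log-likelihood of every frame under every class density.
    # features: (T, d) per-frame test features -> (T, K) cost matrix.
    return np.stack([-kde.logpdf(features.T) for kde in densities], axis=1)

def viterbi_labeling(unary, switch_cost=5.0):
    # Dynamic programming (Viterbi) over a chain MRF: minimize
    # sum_t unary[t, l_t] + switch_cost * [l_t != l_{t-1}].
    T, K = unary.shape
    cost = unary[0].copy()               # best cost ending in each label
    back = np.zeros((T, K), dtype=int)   # backpointers for recovery
    for t in range(1, T):
        best_prev = int(np.argmin(cost))         # cheapest label to switch from
        switch = cost[best_prev] + switch_cost   # Potts smoothness penalty
        stay_wins = cost <= switch               # keep the same label?
        back[t] = np.where(stay_wins, np.arange(K), best_prev)
        cost = np.minimum(cost, switch) + unary[t]
    # Backtrack the optimal label sequence from the final frame.
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels
```

Contiguous runs of identical labels in the output give each gesture's temporal range, which is how a labeling formulation determines categories and ranges at the same time; a dedicated background class among the K labels would handle non-gesture frames.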
KSP Suggested Keywords
Appearance information, Dynamic programming technique, Generative probabilistic model, Gesture datasets, Gesture recognition, Jaccard index, Markov random field, RGB image, Recognition method, Temporal coherence, Large-scale