ETRI-Knowledge Sharing Platform

Detailed Information

Journal Article
Multi-Modal User Interaction Method Based on Gaze Tracking and Gesture Recognition
Cited 15 times in Scopus, downloaded 2 times
Authors
이희경, 임성용, 이인재, 차지훈, 조동찬, 조선영
Publication Date
February 2013
Source
Signal Processing: Image Communication, v.28, no.2, pp.114-126
ISSN
0923-5965
Publisher
Elsevier
DOI
https://dx.doi.org/10.1016/j.image.2012.10.007
Research Project
12PR4200, Development of Interactive Viewpoint Control Technology for IPTV, 차지훈
Abstract
This paper presents a gaze tracking technology that provides a convenient human-centric interface for multimedia consumption without any wearable device. It enables a user to interact with various multimedia content on a large display at a distance by tracking user movement and acquiring high-resolution eye images. This paper also presents a gesture recognition technology that helps users interact with scene descriptions by controlling and rendering scene objects. It is based on a hidden Markov model (HMM) and a conditional random field (CRF), using a commercial depth sensor. Finally, this paper shows how these new sensors can be combined with MPEG standards to achieve interoperability among interactive applications, new user interaction devices, and users. © 2012 Elsevier B.V. All rights reserved.
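
As an illustration of the gesture-recognition approach the abstract describes, the sketch below shows one way HMM-based sequence classification can work: each gesture class gets its own HMM over depth-sensor features, and an observed sequence is labeled by the model with the highest forward log-likelihood. This is a minimal Python sketch, not the authors' implementation; the gesture names, model parameters, and feature sequence are illustrative placeholders, and a real system would train the models (e.g., with Baum-Welch) on labeled depth-sensor recordings and could combine them with a CRF as the paper describes.

import numpy as np

def log_gaussian(x, mean, var):
    # Log-density of a diagonal Gaussian emission.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(obs, pi, A, means, vars_):
    # Forward algorithm in log space: returns log p(obs | HMM).
    n_states = len(pi)
    log_alpha = np.log(pi) + np.array(
        [log_gaussian(obs[0], means[s], vars_[s]) for s in range(n_states)])
    for x in obs[1:]:
        emit = np.array(
            [log_gaussian(x, means[s], vars_[s]) for s in range(n_states)])
        # Sum over previous states (log-sum-exp) for each current state.
        log_alpha = emit + np.array([
            np.logaddexp.reduce(log_alpha + np.log(A[:, s]))
            for s in range(n_states)])
    return np.logaddexp.reduce(log_alpha)

def classify(obs, models):
    # Label the sequence with the gesture whose HMM scores it highest.
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))

# Toy two-state HMMs over a 1-D feature (placeholder parameters):
# (initial probabilities, transition matrix, emission means, emission variances).
models = {
    "swipe": (np.array([0.9, 0.1]), np.array([[0.8, 0.2], [0.2, 0.8]]),
              np.array([[0.0], [1.0]]), np.array([[0.1], [0.1]])),
    "push":  (np.array([0.5, 0.5]), np.array([[0.5, 0.5], [0.5, 0.5]]),
              np.array([[2.0], [3.0]]), np.array([[0.1], [0.1]])),
}
obs = np.array([[0.1], [0.9], [1.1], [0.95]])  # synthetic feature sequence
print(classify(obs, models))                   # -> swipe

In practice, the per-frame features would come from the depth sensor (e.g., tracked hand-joint positions), and the same scoring loop extends directly to higher-dimensional features.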
KSP Suggested Keywords
Depth sensor, High-resolution, Interaction devices, Multi-modal User Interaction, Tracking technology, Wearable device, gaze tracking, gesture recognition technology, hidden Markov Model, human-centric, interactive applications