ETRI Knowledge Sharing Platform


Detailed Information

Conference Paper: Multi-Modal Fusion of Speech-Gesture Using Integrated Probability Density Distribution
Authors
이지근, 한문성
Publication Date
2008.12
Source
International Symposium on Intelligent Information Technology Application (IITA) 2008, pp.361-364
DOI
https://dx.doi.org/10.1109/IITA.2008.278
Funded Project
08MC2300, Development of Intelligent Service Technology Based on Personal Life Log, 배창석
Abstract
Although speech recognition has been explored extensively and developed successfully, it still suffers serious errors in noisy environments. In such cases, gestures, a by-product of speech, can be used to help interpret the speech. In this paper, we propose a multi-modal fusion method for speech-gesture recognition using an integrated discrete probability density function estimated by a histogram. The method is tested with a microphone and a 3-axis accelerometer in a real-time experiment. The test has two parts: a method that adds and accumulates the speech and gesture probability density functions separately, and a more complicated method that creates a new probability density function by integrating the two PDFs of speech and gesture. © 2008 IEEE.
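The abstract describes histogram-estimated discrete PDFs fused in two ways: by adding and accumulating the per-modality PDFs, and by forming a single new integrated PDF. A minimal sketch of these two fusion styles is shown below; the feature samples, bin layout, and the use of a renormalized element-wise product for the "integrated" PDF are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D feature samples for one class from each modality
speech_samples = rng.normal(0.0, 1.0, 1000)
gesture_samples = rng.normal(0.5, 1.2, 1000)

# Shared bin edges so the two discrete PDFs are directly comparable
bins = np.linspace(-5.0, 5.0, 51)

def histogram_pdf(samples, bins):
    """Discrete PDF estimated by a histogram, normalized to sum to 1."""
    counts, _ = np.histogram(samples, bins=bins)
    return counts / counts.sum()

p_speech = histogram_pdf(speech_samples, bins)
p_gesture = histogram_pdf(gesture_samples, bins)

# Method 1: add-and-accumulate the two PDFs (renormalized sum)
p_sum = (p_speech + p_gesture) / 2.0

# Method 2: create one new integrated PDF from the two modality PDFs,
# sketched here as a renormalized element-wise product (an assumption)
p_int = p_speech * p_gesture
p_int = p_int / p_int.sum()
```

A recognizer would then score each candidate class under the fused PDF and pick the maximum; the product-style fusion sharpens mass where both modalities agree, while the sum-style fusion stays robust when one modality is noisy.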
KSP Suggested Keywords
By-products, Fusion recognition, Probability Density Function, Probability density distribution, multimodal fusion, noisy environments, real-time experiment, speech recognition