ETRI Knowledge Sharing Platform


Multi-Modal Fusion of Speech-Gesture Using Integrated Probability Density Distribution
Authors
Chi-Geun Lee, Mun-Sung Han
Issue Date
2008-12
Citation
International Symposium on Intelligent Information Technology Application (IITA) 2008, pp.361-364
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/IITA.2008.278
Project Code
08MC2300, Development of an Intelligent Service Technology based on the Personal Life Log, Chang Seok Bae
Abstract
Although speech recognition has been explored extensively and developed successfully, it still suffers serious errors in noisy environments. In such cases, gestures, a by-product of speech, can be used to help interpret the speech. In this paper, we propose a multi-modal fusion method for speech-gesture recognition using integrated discrete probability density functions estimated by histograms. The method is tested with a microphone and a 3-axis accelerometer in a real-time experiment. The test has two parts: a simple method that adds and accumulates the speech and gesture probability density functions separately, and a more sophisticated method that creates a new probability density function by integrating the two PDFs of speech and gesture. © 2008 IEEE.
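The fusion idea in the abstract, estimating a discrete PDF per modality from a histogram and then combining the two PDFs, can be illustrated with a minimal sketch. This is not the authors' exact procedure; the bin counts, value ranges, sample data, and equal weighting below are all illustrative assumptions.

```python
import numpy as np

def histogram_pdf(samples, bins, value_range):
    """Estimate a discrete PDF from samples via a normalized histogram."""
    counts, edges = np.histogram(samples, bins=bins, range=value_range)
    return counts / counts.sum(), edges

def fuse_add(pdf_a, pdf_b, w=0.5):
    """Additive fusion: weighted sum of two modality PDFs (renormalized)."""
    fused = w * pdf_a + (1.0 - w) * pdf_b
    return fused / fused.sum()

def fuse_product(pdf_a, pdf_b, eps=1e-12):
    """Integrated fusion: treat modalities as independent evidence and
    multiply the PDFs bin-wise before renormalizing."""
    fused = (pdf_a + eps) * (pdf_b + eps)
    return fused / fused.sum()

# Hypothetical per-frame feature scores from each sensor.
rng = np.random.default_rng(0)
speech_scores = rng.normal(2.0, 1.0, size=1000)   # microphone features (made up)
gesture_scores = rng.normal(2.5, 1.2, size=1000)  # accelerometer features (made up)

p_speech, _ = histogram_pdf(speech_scores, bins=10, value_range=(-2.0, 6.0))
p_gesture, _ = histogram_pdf(gesture_scores, bins=10, value_range=(-2.0, 6.0))

p_add = fuse_add(p_speech, p_gesture)
p_prod = fuse_product(p_speech, p_gesture)
print(p_add.argmax(), p_prod.argmax())
```

The additive variant corresponds loosely to the paper's add-and-accumulate scheme, while the bin-wise product stands in for creating a new integrated PDF; either way, renormalizing keeps the result a valid distribution.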
KSP Keywords
By-products, Fusion recognition, Probability density function, Probability density distribution, Multimodal fusion, Noisy environments, Real-time experiment, Speech recognition