ETRI Knowledge Sharing Platform


Integrating Evidences of Independently Developed Face and Speaker Recognition Systems by Using Discrete Probability Density Function
Cited 1 time in Scopus
Authors
Jae Yeon Lee, Do Hyung Kim, Keun-Chang Kwak, Hye-Jin Kim, Ho-Sub Yoon
Issue Date
2007-08
Citation
International Symposium on Robot and Human Interactive Communication (RO-MAN) 2007, pp.667-672
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ROMAN.2007.4415170
Abstract
User recognition is one of the most fundamental functionalities for intelligent service robots. In robot applications, however, the conditions are far more severe than in traditional biometric security systems. Robots should recognize users non-intrusively, which confines the available biometric features to face and voice. Robots are also expected to recognize users from relatively far away, which inevitably degrades the accuracy of each recognition module. In this paper, we improve the overall accuracy by integrating the evidences issued by independently developed face and speaker recognition modules. Each recognition module exhibits different statistical characteristics in representing its confidence in the recognition, so it is essential to transform the evidences into a normalized form before integrating the results. This paper introduces a novel approach to integrating multiple, mutually independent evidences to achieve improved performance. A typical approach to this problem is to model the statistical characteristics of the evidences with a well-known parametric form such as a Gaussian; using the Mahalanobis distance is a good example. However, the characteristics of the evidences often do not fit the parametric models, which results in performance degradation. To overcome this problem, we adopted a discrete PDF that can model the statistical characteristics as they are. To confirm the validity of the proposed method, we used a multi-modal database consisting of 10 registered users and 550 probe samples, each containing a face image and a voice signal. The face and speaker recognition modules were applied to generate their respective evidences. The experiment showed an improvement of 11.27% in accuracy over the individual recognizers, which is 2.72% better than the traditional Mahalanobis distance approach. ©2007 IEEE.
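As a rough illustration of the idea outlined in the abstract (not the authors' published implementation), the sketch below normalizes each module's raw confidence score with a discrete, histogram-based PDF estimated from held-out genuine and impostor scores, then fuses the two modalities under an independence assumption. The class and function names, the bin count, the Bayes-posterior normalization, and the product fusion rule are all illustrative assumptions; a one-dimensional Mahalanobis-style baseline is included only for contrast.

```python
import numpy as np

class DiscreteScorePDF:
    """Histogram-based (discrete) PDF of a recognizer's confidence scores.

    Estimated from held-out scores so that raw, module-specific confidences
    can be mapped onto a common probabilistic scale before fusion.
    """

    def __init__(self, scores, n_bins=32):
        # density=True normalizes the histogram so bin heights integrate to 1.
        self.hist, self.edges = np.histogram(scores, bins=n_bins, density=True)

    def density(self, score):
        # Find the bin containing `score`, clamping to the histogram's support.
        idx = np.searchsorted(self.edges, score, side="right") - 1
        idx = np.clip(idx, 0, len(self.hist) - 1)
        return self.hist[idx]

def genuine_posterior(score, genuine_pdf, impostor_pdf, prior=0.5):
    """Bayes rule over the two discrete PDFs: P(genuine | score)."""
    g = genuine_pdf.density(score) * prior
    i = impostor_pdf.density(score) * (1.0 - prior)
    return g / (g + i) if (g + i) > 0 else prior

def fuse(face_score, voice_score, face_pdfs, voice_pdfs):
    """Combine both modalities' normalized evidences, assuming independence.

    face_pdfs / voice_pdfs: (genuine_pdf, impostor_pdf) pairs estimated
    offline per modality (an assumption of this sketch).
    """
    p_face = genuine_posterior(face_score, *face_pdfs)
    p_voice = genuine_posterior(voice_score, *voice_pdfs)
    return p_face * p_voice

# Parametric baseline for contrast: in one dimension the Mahalanobis
# distance reduces to a z-score, which presumes a Gaussian score shape.
def mahalanobis_1d(score, mu, sigma):
    return abs(score - mu) / sigma
```

Because the histogram is estimated directly from observed scores, no Gaussian shape is imposed on either modality's confidence distribution, which is the property the abstract credits for the accuracy gain over the Mahalanobis distance approach.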