ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Integrating Evidences of Independently Developed Face and Speaker Recognition Systems by Using Discrete Probability Density Function
Cited 1 time in Scopus
Authors
이재연, 김도형, 곽근창, 김혜진, 윤호섭
Publication Date
2007-08
Source
International Symposium on Robot and Human Interactive Communication (RO-MAN) 2007, pp.667-672
DOI
https://dx.doi.org/10.1109/ROMAN.2007.4415170
Project
07MI1100, Development and Standardization of Embedded Component Technology for URC, 황대환
Abstract
User recognition is one of the most fundamental functionalities for intelligent service robots. However, in robot applications the conditions are far more severe than in traditional biometric security systems. The robots should be able to recognize users non-intrusively, which confines the available biometric features to face and voice. Also, the robots are expected to recognize users from relatively far away, which inevitably deteriorates the accuracy of each recognition module. In this paper, we tried to improve the overall accuracy by integrating the evidences issued by independently developed face and speaker recognition modules. Each recognition module exhibits different statistical characteristics in representing its confidence in the recognition. Therefore, it is essential to transform the evidences into a normalized form before integrating the results. This paper introduces a novel approach to integrating mutually independent multiple evidences to achieve improved performance. A typical approach to this problem is to model the statistical characteristics of the evidences with a well-known parametric form such as a Gaussian; using the Mahalanobis distance is a good example. However, the characteristics of the evidences often do not fit the parametric models, which results in performance degradation. To overcome this problem, we adopted a discrete PDF that can model the statistical characteristics as they are. To confirm the validity of the proposed method, we used a multi-modal database that consists of 10 registered users and 550 probe data. Each probe datum contains a face image and a voice signal. Face and speaker recognition modules were applied to generate the respective evidences. The experiment showed an improvement of 11.27% in accuracy over the individual recognizers, which is 2.72% better than the traditional Mahalanobis distance approach. ©2007 IEEE.
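The core idea in the abstract is to normalize each recognizer's raw confidence score through a discrete PDF (in effect, a histogram estimated from development data) rather than a parametric model such as a Gaussian, and then combine the normalized evidences across the two modalities. The following is a minimal sketch of one plausible way to do this in Python; the per-user histograms, bin settings, and product-rule fusion are illustrative assumptions and are not taken from the paper itself.

import numpy as np

def build_discrete_pdf(scores, n_bins=50, value_range=(0.0, 1.0)):
    # Estimate a discrete PDF of a recognizer's raw confidence scores
    # from development data (assumption: scores lie in [0, 1]).
    hist, edges = np.histogram(scores, bins=n_bins, range=value_range, density=True)
    return hist, edges

def evaluate_pdf(score, hist, edges):
    # Look up the piecewise-constant density value for a raw score.
    idx = np.clip(np.searchsorted(edges, score) - 1, 0, len(hist) - 1)
    return hist[idx]

def integrate_evidence(face_score, speaker_score, face_pdfs, speaker_pdfs):
    # face_pdfs / speaker_pdfs map user_id -> (hist, edges) built from that
    # user's genuine-match scores (an illustrative assumption).
    fused = {}
    for user_id in face_pdfs:
        p_face = evaluate_pdf(face_score, *face_pdfs[user_id])
        p_speaker = evaluate_pdf(speaker_score, *speaker_pdfs[user_id])
        # Treat the two modalities as independent and multiply the
        # normalized evidences.
        fused[user_id] = p_face * p_speaker
    # Decide in favor of the user with the highest combined evidence.
    return max(fused, key=fused.get)

The histogram lookup plays the role of mapping each module's heterogeneous raw score onto a common, data-driven scale, which is what the abstract contrasts with the Mahalanobis distance (i.e., Gaussian) normalization.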
KSP Suggested Keywords
Biometric features, Face Image, Multi-modal, Novel approach, Overall accuracy, Parametric models, Probability Density Function, Probe Data, Robot applications, Service robots, Statistical characteristics