ETRI-Knowledge Sharing Platform

Details

Conference Paper  High-Degree Feature for Deep Neural Network based Acoustic Model
Authors
정훈, 이성주, 박전규
Publication Date
2018-12
Source
Workshop on Spoken Language Technology (SLT) 2018, pp.1-5
DOI
https://dx.doi.org/10.1109/SLT.2018.8639524
Research Project
18HS3700, Development of Core Technology for Spontaneous-Speech Spoken Dialogue Processing for Language Learning, 이윤근
Abstract
In this paper, we propose to use high-degree features to improve the discrimination performance of Deep Neural Network (DNN) based acoustic models. Thanks to the successful posterior probability estimation of DNNs for high-dimensional features, high-dimensional acoustic features are commonly used in DNN-based acoustic models. Even though it is not clear how DNN-based acoustic models estimate the posterior probability robustly, the use of high-dimensional features rests on the theorem that higher dimensionality improves the separability of patterns. It is also well known that high-degree features increase the linear separability of nonlinear input features. However, little work has explicitly exploited high-degree features in a DNN-based acoustic model. Therefore, in this work, we investigate high-degree features to further improve performance. The proposed approach was evaluated on a Wall Street Journal (WSJ) speech recognition domain. With degree-2 polynomial expansion, it achieved up to a 21.8% error reduction rate on the Eval92 test set, reducing the word error rate from 4.82% to 3.77%.
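The core operation described above is a degree-2 polynomial expansion of the per-frame acoustic feature vector before it is fed to the DNN. The following is a minimal sketch of such an expansion, assuming a generic 40-dimensional frame; the frame size, function name, and use of NumPy are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of degree-2 polynomial feature expansion for one acoustic
# feature frame. The 40-dim frame and the function name are hypothetical;
# the paper's exact feature pipeline may differ.
import numpy as np

def degree2_expand(x: np.ndarray) -> np.ndarray:
    """Return [x, all pairwise products x_i * x_j with i <= j]."""
    i, j = np.triu_indices(len(x))          # upper-triangle index pairs (incl. diagonal)
    quadratic = x[i] * x[j]                 # degree-2 terms, including squares
    return np.concatenate([x, quadratic])   # linear terms + degree-2 terms

# Example: a 40-dim frame expands to 40 + 40*41/2 = 860 dimensions,
# which would replace the raw frame as the DNN acoustic-model input.
frame = np.random.randn(40).astype(np.float32)
expanded = degree2_expand(frame)
print(expanded.shape)  # (860,)
```

For reference, the reported 21.8% relative error reduction is consistent with the quoted word error rates: (4.82 − 3.77) / 4.82 ≈ 0.218.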
KSP Suggested Keywords
DNN-based acoustic model, Deep neural network(DNN), Error reduction, High-dimensional features, Input features, Linear separability, Posterior probability estimation, Reduction rate, Test Set, Wall Street, acoustic features