ETRI Knowledge Sharing Platform


Detailed Information

Journal Article
CNN Implementation from the Viewpoint of Mini-batch DNN Training for Efficient Second-Order Optimization
Authors
송화전, 정호영, 박전규
Publication Date
2016-06
Source
Phonetics and Speech Sciences (말소리와 음성과학), v.8 no.2, pp.22-30
ISSN
2005-8063
Publisher
The Korean Society of Speech Sciences (한국음성학회)
DOI
https://dx.doi.org/10.13064/KSSS.2016.8.2.023
Project
16MS1700, Development of Core Technology for Free-Conversational Spoken Dialogue Processing for Language Learning, 이윤근
Abstract
This paper describes implementation schemes for CNNs from the viewpoint of mini-batch DNN training, aimed at efficient second-order optimization. By simply rearranging an input image into a sequence of local patches, the CNN parameters can be trained with the same update procedure used for a DNN, which is exactly equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied directly to train the CNN parameters. In both image recognition on the MNIST DB and syllable-level automatic speech recognition, the proposed CNN implementation scheme shows better performance than a DNN-based one.
KSP Suggested Keywords
Higher performance, Local patches, automatic speech recognition(ASR), image recognition, second-order
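The core idea of the abstract — rearranging an input image into a sequence of local patches so that convolution becomes a matrix product over a patch mini-batch — is essentially the im2col construction. The sketch below is illustrative only (the function name `image_to_patch_batch` and the 4×4 example are not from the paper); it shows how each sliding window becomes one row of a mini-batch, so a convolutional filter can be trained with the same matrix-based update procedure as a DNN layer.

```python
import numpy as np

def image_to_patch_batch(image, patch_size, stride=1):
    """Rearrange a 2-D image into a mini-batch of flattened local patches.

    Each (patch_size x patch_size) sliding window becomes one row, so
    convolving the image with a filter equals multiplying this patch
    matrix by the flattened filter -- i.e., one mini-batch DNN-style step.
    """
    h, w = image.shape
    rows = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            rows.append(image[i:i + patch_size, j:j + patch_size].ravel())
    return np.stack(rows)  # shape: (num_patches, patch_size * patch_size)

# A 4x4 image with 3x3 patches yields a (4, 9) patch mini-batch.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
patches = image_to_patch_batch(img, patch_size=3)
print(patches.shape)  # (4, 9)

# Convolution as a single matrix-vector product over the patch batch:
filt = np.ones(9) / 9.0    # flattened 3x3 mean filter
conv_out = patches @ filt  # matches a stride-1 "valid" convolution
```

Because the patch matrix makes every layer update a plain matrix operation over a mini-batch, second-order optimizers that already work for mini-batch DNN training can be reused for the convolutional parameters without any CNN-specific machinery, which is the equivalence the abstract relies on.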