ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Speech Emotion Recognition Using Convolutional and Recurrent Neural Networks
Cited 209 times in Scopus
Authors
임우택, 장대영, 이태진
Publication Date
December 2016
Source
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA-ASC) 2016, pp.1-4
DOI
https://dx.doi.org/10.1109/APSIPA.2016.7820699
Research Project
16MR1100, Development of Channel/Object Fusion-Type Hybrid Audio Content Production and Playback Technology, 장대영
Abstract
With rapid developments in the design of deep architecture models and learning algorithms, methods referred to as deep learning have come to be widely used in a variety of research areas such as pattern recognition, classification, and signal processing. Deep learning methods are being applied in various recognition tasks such as image, speech, and music recognition. Convolutional Neural Networks (CNNs) in particular show remarkable recognition performance on computer vision tasks, and Recurrent Neural Networks (RNNs) show considerable success in many sequential data processing tasks. In this study, we investigate the performance of a Speech Emotion Recognition (SER) algorithm based on CNNs and RNNs trained on an emotional speech database. The main goal of our work is to propose an SER method based on concatenated CNNs and RNNs without using any traditional hand-crafted features. Applied to an emotional speech database, the proposed method was verified to achieve better classification accuracy than conventional classification methods.
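To make the concatenated architecture in the abstract concrete, the sketch below runs a CNN front-end over a spectrogram and feeds the resulting feature sequence into an RNN classifier. This is not the authors' implementation: the layer sizes, the 40-bin input, the vanilla tanh RNN, and the four emotion classes are all illustrative assumptions, written in plain NumPy with untrained random weights.

```python
# Minimal sketch (assumed shapes, not the paper's code): CNN features -> RNN
# -> softmax emotion classifier, with no hand-crafted acoustic features.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=2):
    """x: (time, freq_bins); kernels: (n_filters, width, freq_bins).
    Slides filters over time and applies ReLU; returns (time_out, n_filters)."""
    n_filters, width, _ = kernels.shape
    t_out = (x.shape[0] - width) // stride + 1
    out = np.zeros((t_out, n_filters))
    for t in range(t_out):
        patch = x[t * stride:t * stride + width]        # (width, freq_bins)
        out[t] = np.maximum(0.0, np.einsum('wf,nwf->n', patch, kernels))
    return out

def rnn_last_state(seq, w_in, w_rec):
    """Vanilla tanh RNN over the CNN feature sequence; returns final state."""
    h = np.zeros(w_rec.shape[0])
    for x_t in seq:
        h = np.tanh(x_t @ w_in + h @ w_rec)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in log-mel spectrogram: 100 frames x 40 frequency bins (fake data).
spec = rng.standard_normal((100, 40))

# CNN stage: 16 filters of width 5 over time; in the paper these are learned
# jointly with the rest of the network instead of hand-crafted features.
kernels = rng.standard_normal((16, 5, 40)) * 0.1
feat = conv1d(spec, kernels)                 # -> (48, 16)

# RNN stage consumes the CNN feature sequence and summarizes the utterance.
w_in = rng.standard_normal((16, 32)) * 0.1
w_rec = rng.standard_normal((32, 32)) * 0.1
h = rnn_last_state(feat, w_in, w_rec)        # -> (32,)

# Softmax over 4 illustrative emotion classes (e.g. angry/happy/sad/neutral).
w_out = rng.standard_normal((32, 4)) * 0.1
probs = softmax(h @ w_out)
print(probs.shape)                           # -> (4,)
```

In a real system the convolution, recurrence, and classifier would be trained end-to-end with backpropagation on labeled emotional speech; the point here is only the data flow: spectrogram → convolutional feature maps → recurrent summary → class probabilities.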
KSP Suggested Keywords
Architecture Model, Classification method, Computer Vision(CV), Convolution neural network(CNN), Data processing, Deep architecture, Learning methods, Music recognition, Pattern recognition, Recurrent Neural Network(RNN), Sequential data