ETRI Knowledge Sharing Platform

Speech emotion recognition using convolutional and recurrent neural networks
Cited 314 times in Scopus
Authors
Wootaek Lim, Daeyoung Jang, Taejin Lee
Issue Date
2016-12
Citation
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA-ASC) 2016, pp.1-4
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/APSIPA.2016.7820699
Abstract
With rapid developments in the design of deep architecture models and learning algorithms, methods referred to as deep learning have come to be widely used in a variety of research areas such as pattern recognition, classification, and signal processing. Deep learning methods are being applied in various recognition tasks such as image, speech, and music recognition. Convolutional Neural Networks (CNNs) in particular show remarkable recognition performance on computer vision tasks, and Recurrent Neural Networks (RNNs) show considerable success on many sequential data processing tasks. In this study, we investigate Speech Emotion Recognition (SER) algorithms based on CNNs and RNNs trained on an emotional speech database. The main goal of our work is to propose an SER method based on concatenated CNNs and RNNs that does not use any traditional hand-crafted features. Applied to an emotional speech database, the proposed method was verified to classify emotions more accurately than conventional classification methods.
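The concatenated CNN–RNN idea from the abstract can be sketched as follows. This is a minimal illustrative PyTorch model, not the architecture from the paper: the layer counts, channel widths, input shape (a 40-bin log-mel spectrogram), and the four-class output are all assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Hypothetical sketch: 2-D convolutions over a spectrogram,
    then an LSTM over the time axis, then an emotion classifier.
    Sizes are illustrative, not those of the paper."""
    def __init__(self, n_mels=40, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # pool frequency only, keep all time frames
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        feat_dim = 32 * (n_mels // 4)   # channels x pooled-frequency bins
        self.rnn = nn.LSTM(feat_dim, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):               # x: (batch, 1, n_mels, time)
        h = self.conv(x)                # (batch, 32, n_mels // 4, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)  # time-major sequence
        _, (h_n, _) = self.rnn(h)       # last hidden state summarizes the clip
        return self.fc(h_n[-1])         # (batch, n_classes) emotion logits

model = CRNN()
logits = model(torch.randn(2, 1, 40, 100))  # two fake 40-mel, 100-frame clips
print(logits.shape)                         # torch.Size([2, 4])
```

Note how the CNN acts as a learned feature extractor directly on the spectrogram, replacing hand-crafted features, while the RNN models the temporal dynamics of the convolutional feature maps.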
KSP Keywords
Architecture Model, Classification method, Computer Vision(CV), Convolution neural network(CNN), Data processing, Deep architecture, Learning methods, Music recognition, Pattern recognition, Recognition performance, Sequential data