ETRI-Knowledge Sharing Platform

Mobile Convolutional Neural Networks for Facial Expression Recognition
Authors: ChangRak Yoon, DoHyun Kim
Issue Date: 2020-10
Citation: International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp. 1315-1317
Publisher: IEEE
Language: English
Type: Conference Paper
DOI: https://dx.doi.org/10.1109/ICTC49870.2020.9289486
Abstract
We propose CNN models for facial expression recognition that run well on mobile and embedded devices. Previous studies increased image classification accuracy by stacking wider filters in deeper networks. These deep CNN models improve classification accuracy, but they are difficult to deploy on mobile devices because of their large parameter counts and low responsiveness. We first analyzed MobileNetV2 for facial expression recognition on mobile devices. We then designed CNN models with fewer than 1 million parameters by adjusting the width and depth of the bottlenecks. We trained the proposed CNN models and other mobile CNN models under the same experimental conditions and reviewed the results. The proposed CNN models were further fine-tuned to use fewer than 0.5 million parameters. The fine-tuned CNN models achieved an accuracy of 90.3% for 5 classes and 86.8% for 7 classes on the RAF database.
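To illustrate the kind of width and depth adjustment the abstract describes, the following is a minimal PyTorch sketch of a narrow, shallow MobileNetV2-style classifier for 7 expression classes. The channel widths, expansion factors, number of bottlenecks, and 112x112 input size are illustrative assumptions, not the configuration reported in the paper; the sketch only shows how shrinking the inverted-residual bottlenecks keeps the parameter count well under 1 million.

```python
# Minimal MobileNetV2-style facial expression classifier (sketch).
# Widths/depths below are illustrative, not the paper's exact design.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV2 inverted-residual bottleneck: expand -> depthwise -> project."""

    def __init__(self, in_ch, out_ch, stride, expand):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),         # 1x1 expansion
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),             # 3x3 depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),         # 1x1 linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return x + self.block(x) if self.use_residual else self.block(x)


class MobileFER(nn.Module):
    """Narrow, shallow MobileNetV2 variant for facial expression recognition."""

    def __init__(self, num_classes=7):
        super().__init__()
        # (out_channels, stride, expansion) per bottleneck -- assumed values
        cfg = [(16, 1, 1), (24, 2, 4), (24, 1, 4),
               (32, 2, 4), (32, 1, 4), (64, 2, 4), (96, 1, 4)]
        layers = [nn.Conv2d(3, 16, 3, 2, 1, bias=False),
                  nn.BatchNorm2d(16), nn.ReLU6(inplace=True)]
        in_ch = 16
        for out_ch, stride, expand in cfg:
            layers.append(InvertedResidual(in_ch, out_ch, stride, expand))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    model = MobileFER(num_classes=7)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params:,}")            # well under 1M with these widths
    out = model(torch.randn(1, 3, 112, 112))      # e.g. a 112x112 aligned face crop
    print(out.shape)                              # torch.Size([1, 7])
```

For comparison, the standard MobileNetV2 has on the order of 3.5 million parameters, so most of the reduction in a sketch like this comes from narrower bottleneck channels and fewer repeated blocks rather than from the classifier head.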
KSP Keywords: Convolutional neural network (CNN), Deep CNN, Facial expression recognition (FER), Image classification, Mobile and embedded devices, Mobile devices, Classification accuracy