ETRI Knowledge Sharing Platform


Details

Conference Paper: Mobile Convolutional Neural Networks for Facial Expression Recognition
Cited 0 times in Scopus · Downloaded 10 times
Authors
윤창락, 김도현
Publication Date
2020-10
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp.1315-1317
DOI
https://dx.doi.org/10.1109/ICTC49870.2020.9289486
Project
20ZS1200, Fundamental Technology Research on Human-Centered Autonomous Intelligent Systems, 김도현
Abstract
We propose CNN models for facial expression recognition that work well on mobile and embedded devices. Previous studies introduced CNN models for image classification that stack wider filters in depth to increase accuracy. These deep CNN models improve classification accuracy, but they are difficult to deploy on mobile devices because of their large parameter sizes and low responsiveness. We first analyzed MobileNetV2 for facial expression recognition on mobile devices. We then designed CNN models with fewer than 1 million parameters by adjusting the width and depth of the bottlenecks. We trained the proposed CNN models and other mobile CNN models under the same experimental conditions and reviewed the results. The proposed CNN models were further fine-tuned to use fewer than 0.5 million parameters. The fine-tuned CNN models achieved an accuracy of 90.3% for 5 classes and 86.8% for 7 classes on the RAF database.
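The abstract describes shrinking a MobileNetV2-style network by narrowing and shortening its inverted-bottleneck stages until the model fits in a sub-0.5M-parameter budget. As a minimal sketch of how such a budget can be checked (the exact stage widths, expansion factor, stem, and head sizes below are illustrative assumptions, not the authors' actual configuration), the parameter count of a narrowed inverted-residual model can be estimated in plain Python:

```python
# Rough parameter-count estimate for a narrowed MobileNetV2-style model.
# The stage configuration below is a hypothetical example: the abstract only
# states that bottleneck width and depth were adjusted to stay under 0.5M.

def inverted_residual_params(in_ch, out_ch, expansion):
    """Parameters of one inverted-residual block (bias-free convs + BatchNorm)."""
    hidden = in_ch * expansion
    params = 0
    if expansion != 1:                       # 1x1 expansion conv + BN
        params += in_ch * hidden + 2 * hidden
    params += 9 * hidden + 2 * hidden        # 3x3 depthwise conv + BN
    params += hidden * out_ch + 2 * out_ch   # 1x1 projection conv + BN
    return params

def model_params(stages, num_classes=7, stem_ch=16, head_ch=256):
    """Total parameters: stem conv, bottleneck stages, head conv, classifier."""
    total = 3 * stem_ch * 9 + 2 * stem_ch            # 3x3 RGB stem conv + BN
    in_ch = stem_ch
    for expansion, out_ch, repeats in stages:
        for _ in range(repeats):
            total += inverted_residual_params(in_ch, out_ch, expansion)
            in_ch = out_ch
    total += in_ch * head_ch + 2 * head_ch           # 1x1 head conv + BN
    total += head_ch * num_classes + num_classes     # linear classifier (bias)
    return total

# (expansion, output channels, repeats) -- hypothetical narrow configuration
STAGES = [(1, 16, 1), (4, 24, 2), (4, 32, 3), (4, 64, 2), (4, 96, 1)]

if __name__ == "__main__":
    print(f"estimated parameters: {model_params(STAGES):,}")
```

With this (assumed) configuration the estimate lands well under the 0.5M-parameter ceiling reported for the fine-tuned models, which illustrates why trimming bottleneck width and depth, rather than input resolution alone, is the lever the paper adjusts.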
KSP Suggested Keywords
Convolution neural network(CNN), Deep CNN, Facial Expression Recognition(FER), Image classification, Mobile and embedded devices, Mobile devices, classification accuracy