ETRI Knowledge Sharing Platform


Details

Conference Paper: A Performance Comparison of Loss Functions
Cited 2 times in Scopus, downloaded 1 time
Authors
조관태, 노종혁, 김영삼, 조상래
Publication Date
October 2019
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2019, pp.1146-1151
DOI
https://dx.doi.org/10.1109/ICTC46691.2019.8939902
Project
19HH3900, Development of Portal-Device Security Technology Connecting Human (H), Infrastructure (I), and Service (S) in Highly Reliable Intelligent Information Services, 조상래
Abstract
Generally, a deep neural network learns by way of a loss function, which evaluates how well a given dataset is predicted by a particular network architecture (or network model). If the prediction deviates too far from the real data, the loss function produces a very large value. Progressively, with the help of an optimization function, the loss function lowers the prediction error by providing the network architecture with information that can adjust its weights. Thus, the loss function plays an important role in training the network architecture. Recently, several researchers have studied various loss functions such as Softmax, Modified softmax, Angular softmax, Additive-Margin softmax, Arcface, Center, and Focal losses. In this manuscript, we propose a new and simple loss function that adds the existing loss functions together. In addition, we conduct experiments with the MNIST dataset to compare the performance of all loss functions, including the proposed one and the existing ones. The experiments show that the proposed loss function is visibly superior in its ability to classify digit images. The experimental results also indicate that the Arcface and Additive-Margin softmax loss functions satisfy a predefined test accuracy most quickly under two- and three-dimensional embedding, respectively. The fast learning ability of both loss functions has the advantage of providing relatively high accuracy even when the amount of training data is small.
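The abstract's central idea — forming a new loss by simply summing existing losses — can be illustrated with a minimal NumPy sketch. This is not the authors' code; it combines two of the losses the abstract names (plain softmax cross-entropy and a focal loss with an assumed focusing parameter gamma = 2), and the function names and toy logits are illustrative only.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_loss(logits, labels):
    # Standard cross-entropy over softmax probabilities.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def focal_loss(logits, labels, gamma=2.0):
    # Focal loss: down-weights well-classified examples by (1 - p_t)^gamma.
    p = softmax(logits)
    pt = p[np.arange(len(labels)), labels]
    return -np.mean((1.0 - pt) ** gamma * np.log(pt))

def combined_loss(logits, labels):
    # The abstract's proposal, sketched: just add existing losses together.
    return softmax_loss(logits, labels) + focal_loss(logits, labels)

# Toy batch: two examples, three classes (hypothetical values).
logits = np.array([[4.0, 1.0, 0.5],
                   [0.2, 3.0, 0.1]])
labels = np.array([0, 1])
print(combined_loss(logits, labels))
```

During training, the gradient of such a sum is simply the sum of the individual gradients, so any existing optimizer can minimize it unchanged; in a framework with automatic differentiation the same one-line addition of loss terms applies directly.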
Keywords
deep learning, digit recognition, DNN, loss function, MNIST
KSP Suggested Keywords
Deep neural network(DNN), Fast learning, High accuracy, Learning ability, MNIST Dataset, Network Architecture, Network model, Performance comparison, Prediction error, Real data, Three dimensional(3D)