ETRI Knowledge Sharing Platform

Conference Paper A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
Cited 1256 times in Scopus
Authors
Junho Yim, Donggyu Joo, Jihoon Bae, Junmo Kim
Issue Date
2017-07
Citation
Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pp.7130-7138
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/CVPR.2017.754
Abstract
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained on a different task, and the student DNN outperforms the original DNN that is trained from scratch.
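The "flow between layers" in the abstract is the paper's FSP (flow of solution procedure) matrix: the inner product of two layers' feature maps, averaged over spatial positions, with the student trained so that its FSP matrices match the teacher's before being trained on its own task. Below is a minimal PyTorch-style sketch of that computation and the matching loss; the function names, the unit per-pair weights, and the assumption that the two feature maps share spatial resolution (arranged via pooling in the paper) are illustrative choices, not the authors' released code.

import torch

def fsp_matrix(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    # Flow (FSP) matrix between two layers' feature maps.
    # feat_a: (N, C1, H, W) and feat_b: (N, C2, H, W) must share spatial size.
    # Returns an (N, C1, C2) tensor whose (i, j) entry is the spatially
    # averaged inner product of channel i of the first layer with channel j
    # of the second layer.
    n, c1, h, w = feat_a.shape
    c2 = feat_b.shape[1]
    a = feat_a.reshape(n, c1, h * w)
    b = feat_b.reshape(n, c2, h * w)
    return torch.bmm(a, b.transpose(1, 2)) / (h * w)

def fsp_loss(teacher_pairs, student_pairs) -> torch.Tensor:
    # Squared L2 distance between teacher and student FSP matrices, summed
    # over the chosen layer pairs and averaged over the batch.
    # Per-pair weights are fixed to 1 here for simplicity (an assumption).
    loss = torch.zeros(())
    for (t_a, t_b), (s_a, s_b) in zip(teacher_pairs, student_pairs):
        g_teacher = fsp_matrix(t_a, t_b).detach()  # teacher is not updated
        g_student = fsp_matrix(s_a, s_b)
        loss = loss + ((g_teacher - g_student) ** 2).sum(dim=(1, 2)).mean()
    return loss

# Example (hypothetical feature maps taken before and after a block group):
# kd_loss = fsp_loss([(teacher_f1, teacher_f2)], [(student_f1, student_f2)])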
KSP Keywords
Deep neural network (DNN), Fast optimization, Inner Product, Knowledge Distillation, Knowledge transfer, Network minimization, Novel technique, Transfer learning, neural network (NN), two layers