ETRI Knowledge Sharing Platform

Details

Conference Paper: A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
Cited 987 times in Scopus, downloaded 20 times
Authors
Junho Yim, Donggyu Joo, Jihoon Bae, Junmo Kim
Publication Date
July 2017
Source
Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pp. 7130-7138
DOI
https://dx.doi.org/10.1109/CVPR.2017.754
Abstract
We introduce a novel technique for knowledge transfer, in which knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. Since a DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of the flow between layers, which is calculated as the inner product between features from two layers. When we compare the student DNN with the original network of the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN trained on a different task, and the student DNN outperforms the original DNN trained from scratch.
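The sketch below illustrates, under stated assumptions, the flow-based distilled knowledge described in the abstract: the inner product between features from two layers, averaged over spatial positions, and a squared-L2 loss between the teacher's and the student's flow matrices. It assumes PyTorch; the function names (fsp_matrix, flow_transfer_loss) and the exact normalization are illustrative choices, not the authors' released implementation.

import torch

def fsp_matrix(feat_a, feat_b):
    # Flow between two layers: inner product of two feature maps,
    # averaged over the h*w spatial positions (assumed normalization).
    # feat_a: (batch, c_a, h, w), feat_b: (batch, c_b, h, w), matching h and w.
    b, c_a, h, w = feat_a.shape
    c_b = feat_b.shape[1]
    a = feat_a.reshape(b, c_a, h * w)
    bmat = feat_b.reshape(b, c_b, h * w)
    return torch.bmm(a, bmat.transpose(1, 2)) / (h * w)  # (batch, c_a, c_b)

def flow_transfer_loss(teacher_pairs, student_pairs):
    # Squared L2 distance between the teacher's and student's flow matrices,
    # accumulated over the selected layer pairs.
    loss = 0.0
    for (t_a, t_b), (s_a, s_b) in zip(teacher_pairs, student_pairs):
        g_teacher = fsp_matrix(t_a, t_b)
        g_student = fsp_matrix(s_a, s_b)
        loss = loss + ((g_teacher - g_student) ** 2).mean()
    return loss

Here teacher_pairs and student_pairs are lists of (feature_from_layer_i, feature_from_layer_j) tensors taken from corresponding positions in the teacher and student networks; which layer pairs to match is a design choice not specified in the abstract.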
KSP Suggested Keywords
Deep neural network(DNN), Fast optimization, Inner Product, Knowledge transfer, Network minimization, Novel technique, Transfer learning, knowledge distillation, two layers