ETRI Knowledge Sharing Platform


Detailed Information

Journal Article  Deep Compression of Convolutional Neural Networks with Low-rank Approximation
Cited 12 times in Scopus, downloaded 24 times
Authors
Marcella, Seung-Ik Lee
Publication Date
August 2018
Source
ETRI Journal, v.40 no.4, pp.421-434
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
DOI
https://dx.doi.org/10.4218/etrij.2018-0065
Project
18HR1500, Development of ICT Core Technologies for Safe Unmanned Vehicles, Jae-Young Ahn
Abstract
The application of deep neural networks (DNNs) to connect the world with cyber physical systems (CPSs) has attracted much attention. However, DNNs require a large amount of memory and computational cost, which hinders their use in the relatively low-end smart devices that are widely used in CPSs. In this paper, we aim to determine whether DNNs can be efficiently deployed and operated in low-end smart devices. To do this, we develop a method to reduce the memory requirement of DNNs and increase the inference speed, while maintaining the performance (for example, accuracy) close to the original level. The parameters of DNNs are decomposed using a hybrid of canonical polyadic-singular value decomposition, approximated using a tensor power method, and fine-tuned by performing iterative one-shot hybrid fine-tuning to recover from a decreased accuracy. In this study, we evaluate our method on frequently used networks. We also present results from extensive experiments on the effects of several fine-tuning methods, the importance of iterative fine-tuning, and decomposition techniques. We demonstrate the effectiveness of the proposed method by deploying compressed networks in smartphones.
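As a rough illustration of the low-rank idea described in the abstract (not the paper's CP-SVD hybrid or its iterative one-shot fine-tuning), the NumPy sketch below approximates a 3-way weight tensor by a sum of rank-1 terms, each found with an alternating tensor power method and then deflated from the residual. The tensor shape, rank, iteration count, and the synthetic "weight" tensor are arbitrary placeholders chosen for the demo.

import numpy as np

def rank1_power_method(T, n_iter=50, seed=0):
    # One rank-1 term lambda * a (x) b (x) c fitted by alternating power
    # iterations; illustrative only, not the paper's exact algorithm.
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)  # scale of the rank-1 term
    return lam, a, b, c

def low_rank_approx(T, rank):
    # Greedy rank-R approximation: fit a rank-1 term, subtract it, repeat.
    residual, approx = T.copy(), np.zeros_like(T)
    for r in range(rank):
        lam, a, b, c = rank1_power_method(residual, seed=r)
        term = lam * np.einsum('i,j,k->ijk', a, b, c)
        approx += term
        residual -= term
    return approx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic nearly low-rank tensor standing in for a convolution kernel
    # reshaped to (out_channels, in_channels, kernel_height * kernel_width).
    factors = [rng.standard_normal((n, 4)) for n in (16, 8, 9)]
    T = np.einsum('ir,jr,kr->ijk', *factors) + 0.01 * rng.standard_normal((16, 8, 9))
    T_hat = low_rank_approx(T, rank=4)
    err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
    print(f"relative reconstruction error: {err:.3f}")

Storing R rank-1 factors of an I x J x K tensor takes R(I + J + K) values instead of IJK, which is where the memory and inference savings claimed in the abstract come from; the paper's hybrid decomposition and iterative fine-tuning then recover the accuracy lost in the approximation.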
KSP Suggested Keywords
Canonical polyadic, Convolution neural network(CNN), Decomposition techniques, Deep compression, Deep neural network(DNN), Low-rank approximation, Power Method, Smart devices, Tuning method, computational cost, cyber physical system(CPS)