ETRI Knowledge Sharing Platform


Deep Compression of Convolutional Neural Networks with Low-rank Approximation
Cited 14 times in Scopus; downloaded 51 times
Authors
Marcella Astrid, Seung-Ik Lee
Issue Date
2018-08
Citation
ETRI Journal, v.40, no.4, pp.421-434
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.4218/etrij.2018-0065
Abstract
The application of deep neural networks (DNNs) to connect the world with cyber physical systems (CPSs) has attracted much attention. However, DNNs require a large amount of memory and computation, which hinders their use in the relatively low-end smart devices that are widely used in CPSs. In this paper, we aim to determine whether DNNs can be efficiently deployed and operated in low-end smart devices. To do this, we develop a method that reduces the memory requirement of DNNs and increases their inference speed, while keeping the performance (for example, accuracy) close to the original level. The parameters of DNNs are decomposed using a hybrid of canonical polyadic-singular value decomposition, approximated using a tensor power method, and fine-tuned by performing iterative one-shot hybrid fine-tuning to recover from the decreased accuracy. We evaluate our method on frequently used networks, and we also present results from extensive experiments on the effects of several fine-tuning methods, the importance of iterative fine-tuning, and decomposition techniques. We demonstrate the effectiveness of the proposed method by deploying compressed networks on smartphones.
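The abstract names a tensor power method for computing the low-rank terms. As a rough illustration only, the sketch below extracts rank-1 components from a 3-way weight tensor by alternating power iterations and subtracts each from the residual. This is not the paper's actual CP-SVD hybrid or its fine-tuning procedure, and all shapes and names are hypothetical; it is a minimal NumPy sketch of the general idea.

    import numpy as np

    def rank1_power_method(T, n_iter=100, seed=0):
        # Alternating (higher-order) power iterations that pull out the
        # dominant rank-1 component lam * (u outer v outer w) of a 3-way tensor T.
        rng = np.random.default_rng(seed)
        u, v, w = (rng.standard_normal(n) for n in T.shape)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, v, w)
            u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', T, u, w)
            v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', T, u, v)
            w /= np.linalg.norm(w)
        lam = np.einsum('ijk,i,j,k->', T, u, v, w)
        return lam, u, v, w

    def greedy_cp(T, rank):
        # Greedy rank-R approximation: subtract each extracted rank-1 term
        # from the residual (a common heuristic; not guaranteed optimal,
        # and not necessarily the procedure used in the paper).
        residual, factors = T.copy(), []
        for r in range(rank):
            lam, u, v, w = rank1_power_method(residual, seed=r)
            factors.append((lam, u, v, w))
            residual -= lam * np.einsum('i,j,k->ijk', u, v, w)
        return factors

    # Hypothetical conv kernel reshaped to 3 ways:
    # (output channels, input channels, kernel height * width).
    T = np.random.default_rng(1).standard_normal((64, 32, 9))
    R = 8
    factors = greedy_cp(T, rank=R)
    T_hat = sum(lam * np.einsum('i,j,k->ijk', u, v, w) for lam, u, v, w in factors)

    print('relative error:', np.linalg.norm(T - T_hat) / np.linalg.norm(T))
    print('stored numbers:', R * (1 + 64 + 32 + 9), 'instead of', T.size)

In a real network, each set of factors would be realized as a sequence of cheap layers in place of the original convolution, and fine-tuning, as the abstract describes, would recover the accuracy lost to the approximation.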
KSP Keywords
Canonical polyadic, Convolution neural network (CNN), Decomposition techniques, Deep compression, Deep neural network (DNN), Fine-tuning, Low-rank approximation, Power method, Smart devices, Tuning method, Computational cost