ETRI Knowledge Sharing Platform

Rethinking Group Fisher Pruning for Efficient Label-Free Network Compression
Authors
Jong-Ryul Lee, Yong-Hyuk Moon
Issue Date
2022-11
Citation
British Machine Vision Conference (BMVC) 2022, pp.1-12
Publisher
BMVA 
Language
English
Type
Conference Paper
Abstract
Group Fisher Pruning is a powerful gradient-based channel pruning method for convolutional neural networks. Although it is highly effective and convenient for allocating sparsity across layers, its pruning process is prohibitively expensive for large neural networks. In addition, while it was designed to handle neural networks with residual connections, it still cannot handle concatenation-type connections such as those in DenseNet. These drawbacks make Group Fisher Pruning difficult to apply to large or complex neural networks. Motivated by these limitations, we propose an improved method based on Group Fisher Pruning that targets both efficiency and applicability. For efficiency, we parameterize the number of channels pruned at each pruning step and demonstrate that it can be much larger than one. For applicability, we devise a formal algorithm for pruning DenseNet-style neural networks. In addition, we devise a knowledge distillation-based channel importance scoring scheme that enables label-free channel pruning, which is crucial for exploiting unlabeled data from edge devices. To demonstrate the superiority of our method, we conduct extensive experiments on label-free channel pruning. Our method prunes neural networks up to two orders of magnitude faster than Group Fisher Pruning with comparable accuracy. Notably, our method requires no labels for pruning and retraining, whereas Group Fisher Pruning does.
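
The abstract describes a Fisher-style, gradient-based channel scoring driven by a knowledge distillation loss instead of labels. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes PyTorch, a frozen copy of the unpruned network acting as the teacher, per-channel gate tensors ("gates") that the student multiplies onto its layer outputs, and a KL-divergence distillation loss on unlabeled inputs; the squared gradient of that loss with respect to each gate serves as an empirical Fisher importance score.

# Hypothetical sketch of label-free, Fisher-style channel scoring.
# Assumptions (not taken from the paper): the student network already
# multiplies each layer's output channels by the corresponding gate tensor,
# and the teacher is a frozen copy of the unpruned network.
import torch
import torch.nn.functional as F

def fisher_channel_importance(student, teacher, gates, unlabeled_loader, device="cpu"):
    """Accumulate squared-gradient (Fisher) importance for each channel gate.

    gates: dict mapping layer name -> 1D tensor of per-channel gate variables
           (initialized to 1.0, requires_grad=True).
    Returns a dict of per-channel importance scores (larger = more important).
    """
    importance = {name: torch.zeros_like(g) for name, g in gates.items()}
    teacher.eval()
    student.eval()
    for x in unlabeled_loader:                      # no labels are used
        x = x.to(device)
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)
        # Label-free distillation loss: match the teacher's output distribution.
        loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                        F.softmax(t_logits, dim=1),
                        reduction="batchmean")
        grads = torch.autograd.grad(loss, list(gates.values()))
        for (name, _), g in zip(gates.items(), grads):
            importance[name] += g.detach() ** 2     # Fisher ~ squared gradient
    return importance

At each pruning step, the channels with the smallest accumulated scores could then be removed as a group of size larger than one, which is the kind of batched pruning the abstract credits with the large speed-up over the original one-channel-per-step procedure.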
KSP Keywords
Convolutional neural network (CNN), Edge devices, Improved method, Label-free, Orders of magnitude, Pruning method, Unlabeled data, Knowledge distillation, Large neural networks