ETRI Knowledge Sharing Platform

Tailored Channel Pruning: Achieve Targeted Model Complexity Through Adaptive Sparsity Regularization
Cited 1 time in Scopus · Downloaded 137 times
Authors
Suwoong Lee, Yunho Jeon, Seungjae Lee, Junmo Kim
Issue Date
2025-01
Citation
IEEE Access, v.13, pp.12113-12126
ISSN
2169-3536
Publisher
Institute of Electrical and Electronics Engineers Inc.
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1109/ACCESS.2025.3529465
Abstract
In deep learning, the size and complexity of neural networks have increased rapidly in pursuit of higher performance. However, this poses a challenge when such networks are deployed in resource-limited environments, such as mobile devices, particularly when the network's performance must be preserved. To address this problem, structured pruning has been widely studied, as it effectively reduces a network's size with little impact on performance. To maximize a model's performance under limited resources, it is crucial to 1) utilize all available resources and 2) maximize performance within these limitations. However, existing pruning methods often require repeated iterations of training and pruning, or many experiments to find hyperparameters that satisfy a given budget, or they forcibly truncate parameters to meet the budget, resulting in performance loss. To solve this problem, we propose a novel channel pruning method called Tailored Channel Pruning. Given a target budget (e.g., FLOPs and parameters), our method outputs a tailored network that automatically takes the budget into account during training and satisfies the target budget. During the integrated training and pruning process, our method adaptively controls sparsity regularization and selects important weights that can help maximize accuracy within the target budget. Through various experiments on the CIFAR-10 and ImageNet datasets, we demonstrate the effectiveness of the proposed method and achieve state-of-the-art accuracy after pruning.
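The abstract describes selecting important channels so that the pruned network fits a target budget (e.g., FLOPs). As a rough illustration of this idea only — not the paper's actual algorithm, whose adaptive regularization and selection criteria are detailed in the full text — the sketch below ranks channels by a hypothetical importance score (such as the magnitude of a learned scaling factor under sparsity regularization) and keeps them greedily until a FLOPs budget is exhausted:

```python
# Illustrative sketch, not the method from the paper: budget-aware channel
# selection. `importance` and `flops_per_channel` are hypothetical inputs
# (e.g., learned channel scale magnitudes and per-channel compute costs).

def select_channels(importance, flops_per_channel, flops_budget):
    """Return sorted indices of channels kept within the FLOPs budget."""
    # Rank channels from most to least important.
    order = sorted(range(len(importance)),
                   key=lambda i: importance[i], reverse=True)
    kept, used = [], 0.0
    for i in order:
        # Keep a channel only if it still fits in the remaining budget.
        if used + flops_per_channel[i] <= flops_budget:
            kept.append(i)
            used += flops_per_channel[i]
    return sorted(kept)

# Toy example: 6 channels, each costing 10 MFLOPs, with a 30 MFLOPs budget.
importance = [0.9, 0.05, 0.7, 0.01, 0.4, 0.3]
keep = select_channels(importance, [10.0] * 6, 30.0)
print(keep)  # the three most important channels fit: [0, 2, 4]
```

In practice (and in the paper's integrated approach), selection is coupled with training rather than applied once after the fact; this snippet only conveys the budget constraint itself.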
KSP Keywords
Adaptive sparsity, CIFAR-10, Higher performance, Integrated training, Limited resources, Mobile devices, Performance loss, Pruning method, Sparsity regularization, deep learning (DL), model complexity
This work is distributed under the terms of the Creative Commons License (CC BY-NC-ND).