ETRI Knowledge Sharing Platform



Detailed Information

Journal Article
Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
Cited 0 times in Scopus, downloaded 985 times
Authors
Jemin Lee, Misun Yu, Yongin Kwon, Taeho Kim
Publication Date
July 2022
Source
Future Generation Computer Systems, v.132, pp.124-135
ISSN
0167-739X
Publisher
Elsevier
DOI
https://dx.doi.org/10.1016/j.future.2022.02.005
Research Project
21HS4700, Development of Neuromorphic Computing SW Platform Technology for Artificial Intelligence Systems, Taeho Kim
Abstract
To adopt convolutional neural networks (CNN) for a range of resource-constrained targets, it is necessary to compress the CNN models by performing quantization, whereby the full-precision representation is converted to a lower-bit representation. To overcome problems such as sensitivity to the training dataset, high computational requirements, and large time consumption, post-training quantization methods that do not require retraining have been proposed. In addition, to compensate for the accuracy drop without retraining, previous studies on post-training quantization have proposed several complementary methods: calibration, schemes, clipping, granularity, and mixed-precision. To generate a quantized model with minimal error, it is necessary to study all possible combinations of the methods because each of them is complementary and the CNN models have different characteristics. However, an exhaustive or a heuristic search is either too time-consuming or suboptimal. To overcome this challenge, we propose an auto-tuner known as Quantune, which builds a gradient tree boosting model to accelerate the search for the configurations of quantization and reduce the quantization error. We evaluate and compare Quantune with the random, grid, and genetic algorithms. The experimental results show that Quantune reduces the search time for quantization by approximately 36.5× with an accuracy loss of 0.07% to 0.65% across six CNN models, including the fragile ones (MobileNet, SqueezeNet, and ShuffleNet). To support multiple targets and adopt continuously evolving quantization works, Quantune is implemented on a full-fledged compiler for deep learning as an open-sourced project.
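The abstract's core idea, fitting a gradient tree boosting model on a few measured quantization configurations and using its predictions to steer the search, can be sketched roughly as follows. This is an illustrative sketch only: the knob names, feature encoding, and accuracy numbers are hypothetical, and scikit-learn's `GradientBoostingRegressor` stands in for the XGBoost model the paper uses.

```python
# Quantune-style search sketch (illustrative, not the paper's exact setup).
# A gradient tree boosting model predicts accuracy from a quantization
# configuration's one-hot features; the best-predicted untried config is
# evaluated next instead of exhaustively measuring all configs.
from itertools import product
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical knobs drawn from the taxonomy in the abstract
# (calibration, scheme, granularity); values are assumptions.
KNOBS = {
    "calibration": ["minmax", "entropy", "percentile"],
    "scheme": ["symmetric", "asymmetric"],
    "granularity": ["per_tensor", "per_channel"],
}

def encode(cfg):
    """One-hot encode a configuration dict into a flat feature vector."""
    feats = []
    for knob, choices in KNOBS.items():
        feats.extend(1.0 if cfg[knob] == c else 0.0 for c in choices)
    return feats

# Full candidate space: 3 * 2 * 2 = 12 configurations.
space = [dict(zip(KNOBS, vals)) for vals in product(*KNOBS.values())]

# Pretend we already measured top-1 accuracy for a few sampled configs
# (dummy numbers for illustration).
measured = space[:5]
accuracies = [0.70, 0.68, 0.71, 0.66, 0.69]

model = GradientBoostingRegressor(n_estimators=50, random_state=0)
model.fit([encode(c) for c in measured], accuracies)

# Rank the unmeasured configs by predicted accuracy; only the most
# promising one would actually be quantized and evaluated next.
remaining = space[5:]
preds = model.predict([encode(c) for c in remaining])
best = remaining[max(range(len(remaining)), key=lambda i: preds[i])]
```

The payoff is that only a handful of configurations are ever measured on the real model, which is how the paper reports cutting search time versus exhaustive or heuristic search.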
Keywords
Deep learning compiler, Model compression, Neural networks, Quantization
KSP Suggested Keywords
Accuracy loss, Bit representation, Complementary methods, Computational requirements, Convolution neural network(CNN), Genetic Algorithm, Model compression, Quantization error, Resource-constrained, Search time, Time consumption
This work is available under the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.