ETRI Knowledge Sharing Platform




Details

Conference Paper  CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution
Cited 2 times in Scopus · Downloaded 19 times
Authors
김태호, 권용인, 이제민, 김태호, 하상태
Publication Date
October 2022
Source
European Conference on Computer Vision (ECCV) 2022 (LNCS 13680), pp.651-667
DOI
https://dx.doi.org/10.1007/978-3-031-20044-1_37
Research Project
22HS2300, Development of Neuromorphic Computing SW Platform Technology for Artificial Intelligence Systems, 김태호
Abstract
Mobile devices run deep learning models for various purposes, such as image classification and speech recognition. Due to the resource constraints of mobile devices, researchers have focused on either making a lightweight deep neural network (DNN) model using model pruning or generating efficient code using compiler optimization. Surprisingly, we found that the straightforward integration of model compression and compiler auto-tuning often does not produce the most efficient model for a target device. We propose CPrune, a compiler-informed model pruning approach for efficient target-aware DNN execution that supports an application with a required target accuracy. CPrune makes a lightweight DNN model through informed pruning based on the structural information of subgraphs built during the compiler tuning process. Our experimental results show that CPrune increases the DNN execution speed by up to 2.73× compared to the state-of-the-art TVM auto-tuning while satisfying the accuracy requirement.
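The abstract describes an iterative loop in which pruning decisions are informed by which compiler-generated subgraphs dominate execution time on the target device, stopping before the accuracy requirement is violated. The sketch below is a minimal conceptual illustration of such a loop, not the authors' implementation: the `measure_subgraphs`, `prune_layer`, and `evaluate_accuracy` callables are hypothetical stand-ins for the TVM auto-tuning/profiling, structured pruning, and validation steps, respectively.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Subgraph:
    """A compiler-partitioned subgraph and the model layers it contains."""
    name: str
    layers: List[str]
    latency_ms: float  # measured on the target device during auto-tuning

def compiler_informed_pruning(
    model: object,
    target_accuracy: float,
    measure_subgraphs: Callable[[object], List[Subgraph]],   # hypothetical: compile, tune, profile
    prune_layer: Callable[[object, str], object],             # hypothetical: structured pruning of one layer
    evaluate_accuracy: Callable[[object], float],             # hypothetical: validation accuracy
) -> object:
    """Repeatedly prune the layers of the most expensive subgraph,
    keeping the last model that still meets the accuracy requirement."""
    best = model
    while True:
        subgraphs = measure_subgraphs(best)
        # Focus pruning on the subgraph that dominates device execution time.
        hottest = max(subgraphs, key=lambda s: s.latency_ms)
        candidate = best
        for layer in hottest.layers:
            candidate = prune_layer(candidate, layer)
        if evaluate_accuracy(candidate) < target_accuracy:
            return best  # further pruning would violate the accuracy requirement
        best = candidate
```

In practice the per-subgraph latencies would come from the compiler's tuning records for the target device, which is what distinguishes this compiler-informed scheme from pruning guided only by parameter or FLOP counts.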
KSP Suggested Keywords
Auto-tune, Deep neural network(DNN), Execution speed, Image classification, Mobile devices, Model compression, Resource constraints, Speed-up, Structural information, Target accuracy, Tuning process