ETRI Knowledge Sharing Platform

Luthier: Bridging Auto-Tuning and Vendor Libraries for Efficient Deep Learning Inference
Authors
Yongin Kwon, Joohyoung Cha, Sehyeon Oh, Misun Yu, Jeman Park, Jemin Lee
Issue Date
2025-09
Citation
International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES) 2025, pp.1-23
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1145/3759916
Abstract
Recent deep learning compilers commonly adopt auto-tuning approaches that search from scratch for the optimal kernel configuration of each tensor program, requiring tens of hours per operation and neglecting optimization factors that are crucial for parallel computing on asymmetric multicore processors. Meanwhile, hand-optimized inference libraries from hardware vendors deliver high performance but lack the flexibility and automation needed for emerging models. To close this gap, we propose Luthier, which significantly narrows the search space by selecting the best kernel from existing inference libraries, and which employs cost-model-based profiling to quickly determine the most efficient workload distribution for parallel computing. As a result, Luthier achieves up to 2.0x faster execution on convolution-based vision models and transformer-based language models (BERT, GPT) on both CPUs and GPUs, while reducing average tuning time by 95%, compared with ArmNN, AutoTVM, Ansor, ONNXRuntime, and TFLite.
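The abstract describes a two-stage approach: first select the fastest kernel from existing vendor libraries instead of tuning from scratch, then use a profiled cost model to distribute work across asymmetric cores. The Python sketch below is purely illustrative of that idea under stated assumptions; the function names, the benchmarking loop, and the linear proportional cost model are hypothetical and are not Luthier's actual implementation or API.

    # Illustrative sketch only; names and the cost model are assumptions,
    # not Luthier's actual API.
    import time

    def benchmark(kernel, workload, repeats=10):
        """Median wall-clock time of one candidate kernel on a workload."""
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            kernel(workload)
            samples.append(time.perf_counter() - start)
        samples.sort()
        return samples[len(samples) // 2]

    def select_best_kernel(library_kernels, workload):
        # Stage 1: rather than tuning from scratch, pick the fastest
        # implementation already shipped by vendor inference libraries.
        return min(library_kernels, key=lambda k: benchmark(k, workload))

    def split_workload(total_rows, core_speeds):
        # Stage 2: distribute rows across asymmetric cores in proportion
        # to a profiled per-core throughput estimate (a linear cost model).
        total_speed = sum(core_speeds)
        shares = [round(total_rows * s / total_speed) for s in core_speeds]
        shares[-1] += total_rows - sum(shares)  # absorb rounding error
        return shares

For instance, split_workload(1024, [3.0, 3.0, 1.0, 1.0]) assigns 384 rows to each fast core and 128 to each slow core, approximating the asymmetric-core-aware distribution the abstract refers to.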
KSP Keywords
High performance, Parallel computing, Search Space, Workload Distribution, asymmetric multicore processors, auto-tuning, best kernel, cost model, deep learning (DL), language models, model-based
This work is distributed under the terms of the Creative Commons License (CC BY).