ETRI Knowledge Sharing Platform


Deep Learning on MCUs: Comparative Analysis of Compile and Interpreter based Execution Methods
Cited 0 times in Scopus
Authors
Gunju Park, Seungtae Hong, Jeong-Si Kim
Issue Date
2023-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2023, pp.1338-1340
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC58733.2023.10393613
Abstract
With the rapid advancements in edge deep learning research, the focus has shifted from optimizing on-device inference on smartphones and mobile boards, such as Nvidia's Jetson, to executing deep learning models on the highly constrained computational resources of a microcontroller unit (MCU). The limited resources of these MCU devices pose significant challenges: the flash memory that stores the weights of deep learning models is usually around 1–2 MB, while the SRAM used for managing runtime tensors ranges from roughly 300 KB to 1 MB. These constraints make conventional on-device inference techniques difficult to apply. This paper offers a comprehensive guide for performing AI inference within the restricted computational confines of an MCU and compares the efficiency of two runtime methods for executing deep learning models on the MCU: the compile-based method and the interpreter-based method.
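The distinction between the two runtime methods can be illustrated with a minimal, self-contained sketch (not from the paper; names and ops are hypothetical). An interpreter-based runtime, as in TensorFlow Lite Micro, keeps the model as data and dispatches each operator at runtime, whereas a compile-based runtime fixes the operator sequence as straight-line code at build time, trading flexibility for lower dispatch overhead:

```python
# Two hypothetical kernels standing in for deep-learning operators.
def relu(x):
    return [max(0.0, v) for v in x]

def scale(x, factor=2.0):
    return [v * factor for v in x]

# --- Interpreter-based: the model is data (a serialized op list),
# --- resolved through a dispatch table at runtime.
OP_TABLE = {"RELU": relu, "SCALE": scale}

def run_interpreted(model, tensor):
    for opcode in model:              # per-op runtime dispatch
        tensor = OP_TABLE[opcode](tensor)
    return tensor

# --- Compile-based: the op sequence is baked into straight-line code,
# --- so there is no op list and no dispatch table in memory.
def run_compiled(tensor):
    tensor = scale(tensor)            # direct kernel calls
    tensor = relu(tensor)
    return tensor

model = ["SCALE", "RELU"]
inp = [-1.0, 0.5]
# Both styles compute the same result: [0.0, 1.0]
assert run_interpreted(model, inp) == run_compiled(inp)
```

On an MCU the trade-off matters because the interpreter's op list and dispatch machinery occupy scarce SRAM and flash, while the compiled form can load a new model only by reflashing the firmware.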
KSP Keywords
Comparative analysis, Inference techniques, Microcontroller unit (MCU), SRAM memory, computational resources, deep learning (DL), deep learning models