ETRI-Knowledge Sharing Platform

Towards an efficient dataflow-flexible accelerator by finding optimal dataflows of DNNs
Authors
Hyunjun Kim, Whoi Ree Ha, Yongseok Lee, Dongju Lee, Jongwon Lee, Deumji Woo, Jonghee Yoon, Jemin Lee, Yongin Kwon, Yunheung Paek
Issue Date
2026-03
Citation
Future Generation Computer Systems, v.176, pp.1-10
ISSN
0167-739X
Publisher
Elsevier
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.future.2025.108123
Abstract
This paper proposes a new dataflow-flexible accelerator design that addresses the limitations of existing heterogeneous dataflow accelerators (HDAs) in handling the computation of multiple deep neural network (DNN) models. The design offers increased dataflow flexibility and higher efficiency compared to existing works. The accelerator utilizes a fixed set of representative dataflows implemented as operating modes and switches between them dynamically. A design space exploration (DSE) tool is leveraged to evaluate the efficiency of candidate dataflows and determine the optimal number and types of operating modes. Each layer of the target DNN models is assessed with different operating modes to select the optimal mode for that layer. In addition, two supplementary optimization techniques are adopted to reduce the overheads of supporting a multitude of dataflows. One minimizes the number of dataflow transitions, which incur severe overheads. The other maximizes the reuse of hardware components associated with supporting multiple dataflows. By identifying redundant hardware components, the proposed design minimizes chip area, another aspect where dataflow-flexible accelerators suffer. Experimental results demonstrate that our algorithm achieves greater dataflow flexibility with high efficiency. Compared to HDA, our design achieves, on average, 34.6% lower latency at the cost of 6.4% area and negligible energy overhead.
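The per-layer mode selection and transition minimization described above can be illustrated with a small dynamic-programming sketch. The code below is not the paper's algorithm; the mode names, the per-layer latency table, and the single scalar transition cost are hypothetical stand-ins for the estimates a DSE tool would supply.

```python
# Illustrative sketch (assumed interface, not the paper's implementation):
# choose one operating mode (dataflow) per DNN layer so that total latency
# plus dataflow-transition overhead is minimized, via dynamic programming.
from typing import Dict, List, Tuple


def select_modes(
    latency: List[Dict[str, float]],  # latency[l][m]: estimated latency of layer l under mode m
    transition_cost: float,           # overhead charged whenever the dataflow mode changes
) -> Tuple[List[str], float]:
    """Return a per-layer mode schedule and its total cost."""
    modes = list(latency[0].keys())
    # best[m]: minimal cost of layers 0..l when layer l runs in mode m
    best = {m: latency[0][m] for m in modes}
    back_pointers: List[Dict[str, str]] = []

    for l in range(1, len(latency)):
        new_best, back = {}, {}
        for m in modes:
            # stay in the same mode for free, or pay transition_cost to switch
            prev, cost = min(
                ((p, best[p] + (0.0 if p == m else transition_cost)) for p in modes),
                key=lambda pc: pc[1],
            )
            new_best[m] = cost + latency[l][m]
            back[m] = prev
        best = new_best
        back_pointers.append(back)

    # backtrack from the cheapest final mode to recover the full schedule
    last = min(best, key=best.get)
    schedule = [last]
    for back in reversed(back_pointers):
        schedule.append(back[schedule[-1]])
    schedule.reverse()
    return schedule, best[last]


# Hypothetical usage with two made-up modes ("WS", "OS") over three layers.
lat = [{"WS": 10.0, "OS": 12.0}, {"WS": 9.0, "OS": 5.0}, {"WS": 8.0, "OS": 6.0}]
print(select_modes(lat, transition_cost=3.0))
```

A larger transition cost pushes the schedule toward fewer mode switches, which mirrors the abstract's first supplementary optimization; the hardware-reuse optimization is orthogonal and not modeled here.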
KSP Keywords
Chip area, Deep neural network(DNN), Design space exploration, Different operating modes, Higher efficiency, Optimal number, Optimization techniques, accelerator design, energy overhead, flexible accelerator, high efficiency
This work is distributed under the terms of the Creative Commons License (CC BY).