ETRI Knowledge Sharing Platform


Automated optimization for memory-efficient high-performance deep neural network accelerators
Cited 8 times in Scopus; downloaded 181 times.
Authors
HyunMi Kim, Chun-Gi Lyuh, Youngsu Kwon
Issue Date
2020-08
Citation
ETRI Journal, v.42, no.4, pp.505-517
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.4218/etrij.2020-0125
Abstract
The increasing size and complexity of deep neural networks (DNNs) necessitate the development of efficient high-performance accelerators. An efficient memory structure and operating scheme provide an intuitive solution for high-performance accelerators along with dataflow control. Furthermore, the processing of various neural networks (NNs) requires a flexible memory architecture, programmable control scheme, and automated optimizations. We first propose an efficient architecture with flexibility while operating at a high frequency despite the large memory and PE-array sizes. We then improve the efficiency and usability of our architecture by automating the optimization algorithm. The experimental results show that the architecture increases the data reuse; a diagonal write path improves the performance by 1.44× on average across a wide range of NNs. The automated optimizations significantly enhance the performance from 3.8× to 14.79× and further provide usability. Therefore, automating the optimization as well as designing an efficient architecture is critical to realizing high-performance DNN accelerators.
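The abstract describes automating an optimization algorithm so the accelerator's memory hierarchy is used efficiently (maximizing data reuse). The paper's actual algorithm is not reproduced here; as a generic illustration of this class of "automated optimization", the sketch below brute-forces convolution loop-tile sizes under an on-chip buffer budget, picking the tiling with the lowest estimated off-chip traffic. All function names, the cost model, and the buffer/layer parameters are assumptions for illustration only.

```python
# Illustrative sketch only: exhaustive search over loop-tile sizes that fit an
# on-chip buffer, minimizing a simple off-chip traffic estimate (a proxy for
# maximizing data reuse). This is NOT the paper's algorithm; the cost model
# and all names are assumptions.
from itertools import product

def conv_tile_search(C, K, H, W, R, buffer_bytes, elem_bytes=2):
    """Pick tile sizes (tc, tk, th, tw) over input channels C, output
    channels K, and output height/width H, W (filter size R x R) that fit
    in buffer_bytes and minimize estimated DRAM traffic."""
    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    best = None
    for tc, tk, th, tw in product(divisors(C), divisors(K),
                                  divisors(H), divisors(W)):
        # On-chip footprint: input tile (with halo) + weight tile + output tile.
        footprint = (tc * (th + R - 1) * (tw + R - 1)
                     + tk * tc * R * R
                     + tk * th * tw) * elem_bytes
        if footprint > buffer_bytes:
            continue
        # Traffic estimate: each operand is re-fetched once per outer tile loop.
        trips_k = K // tk
        trips_h, trips_w = H // th, W // tw
        traffic = (trips_k * C * H * W                 # inputs reread per K-tile
                   + trips_h * trips_w * K * C * R * R  # weights reread per spatial tile
                   + K * H * W) * elem_bytes            # outputs written once
        if best is None or traffic < best[0]:
            best = (traffic, (tc, tk, th, tw))
    return best

# Hypothetical example: a small convolution layer and a 64 KiB buffer.
print(conv_tile_search(C=16, K=32, H=14, W=14, R=3, buffer_bytes=64 * 1024))
```

In a real flow, the exhaustive loop would be replaced by a pruned or analytical search, and the traffic model calibrated to the actual memory architecture, but the structure (enumerate candidate schedules, reject those that overflow on-chip memory, rank by a reuse-aware cost) is the common shape of such optimizers.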
KSP Keywords
Automated optimization, Control scheme, Deep neural network (DNN), High frequency (HF), High performance, Large memory, Memory architecture, Memory structure, Operating scheme, Optimization algorithm, Wide range
This work is distributed under the terms of the Korea Open Government License (KOGL) Type 4 (Type 1 + Commercial Use Prohibition + Change Prohibition).