ETRI Knowledge Sharing Platform


Details

Journal Article
Automated Optimization for Memory-efficient High-performance Deep Neural Network Accelerators
Cited 7 times in Scopus. Downloaded 141 times.
Authors
김현미, 여준기, 권영수
Publication Date
August 2020
Source
ETRI Journal, v.42 no.4, pp.505-517
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
DOI
https://dx.doi.org/10.4218/etrij.2020-0125
Project
20HS1900, Artificial Intelligence Processor Specialized Research Laboratory, 권영수
Abstract
The increasing size and complexity of deep neural networks (DNNs) necessitate the development of efficient high-performance accelerators. An efficient memory structure and operating scheme provide an intuitive solution for high-performance accelerators along with dataflow control. Furthermore, the processing of various neural networks (NNs) requires a flexible memory architecture, a programmable control scheme, and automated optimizations. We first propose an efficient architecture that remains flexible while operating at a high frequency despite the large memory and PE-array sizes. We then improve the efficiency and usability of our architecture by automating the optimization algorithm. The experimental results show that the architecture increases data reuse; a diagonal write path improves performance by 1.44× on average across a wide range of NNs. The automated optimizations further enhance performance by 3.8× to 14.79× while also improving usability. Therefore, automating the optimization, in addition to designing an efficient architecture, is critical to realizing high-performance DNN accelerators.
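The automated optimization the abstract describes can be illustrated with a minimal sketch: an exhaustive search over tile sizes that minimizes off-chip traffic (a proxy for data reuse) subject to an on-chip buffer budget. The cost model, candidate tile sizes, and function names below are illustrative assumptions for a tiled matrix multiply, not the paper's actual algorithm or architecture.

```python
import math
from itertools import product

def dram_traffic(M, N, K, Tm, Tn, Tk):
    """Hypothetical cost model: DRAM words moved for a tiled matmul
    C[MxN] = A[MxK] @ B[KxN] with tile sizes (Tm, Tn, Tk)."""
    tiles_m = math.ceil(M / Tm)
    tiles_n = math.ceil(N / Tn)
    tiles_k = math.ceil(K / Tk)
    # Each A tile is reloaded once per N-tile, each B tile once per
    # M-tile, and C is streamed once per K-tile pass.
    return M * K * tiles_n + K * N * tiles_m + M * N * tiles_k

def search_tiling(M, N, K, buffer_words):
    """Exhaustively pick the tile sizes with the least DRAM traffic
    whose working set fits in the on-chip buffer."""
    best = None
    for Tm, Tn, Tk in product([8, 16, 32, 64], repeat=3):
        footprint = Tm * Tk + Tk * Tn + Tm * Tn  # A, B, C tiles
        if footprint > buffer_words:
            continue  # working set exceeds the buffer budget
        cost = dram_traffic(M, N, K, Tm, Tn, Tk)
        if best is None or cost < best[0]:
            best = (cost, (Tm, Tn, Tk))
    return best
```

A larger buffer admits larger tiles and therefore never increases the optimal traffic, which is the kind of trade-off such an automated search navigates for each network layer.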
KSP Suggested Keywords
Automated optimization, Control scheme, Deep neural network(DNN), High Frequency(HF), High performance, Large memory, Memory architecture, Memory structure, Operating scheme, Optimization algorithm, Wide range
This work is available under the Korea Open Government License (KOGL) Type 4: source attribution + no commercial use + no modification.