ETRI Knowledge Sharing Platform

Detailed Information

Journal Article: AB9: A Neural Processor for Inference Acceleration
Cited 1 time in Scopus; downloaded 27 times
Authors
조용철, 정재훈, 양정민, 여준기, 김현미, 김찬, 함제석, 최민석, 신경선, 한진호, 권영수
Publication Date
2020.08
Source
ETRI Journal, v.42 no.4, pp.491-504
ISSN
1225-6463
Publisher
Electronics and Telecommunications Research Institute (ETRI)
DOI
https://dx.doi.org/10.4218/etrij.2020-0134
Funded Project
20HS1900, Artificial Intelligence Processor Research Laboratory, 권영수
Abstract
We present AB9, a neural processor for inference acceleration. AB9 consists of a systolic tensor core (STC) neural network accelerator designed to accelerate artificial intelligence applications by exploiting the data reuse and parallelism characteristics inherent in neural networks while providing fast access to large on-chip memory. Complementing the hardware is an intuitive and user-friendly development environment that includes a simulator and an implementation flow that provides a high degree of programmability with a short development time. Along with a 40-TFLOP STC that includes 32k arithmetic units and over 36 MB of on-chip SRAM, our baseline implementation of AB9 consists of a 1-GHz quad-core setup with other various industry-standard peripheral intellectual properties. The acceleration performance and power efficiency were evaluated using YOLOv2, and the results show that AB9 has superior performance and power efficiency to that of a general-purpose graphics processing unit implementation. AB9 has been taped out in the TSMC 28-nm process with a chip size of 17 × 23 mm². Delivery is expected later this year.
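The abstract does not detail the STC microarchitecture, but the data-reuse idea behind systolic tensor accelerators in general can be illustrated with a small simulation. The sketch below models a generic output-stationary systolic array (an assumption for illustration, not AB9's actual design): each processing element holds one output accumulator while operands stream past it, skewed by one cycle per row and column so that matching operands meet at the right element on the right cycle.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an output-stationary systolic array
    computing C = A @ B.

    Rows of A flow left-to-right and columns of B flow top-to-bottom.
    The processing element (PE) at grid position (i, j) accumulates its
    own output c[i, j]; because the operand streams are skewed, PE (i, j)
    receives its k partial products during cycles i + j .. i + j + k - 1.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    # Last useful cycle is (n - 1) + (m - 1) + (k - 1).
    for t in range(n + m + k - 2 + 1):
        for i in range(n):
            for j in range(m):
                step = t - i - j  # which dot-product element arrives now
                if 0 <= step < k:
                    C[i, j] += A[i, step] * B[step, j]
    return C
```

Each input element is fetched once and reused across an entire row or column of PEs, which is the reuse property the abstract credits for the accelerator's efficiency.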
Keywords
AI SoC, inference, neural network accelerator
KSP Suggested Keywords
28 nm, Acceleration performance, Development environment, Development time, General-purpose graphics processing unit(GPGPU), Graphic Processing Unit(GPU), High degree, Intellectual property(IP), Network Accelerator, Neural networks, Neural processor
This work may be used under the Korea Open Government License (KOGL) Type 4: attribution + non-commercial + no derivatives.