ETRI Knowledge Sharing Platform


Details

Journal Article
Efficient Spiking Neural Network Training and Inference with Reduced Precision Memory and Computing
Cited 5 times in Scopus · Downloaded 7 times
Authors
Yi Wang, Karim Shahbazi, Hao Zhang, Kwang-Il Oh, Jae-Jin Lee, Seok-Bum Ko
Publication Date
September 2019
Source
IET Computers and Digital Techniques, v.13 no.5, pp.397-404
ISSN
1751-8601
Publisher
IET
DOI
https://dx.doi.org/10.1049/iet-cdt.2019.0115
Research Project
19HB1600, Development of Ultra-Low-Power Intelligent Edge AI Semiconductor Technology Based on Lightweight RISC-V, Jae-Jin Lee
Abstract
In this study, reduced-precision operations are investigated to improve the speed and energy efficiency of spiking neural network (SNN) implementation. Instead of the 32-bit single-precision floating-point format, small floating-point and fixed-point formats are used to represent SNN parameters and to perform SNN operations. The analyses cover the training and inference of a leaky integrate-and-fire (LIF) model-based SNN that is trained to classify the handwritten digits of the MNIST database. The results show that for SNN inference, a floating-point format with a 4-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer and 7 fraction bits, can be used without any accuracy degradation. For training, a floating-point format with a 5-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer and 10 fraction bits, retains full accuracy. The proposed reduced-precision formats can be used in SNN hardware accelerator design, and the choice between floating point and fixed point can be determined by design requirements. A case study of an SNN implementation on a field-programmable gate array (FPGA) device is performed. With the reduced-precision numerical formats, memory footprint, computing speed, and resource utilisation are all improved, and consequently so is the energy efficiency of the SNN implementation.
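The formats reported in the abstract lend themselves to software emulation before committing to hardware. The following is a minimal NumPy sketch of rounding values to the fixed-point (6 integer bits with 7 or 10 fraction bits) and small floating-point (4- or 5-bit exponent, 3-bit mantissa) settings inside a LIF membrane update. All function names, the leak factor, the reset scheme, and the rounding details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize to signed fixed-point with int_bits integer and frac_bits fraction bits."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)              # most negative representable value
    hi = 2.0 ** int_bits - 1.0 / scale   # most positive representable value
    return np.clip(np.round(np.asarray(x, dtype=np.float64) * scale) / scale, lo, hi)

def to_small_float(x, exp_bits, man_bits):
    """Round to a small float with exp_bits exponent and man_bits mantissa bits.
    Saturates the exponent range; subnormals and infinities are not modelled."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))   # x = m * 2**e, |m| in [0.5, 1)
    m = np.round(m * 2.0 ** (man_bits + 1)) / 2.0 ** (man_bits + 1)
    bias = 2 ** (exp_bits - 1) - 1
    return np.ldexp(m, np.clip(e, -bias + 1, bias))

def lif_step(v, i_syn, v_th=1.0, leak=0.9, quant=lambda x: x):
    """One leaky integrate-and-fire update; quant() emulates the storage format."""
    v = quant(leak * v + i_syn)          # leaky integration in reduced precision
    spikes = v >= v_th                   # threshold comparison produces the spikes
    return np.where(spikes, 0.0, v), spikes   # reset membrane potential on spike

# Example: a few inference steps with the 6-integer/7-fraction fixed-point setting.
rng = np.random.default_rng(0)
v = np.zeros(8)
q_inf = lambda x: to_fixed_point(x, 6, 7)
for _ in range(10):
    v, spikes = lif_step(v, q_inf(rng.normal(0.2, 0.1, size=8)), quant=q_inf)
```

Swapping `q_inf` for `lambda x: to_small_float(x, 4, 3)` emulates the floating-point inference setting instead, and widening the fraction to 10 bits corresponds to the training configuration.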
KSP Suggested Keywords
Case studies, Energy efficiency, Field-programmable gate arrays (FPGA), Fixed point, Floating point, Hardware accelerator, Integrate-and-fire model, MNIST database, Neural network training, Single precision, Accelerator design