ETRI Knowledge Sharing Platform

Efficient Spiking Neural Network Training and Inference with Reduced Precision Memory and Computing
Cited 10 times in Scopus
Authors
Yi Wang, Karim Shahbazi, Hao Zhang, Kwang-Il Oh, Jae-Jin Lee, Seok-Bum Ko
Issue Date
2019-09
Citation
IET Computers and Digital Techniques, v.13, no.5, pp.397-404
ISSN
1751-8601
Publisher
IET
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1049/iet-cdt.2019.0115
Abstract
In this study, reduced-precision operations are investigated to improve the speed and energy efficiency of SNN implementation. Instead of the 32-bit single-precision floating-point format, a small floating-point format and a fixed-point format are used to represent SNN parameters and to perform SNN operations. The analyses are performed on the training and inference of a leaky integrate-and-fire model-based SNN that is trained and used to classify handwritten digits in the MNIST database. The results show that for SNN inference, a floating-point format with a 4-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer bits and 7 fraction bits, can be used without any accuracy degradation. For training, a floating-point format with a 5-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer bits and 10 fraction bits, can be used to obtain full accuracy. The proposed reduced-precision formats can be used in SNN hardware accelerator design, and the choice between floating point and fixed point can be determined by design requirements. A case study of an SNN implementation on a field-programmable gate array (FPGA) device is performed. With reduced-precision numerical formats, the memory footprint, computing speed, and resource utilisation are improved; as a result, the energy efficiency of the SNN implementation is also improved.
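As a rough illustration of the formats described in the abstract, the Python sketch below quantises values to a signed fixed-point format (Q6.7, i.e. 6 integer and 7 fraction bits) and to a small floating-point format (4-bit exponent, 3-bit mantissa), and applies the quantisation inside a single leaky integrate-and-fire update. The helper names (quantize_fixed, quantize_small_float, lif_step) and the rounding, saturation, leak factor, threshold, and reset choices are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quantize_fixed(x, int_bits=6, frac_bits=7):
    """Round to the nearest multiple of 2**-frac_bits and saturate to the
    range of a signed fixed-point value with int_bits integer bits.
    Assumption: separate sign bit and round-to-nearest; the paper's
    rounding/saturation rules may differ."""
    x = np.asarray(x, dtype=np.float64)
    step = 2.0 ** -frac_bits
    q = np.round(x / step) * step
    return np.clip(q, -(2.0 ** int_bits), 2.0 ** int_bits - step)

def quantize_small_float(x, exp_bits=4, man_bits=3):
    """Round to the nearest value representable with exp_bits exponent bits
    and man_bits mantissa bits (IEEE-style bias, implicit leading 1).
    Assumption: overflow saturates and subnormals flush to zero."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** bias
    min_normal = 2.0 ** (1 - bias)
    mant, exp = np.frexp(x)                    # |x| = mant * 2**exp, mant in [0.5, 1)
    q = np.ldexp(np.round(np.ldexp(mant, man_bits + 1)), exp - man_bits - 1)
    q = np.clip(q, -max_val, max_val)          # saturate on overflow
    return np.where(np.abs(q) < min_normal, 0.0, q)

def lif_step(v, i_in, quant, v_th=1.0, leak=0.9):
    """One leaky integrate-and-fire update with the membrane potential kept
    in reduced precision. Leak factor, threshold, and reset-to-zero are
    placeholder choices for illustration."""
    v = quant(leak * v + i_in)                 # leak and integrate, then quantise
    spikes = v >= v_th
    return np.where(spikes, 0.0, v), spikes    # reset fired neurons to zero
```

For example, an inference-time membrane update with the Q6.7 fixed-point format from the abstract:

```python
v = np.zeros(4)
i_in = np.array([0.30, 0.05, 1.20, -0.10])
v, spikes = lif_step(v, i_in, quant=quantize_fixed)   # state held in Q6.7
```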
KSP Keywords
Case studies, Design requirements, Energy efficiency, Field-Programmable Gate Array (FPGA), Handwritten digits, Hardware accelerator, Integrate-and-fire model, MNIST database, Single-precision, accelerator design, fixed point