ETRI-Knowledge Sharing Platform

Energy Efficient Spiking Neural Network Processing using Approximate Arithmetic Units and Variable Precision Weights
Cited 11 times in Scopus
Authors
Yi Wang, Hao Zhang, Kwang-Il Oh, Jae-Jin Lee, Seok-Bum Ko
Issue Date
2021-12
Citation
Journal of Parallel and Distributed Computing, v.158, pp.164-175
ISSN
0743-7315
Publisher
Elsevier
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.jpdc.2021.08.003
Abstract
Spiking neural networks (SNNs) have attracted increasing research attention in recent years because the way they process information is well suited to building neuromorphic systems efficiently. However, realizing SNNs in hardware is computationally expensive. To improve their efficiency for hardware implementation, a field-programmable gate array (FPGA) based SNN accelerator architecture is proposed and implemented using approximate arithmetic units. To identify the minimal bit-width required for approximate computation without any performance loss, a variable precision method is used to represent the weights of the SNN. Unlike the conventional reduced precision method, which applies one bit-width to all weights uniformly, the proposed variable precision method allows each weight to be represented with a different bit-width, making it possible to maximize the truncation of each weight. Four SNNs with different network configurations and training datasets are built to compare the proposed accelerator architecture using the variable precision method against the same architecture using the conventional reduced precision method. The experimental results show that more than 40% of the weights require a smaller bit-width when the variable precision method is applied instead of the reduced precision method. With the variable precision method, the proposed architecture uses 28% fewer ALUTs and consumes 29% less power than the same architecture using the reduced precision method.
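
The variable precision idea described in the abstract can be illustrated with a minimal sketch: instead of truncating every weight to one shared bit-width (the conventional reduced precision approach), each weight is truncated to the fewest fractional bits that still reproduce it within a tolerance. The function names, the tolerance value, and the 16-bit cap below are illustrative assumptions, not the authors' implementation or the accelerator's fixed-point arithmetic.

# Illustrative sketch only; names, tolerance, and bit-width cap are assumptions.
def quantize(w, frac_bits):
    # Truncate a weight to a fixed-point value with `frac_bits` fractional bits.
    scale = 1 << frac_bits
    return int(w * scale) / scale

def minimal_bits(w, max_bits=16, tol=1e-3):
    # Smallest fractional bit-width whose truncation error stays within `tol`.
    for bits in range(max_bits + 1):
        if abs(w - quantize(w, bits)) <= tol:
            return bits
    return max_bits

weights = [0.5, 0.3125, 0.123456, -0.75]

# Conventional reduced precision: one bit-width shared by every weight.
uniform_bits = max(minimal_bits(w) for w in weights)

# Variable precision: each weight keeps only the bits it actually needs.
per_weight_bits = [minimal_bits(w) for w in weights]

print("uniform bit-width:", uniform_bits)        # 9 for this example
print("per-weight bit-widths:", per_weight_bits) # [1, 4, 9, 2] for this example

Per the abstract, this kind of per-weight truncation lets more than 40% of the weights use a smaller bit-width than the uniform case, which is what drives the reported ALUT and power savings.
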
KSP Keywords
Accelerator architecture, Approximate computation, Field-Programmable Gate Array (FPGA), Hardware implementation, Performance loss, Power Consumption, Process Information, Variable Precision, computationally expensive, energy-efficient, network configuration