ETRI Knowledge Sharing Platform



Detailed Information

Journal Article: Energy Efficient Spiking Neural Network Processing using Approximate Arithmetic Units and Variable Precision Weights
Authors
Yi Wang, Hao Zhang, 오광일, 이재진, 고석범
Publication Date
2021-12
Source
Journal of Parallel and Distributed Computing, v.158, pp.164-175
ISSN
0743-7315
Publisher
Elsevier
DOI
https://dx.doi.org/10.1016/j.jpdc.2021.08.003
Funded Project
21HS3700, Development of Lightweight RISC-V-Based Ultra-Low-Power Intelligent Edge AI Semiconductor Technology, 구본태
Abstract
Spiking neural networks (SNNs) have attracted growing research attention in recent years because the way they process information is well suited to building neuromorphic systems. However, realizing SNNs in hardware is computationally expensive. To improve their efficiency for hardware implementation, a field-programmable gate array (FPGA) based SNN accelerator architecture is proposed and implemented using approximate arithmetic units. To identify the minimum bit-width required for approximate computation without any performance loss, a variable precision method is used to represent the weights of the SNN. Unlike the conventional reduced precision method, which applies the same bit-width to all weights uniformly, the proposed variable precision method allows different bit-widths for different weights, making it possible to maximize the truncation applied to each weight. Four SNNs with different network configurations and training datasets are established to compare the proposed accelerator architecture using the variable precision method against the same architecture using the conventional reduced precision method. According to the experimental results, more than 40% of the weights require a smaller bit-width when the variable precision method is applied instead of the reduced precision method. With the variable precision method, the proposed architecture uses 28% fewer ALUTs and consumes 29% less power than its reduced precision counterpart.
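The core idea of the variable precision method, choosing a separate bit-width for each weight rather than one uniform width, can be sketched as follows. This is an illustrative assumption, not the paper's actual procedure: the function name `min_bits`, the fixed-point truncation criterion, and the error tolerance `tol` are all hypothetical (the paper selects bit-widths based on network-level performance loss, not per-weight numeric error).

```python
import math

def min_bits(weight, max_bits=16, tol=1e-3):
    """Smallest fractional bit-width whose fixed-point truncation of
    `weight` stays within `tol` of the original value (hypothetical
    per-weight accuracy criterion)."""
    for bits in range(1, max_bits + 1):
        scale = 1 << bits
        truncated = math.trunc(weight * scale) / scale  # drop low-order bits
        if abs(weight - truncated) <= tol:
            return bits
    return max_bits

# Uniform reduced precision must size every weight for the worst case;
# variable precision lets each weight keep only the bits it needs.
weights = [0.5, 0.375, 0.30078125, -0.1234567]
print([min_bits(w) for w in weights])  # → [1, 3, 8, 9]
```

Under this toy criterion, a uniform reduced precision scheme would have to allocate 9 fractional bits to every weight, while the variable precision scheme stores the first three weights in 1, 3, and 8 bits, mirroring the paper's observation that a large fraction of weights need fewer bits than the worst case.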
Keywords
Approximate computing, Field programmable gate array, Hardware accelerator, Spiking neural network
KSP Suggested Keywords
Accelerator architecture, Approximate computation, Approximate computing, Field Programmable Gate Arrays(FPGA), Hardware Implementation, Hardware accelerator, Network configuration, Network processing, Neuromorphic systems, Performance loss, Power Consumption