ETRI Knowledge Sharing Platform


Details

Conference Paper: Fast Prototyping of a Deep Neural Network on an FPGA
Cited 1 time in Scopus
Authors
김원종, 전혜강
Publication Date
October 2020
Source
International SoC Design Conference (ISOCC) 2020, pp.214-215
DOI
https://dx.doi.org/10.1109/ISOCC50952.2020.9333030
Project
20PT1300, Development of Open Converged Memory Solutions and Platforms for Next-Generation Memory, 김원종
Abstract
This paper describes a prototyping methodology for implementing deep neural network (DNN) models in hardware. Starting from a DNN model developed in the C or C++ programming language, we develop a hardware architecture using an SoC virtual platform and verify its functionality on an FPGA board. This demonstrates the viability of using FPGAs to accelerate specific applications written in a high-level language. Using the High-Level Synthesis tools provided by Xilinx [3], it is shown that an FPGA design can run the inference calculations required by the MobileNetV2 [1] deep neural network. With minimal alterations to the C++ code developed for the software implementation of MobileNetV2, HDL code could be synthesized directly from the original C++ code, dramatically reducing the complexity of the project. Consequently, when the design was implemented on an FPGA, a speedup of more than 5x was achieved compared to a similar processor (ARM7).
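The flow described in the abstract relies on writing C++ that a High-Level Synthesis tool can turn directly into HDL: fixed array sizes, simple nested loops, and tool pragmas. The following is a hypothetical sketch in that spirit; the layer shape, names, and pragma placement are illustrative assumptions, not taken from the paper's MobileNetV2 code. The `#pragma HLS` directives are ignored by an ordinary C++ compiler but guide a tool such as Xilinx Vivado HLS to pipeline the loop in hardware.

```cpp
#include <cassert>

constexpr int N = 8; // feature-map width/height (illustrative)
constexpr int K = 3; // kernel size

// 3x3 depthwise convolution over a single NxN channel, valid padding.
// Plain C++ with static sizes, so the same source can run in software
// for verification and be synthesized to HDL by an HLS tool.
void dwconv3x3(const float in[N][N], const float w[K][K],
               float out[N - K + 1][N - K + 1]) {
    for (int r = 0; r <= N - K; ++r) {
        for (int c = 0; c <= N - K; ++c) {
#pragma HLS PIPELINE II = 1
            float acc = 0.0f; // multiply-accumulate over the 3x3 window
            for (int i = 0; i < K; ++i)
                for (int j = 0; j < K; ++j)
                    acc += in[r + i][c + j] * w[i][j];
            out[r][c] = acc;
        }
    }
}
```

Because the kernel compiles as ordinary C++, the software and hardware implementations share one source file, which is what keeps the alterations to the original model code minimal.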
Keywords
FPGA accelerator, High-Level synthesis, ImageNet, imperas, MobileNetV2, Xilinx Vivado
KSP Suggested Keywords
C++ programming, Deep neural network(DNN), FPGA Accelerator, FPGA Board, FPGA design, Fast prototyping, Hardware Architecture, High-Level synthesis, High-level language, Specific applications, Virtual platform