ETRI Knowledge Sharing Platform

M3FPU: Multiformat Matrix Multiplication FPU Architectures for Neural Network Computations
Cited 3 times in Scopus
Authors
Won Jeon, Yong Cheol Peter Cho, HyunMi Kim, Hyeji Kim, Jaehoon Chung, Juyeob Kim, Miyoung Lee, Chun-Gi Lyuh, Jinho Han, Youngsu Kwon
Issue Date
2022-06
Citation
International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022, pp. 150-153
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/AICAS54282.2022.9869984
Abstract
Parallel computing performance on floating-point numbers is one of the most important factors in modern computer systems. The hardware components of floating-point units have the potential to improve parallel performance and resource utilization; however, existing vector-type multiformat parallel floating-point units cannot take advantage of them. We propose M3FPU, a new matrix-type multiformat floating-point unit that applies an outer-product matrix multiplication method to the multiplier tree of the floating-point unit, increasing parallelism and resource utilization quadratically. M3FPU exploits the part of the multiplier tree in an existing floating-point unit that is otherwise filled with zeros and left unused. Implemented in a 12 nm silicon process, the proposed M3FPU achieves a 44.17% smaller area than the state-of-the-art multiformat floating-point unit architecture when supporting the same number of parallel operations on 8-bit floating-point numbers.
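
The outer-product formulation the abstract refers to builds the result matrix as a sum of rank-1 updates rather than row-by-column dot products, so every step yields a full tile of mutually independent partial products. The sketch below is a minimal NumPy illustration of that formulation only; it is not the M3FPU hardware datapath, and the function name is hypothetical.

```python
import numpy as np

def outer_product_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Compute C = A @ B as a sum of rank-1 (outer-product) updates.

    Step i multiplies an entire column of A by an entire row of B, so all
    partial products within a step are independent of one another -- the
    kind of parallelism a matrix-type multiplier array can exploit.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(k):
        C += np.outer(A[:, i], B[i, :])  # rank-1 (outer-product) update
    return C

# Sanity check against NumPy's built-in matrix multiply.
A = np.random.rand(4, 3).astype(np.float32)
B = np.random.rand(3, 5).astype(np.float32)
assert np.allclose(outer_product_matmul(A, B), A @ B, atol=1e-5)
```

In contrast to the inner-product view, where each output element needs a reduction across the shared dimension, each rank-1 update here maps naturally onto an m-by-n array of multipliers working in the same cycle, which is the parallelism and utilization argument made in the abstract.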
KSP Keywords
Computer systems, Floating point unit, Floating-point numbers, Outer product, Parallel computing, Parallel operations, Parallel performance, Resource utilization, Silicon process, computing performance, matrix multiplication