ETRI Knowledge Sharing Platform

ABSX: The Chiplet Hyperscale AI Processing Unit for Energy-Efficient High-Performance AI Processing
Cited 1 time in Scopus
Authors
Youngsu Kwon
Issue Date
2023-10
Citation
International SoC Design Conference (ISOCC) 2023, pp.1-2
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ISOCC59558.2023.10396520
Abstract
Recent advances in Large Language Models necessitate high-performance AI processors with architectures specialized for hyperscale neural networks. The large amount of energy consumed by an array of AI processors not only degrades effective performance but also exacerbates energy shortages. Energy-efficient high-performance AI (E2HPA) processing is therefore becoming a critical design factor for AI neural processors, as it achieves high performance through highly efficient energy consumption. Chiplet architecture is a viable E2HPA solution because it optimizes the memory-access energy required for processing the large matrices in transformers. We present the design of the HPU (Hyperscale Processing Unit) on a 2.5D chiplet architecture integrating dual Neural Processor (NP) chiplets and 8 HBM3 chiplets. The design of the HPU focuses primarily on energy-efficient performance. Each NP chiplet is composed of 128 tensor cores, where each core includes a multi-thread tensor cache exploiting extensive power gating. The 2.5D RDL interposer implements low-capacitance, low-energy channels between the NP and HBM3 chiplets with 1024 high-bandwidth interconnections running at 6.4 Gbps. The HPU design also takes chiplet-interposer bonding reliability, thermal stability, and related factors into account for energy optimization.
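For context, a minimal sketch of the arithmetic implied by the interposer figures quoted in the abstract, assuming the 1024 interconnections are individual data lanes each running at the quoted 6.4 Gbps (an interpretation not confirmed by the abstract):

```python
# Back-of-the-envelope aggregate bandwidth from the numbers in the abstract.
# Assumption (not stated in the paper): the 1024 interconnections are data
# lanes, each signalling at 6.4 Gb/s, so the aggregate is simply lanes * rate.

LANES = 1024        # interposer interconnections between NP and HBM3 chiplets
RATE_GBPS = 6.4     # per-lane signalling rate, Gb/s

aggregate_gbps = LANES * RATE_GBPS      # total throughput in Gb/s
aggregate_gbytes = aggregate_gbps / 8   # convert to GB/s

print(f"Aggregate interposer bandwidth: {aggregate_gbps:.1f} Gb/s "
      f"(~{aggregate_gbytes:.1f} GB/s)")
# -> 6553.6 Gb/s (~819.2 GB/s), which corresponds to the peak bandwidth of a
#    single 1024-bit-wide HBM3 stack at 6.4 Gb/s per pin.
```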
KSP Keywords
Critical design, Design Aspects, Design factors, Efficient energy consumption, Energy Shortage, Energy consumed, High performance, Highly efficient, Language Model, Low energy, Memory Access