ETRI Knowledge Sharing Platform

Conference Paper
A 6.4Gb/s/pin HBM3 Digital PHY with Low-Power, Area Efficient Techniques for Chiplet-Based AI processors in 12-nm CMOS
Cited 0 times in Scopus
Authors
Jaewoong Choi, Yi-Gyeong Kim, Juyeob Kim, Jaehoon Chung, Young-Deuk Jeon, Min-Hyung Cho, Sujin Park, Jinho Han
Issue Date
2024-11
Citation
Asian Solid-State Circuits Conference (A-SSCC) 2024, pp.1-3
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/A-SSCC60305.2024.10848683
Abstract
Since the emergence of artificial neural networks, we have been experiencing a wave of transformation centered around generative AI models such as ChatGPT, Gemini, and Copilot. In response to these changes, there is a growing demand for low-power, high-speed memory to facilitate the real-time processing of large-scale data such as images, videos, and texts. In industry, high-performance AI chips utilizing high-bandwidth memory (HBM) are being developed [1], [2]. In particular, next-generation AI processors tend to adopt a chiplet-based architecture consisting of HBM and NPUs [3]-[5]. With improvements in HBM performance, the operating frequency of HBM3 has increased approximately threefold compared to the previous HBM2E, rising from 1.2 GHz to 3.2 GHz. As a result, it has become crucial to implement a digital physical layer (digital PHY) capable of processing data at these high speeds.
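As a rough back-of-envelope illustration (not taken from the paper), the sketch below shows how the 6.4 Gb/s/pin figure in the title follows from the 3.2 GHz HBM3 interface clock under double-data-rate signaling, and what per-stack bandwidth that implies assuming the standard 1024-bit HBM interface width; the variable names are illustrative.

# Back-of-envelope sketch (not from the paper): derive the per-pin data rate
# and per-stack bandwidth from the 3.2 GHz HBM3 interface clock, assuming
# double-data-rate signaling and a 1024-bit-wide HBM interface per stack.

hbm3_clock_ghz = 3.2           # HBM3 interface clock (vs. ~1.2 GHz for HBM2E)
transfers_per_clock = 2        # double data rate: two transfers per clock cycle
interface_width_bits = 1024    # assumed standard HBM interface width per stack

per_pin_gbps = hbm3_clock_ghz * transfers_per_clock          # 6.4 Gb/s/pin
stack_bandwidth_gbps = per_pin_gbps * interface_width_bits   # 6553.6 Gb/s
stack_bandwidth_gbytes = stack_bandwidth_gbps / 8            # ~819.2 GB/s

print(f"Per-pin data rate:   {per_pin_gbps:.1f} Gb/s")
print(f"Per-stack bandwidth: {stack_bandwidth_gbytes:.1f} GB/s")

Under these assumptions the digital PHY must move data at 6.4 Gb/s on every pin, which is why the abstract emphasizes high-speed processing in the PHY as HBM3 clocks rise from 1.2 GHz to 3.2 GHz.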
KSP Keywords
Area-Efficient, Artificial Neural Network, High performance, Large-scale Data, Next-generation, Operating frequency, Physical Layer, Real-Time processing, high bandwidth memory(HBM), low power, neural network(NN)