ETRI Knowledge Sharing Platform
Conference Paper LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs
Cited 1 time in Scopus
Authors
Taeho Kim, Yanming Wang, Vatshank Chaturvedi, Lokesh Gupta, Seyeon Kim, Yongin Kwon, Sangtae Ha
Issue Date
2024-08
Citation
International Joint Conference on Artificial Intelligence (IJCAI) 2024, pp.6324-6332
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.24963/ijcai.2024/699
Abstract
Fine-tuning pre-trained large language models (LLMs) on limited hardware is challenging due to GPU memory constraints. Various distributed fine-tuning methods have been proposed to alleviate GPU memory constraints, but it remains unclear which method achieves the fastest fine-tuning while avoiding GPU out-of-memory issues in a given environment. To address this challenge, we introduce LLMem, a solution that estimates GPU memory consumption when applying distributed fine-tuning methods across multiple GPUs and identifies the optimal method. We estimate GPU memory usage prior to fine-tuning, leveraging the fundamental structure of transformer-based decoder models and the memory usage distribution of each method. Experimental results show that LLMem accurately estimates peak GPU memory usage on a single GPU, with error rates of up to 1.6%, and achieves an average error rate of 3.0% when applying distributed fine-tuning methods to LLMs with more than a billion parameters on multi-GPU setups.
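To illustrate the kind of pre-fine-tuning estimate the abstract describes, the sketch below gives a back-of-the-envelope calculation of peak GPU memory for full fine-tuning of a decoder-only transformer with Adam under fp16 mixed precision. This is not LLMem's actual algorithm; all function names, constants, and the activation heuristic are assumptions made for illustration only.

```python
# Minimal, illustrative sketch (NOT LLMem's method): rough peak-memory estimate
# for full fine-tuning with Adam in fp16 mixed precision on a single GPU.
# Every constant below is an assumption chosen for illustration.

def estimate_finetune_memory_gb(
    num_params: float,        # total trainable parameters
    batch_size: int,
    seq_len: int,
    hidden_size: int,
    num_layers: int,
    bytes_per_param: int = 2, # fp16 weights
) -> float:
    """Return a back-of-the-envelope peak GPU memory estimate in GiB."""
    weights = num_params * bytes_per_param       # model weights (fp16)
    gradients = num_params * bytes_per_param     # gradients (fp16)
    # Adam with mixed precision typically keeps fp32 master weights plus
    # two fp32 moment buffers: roughly 12 bytes per parameter.
    optimizer_states = num_params * 4 * 3
    # Very rough activation heuristic: ~16 fp16 values per token, per hidden
    # unit, per layer (ignores attention maps, recomputation, fragmentation).
    activations = batch_size * seq_len * hidden_size * num_layers * 16 * 2
    total_bytes = weights + gradients + optimizer_states + activations
    return total_bytes / 1024**3


# Example: a ~1.3B-parameter model, batch size 4, sequence length 1024.
print(f"{estimate_finetune_memory_gb(1.3e9, 4, 1024, 2048, 24):.1f} GiB")
```

A crude formula like this ignores the per-method memory distribution (e.g. how data or model parallelism shards weights, gradients, and optimizer states across GPUs), which is precisely the part LLMem models to pick the optimal distributed fine-tuning method before training starts.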
KSP Keywords
Average error rate, Fine-tuning, GPU memory usage, Language Model, Multi-GPU, Multiple GPUs, Tuning method, memory consumption, optimal method, transformer-based