ETRI-Knowledge Sharing Platform


Journal Article
Optimal Resource Allocation Considering Non-Uniform Spatial Traffic Distribution in Ultra-Dense Networks: A Multi-Agent Reinforcement Learning Approach
Cited 12 times in Scopus · Downloaded 132 times
Authors
Eunjin Kim, Hyun-Ho Choi, Hyungsub Kim, Jeehyeon Na, Howon Lee
Issue Date
2022-02
Citation
IEEE Access, v.10, pp.20455-20464
ISSN
2169-3536
Publisher
IEEE
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1109/ACCESS.2022.3152162
Abstract
Recently, demand for small cell base stations (SBSs) has grown rapidly to accommodate the explosive increase in mobile data traffic. In ultra-dense small cell networks (UDSCNs), the spatial and temporal traffic distributions are highly non-uniform, so efficient management of the energy consumption of SBSs is crucial. We therefore propose a multi-agent distributed Q-learning algorithm that maximizes energy efficiency (EE) while minimizing the number of outage users. Through intensive simulations, we demonstrate that the proposed algorithm outperforms conventional algorithms in terms of both EE and the number of outage users. Moreover, although the proposed reinforcement learning algorithm has significantly lower computational complexity than the centralized approach, it is shown to converge to the optimal solution.
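The multi-agent distributed Q-learning idea in the abstract can be sketched in miniature: each SBS acts as an independent agent that learns whether to stay active or sleep based on its local traffic load, trading energy savings against an outage penalty. The state space, reward values, and traffic model below are illustrative assumptions for a toy example, not the paper's actual formulation.

```python
import random

ACTIONS = ("sleep", "active")
LOADS = ("low", "high")  # assumed: discretized local traffic load per SBS

def reward(load, action):
    """Assumed reward: sleeping under low load saves energy (high EE),
    but sleeping under high load causes user outages (large penalty)."""
    if action == "sleep":
        return 1.0 if load == "low" else -2.0  # outage penalty when loaded
    return 0.5 if load == "high" else 0.1      # staying active costs energy

class SBSAgent:
    """One SBS learning independently (contextual-bandit-style Q-update)."""
    def __init__(self, alpha=0.1, epsilon=0.2):
        self.q = {(s, a): 0.0 for s in LOADS for a in ACTIONS}
        self.alpha, self.epsilon = alpha, epsilon

    def act(self, state):
        if random.random() < self.epsilon:           # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r):
        # Simple Q-update toward the observed reward (no next-state term,
        # since this toy model is stateless between decisions).
        self.q[(state, action)] += self.alpha * (r - self.q[(state, action)])

random.seed(0)
# Four SBSs with a spatially non-uniform traffic distribution (assumed).
loads = ["high", "low", "low", "low"]
agents = [SBSAgent() for _ in loads]

for _ in range(2000):
    for agent, load in zip(agents, loads):
        a = agent.act(load)
        agent.update(load, a, reward(load, a))

# Learned greedy policy: the loaded SBS stays active, the rest sleep.
policy = [max(ACTIONS, key=lambda a: ag.q[(load, a)])
          for ag, load in zip(agents, loads)]
print(policy)  # → ['active', 'sleep', 'sleep', 'sleep']
```

Each agent updates only its own Q-table from local observations, which is what keeps the computational complexity far below a centralized search over all SBS on/off combinations.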
KSP Keywords
Centralized approach, Distributed Q-learning, Energy Efficiency, Learning approach, Lower computational complexity, Mobile data traffic, Non-uniform, Optimal Solution, Optimal resource allocation, Q-learning Algorithm, Reinforcement Learning(RL)
This work is distributed under the terms of the Creative Commons License (CC BY).