ETRI Knowledge Sharing Platform

Deep Reinforcement Learning for UAV Trajectory Design Considering Mobile Ground Users
Cited 14 times in Scopus · Downloaded 105 times
Authors
Wonseok Lee, Young Jeon, Taejoon Kim, Young-Il Kim
Issue Date
2021-12
Citation
Sensors, v.21, no.24, pp.1-13
ISSN
1424-8220
Publisher
MDPI
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.3390/s21248239
Project Code
21NR1100, Development of UAV detection technology based on the combination of noise and image signal, Young-Il Kim
Abstract
A network composed of unmanned aerial vehicles (UAVs) serving as base stations (a UAV-BS network) is emerging as a promising component of next-generation communication systems. In a UAV-BS network, optimal positioning of a UAV-BS is essential for establishing line-of-sight (LoS) links to ground users. A novel deep Q-network (DQN)-based learning model enabling the optimal deployment of a UAV-BS is proposed. Moreover, the proposed model produces the optimal UAV-BS trajectory while ground users move, without re-learning and without acquiring the ground users' path information. Specifically, the proposed model optimizes the trajectory of a UAV-BS by maximizing the mean opinion score (MOS) for ground users who move along various paths. Furthermore, the proposed model is highly practical because an average channel power gain, rather than the locations of individual mobile users, is used as the input parameter. The accuracy of the proposed model is validated by comparing its results with those of a mathematical optimization solver.
KSP Keywords
Communication system, Deep Q-Network, Deep reinforcement learning, Input parameters, Learning model, Line-Of-Sight(LOS), Mathematical Optimization, Next-generation, Optimal deployment, Power gain, Proposed model
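The core idea in the abstract — an agent learns where to move a UAV-BS so that a user-experience reward stays high as ground users move — can be illustrated with a toy sketch. This is not the paper's model: it uses tabular Q-learning (a simpler relative of the DQN described above), a tiny grid world, a single fixed user, and negative UAV-user distance as a crude stand-in for the MOS/channel-gain reward. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy illustration (NOT the paper's model): tabular Q-learning for UAV-BS
# placement on a small grid. Reward is a crude proxy for mean opinion
# score: higher (less negative) when the UAV hovers near the ground user.
GRID = 5                                              # 5x5 candidate positions
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up/down/left/right/hover

def reward(uav, user):
    # Stand-in for MOS: decays with UAV-user Manhattan distance.
    return -float(abs(uav[0] - user[0]) + abs(uav[1] - user[1]))

def step(uav, action):
    # Move the UAV, clipped to the grid boundary.
    r, c = uav[0] + action[0], uav[1] + action[1]
    return (min(max(r, 0), GRID - 1), min(max(c, 0), GRID - 1))

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    # Q is indexed by (uav_row, uav_col, user_row, user_col, action).
    Q = np.zeros((GRID, GRID, GRID, GRID, len(ACTIONS)))
    user = (4, 4)                       # fixed user in this toy example
    for _ in range(episodes):
        uav = (0, 0)
        for _ in range(20):             # short episode horizon
            s = uav + user
            # Epsilon-greedy exploration.
            a = (int(rng.integers(len(ACTIONS))) if rng.random() < eps
                 else int(np.argmax(Q[s])))
            nxt = step(uav, ACTIONS[a])
            r = reward(nxt, user)
            # Standard Q-learning update.
            Q[s + (a,)] += alpha * (r + gamma * np.max(Q[nxt + user]) - Q[s + (a,)])
            uav = nxt
    return Q

def greedy_path(Q, start=(0, 0), user=(4, 4), steps=10):
    # Roll out the learned greedy policy from a start position.
    uav, path = start, [start]
    for _ in range(steps):
        a = int(np.argmax(Q[uav + user]))
        uav = step(uav, ACTIONS[a])
        path.append(uav)
    return path

Q = train()
path = greedy_path(Q)
print(path[-1])  # learned hover position near the user
```

The paper's contribution goes well beyond this sketch: a deep Q-network generalizes over continuous state descriptions, the input is an average channel power gain rather than user coordinates, and the learned policy tracks users moving along multiple paths without re-training.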
This work is distributed under the terms of the Creative Commons License (CC BY).