ETRI Knowledge Sharing Platform



Details

Journal Article: Deep Reinforcement Learning for UAV Trajectory Design Considering Mobile Ground Users
Cited 10 times in Scopus · Downloaded 91 times
Authors
이원석, 전영, 김태준, 김영일
Publication Date
December 2021
Source
Sensors, v.21 no.24, pp.1-13
ISSN
1424-8220
Publisher
MDPI
DOI
https://dx.doi.org/10.3390/s21248239
Project
21NR1100, Development of UAV Detection Technology Based on Combined Noise and Video Signals, 김영일
Abstract
A network composed of unmanned aerial vehicles (UAVs) serving as base stations (a UAV-BS network) is emerging as a promising component of next-generation communication systems. In a UAV-BS network, optimal positioning of each UAV-BS is essential for establishing line-of-sight (LoS) links to ground users. A novel deep Q-network (DQN)-based learning model enabling the optimal deployment of a UAV-BS is proposed. Moreover, the proposed model produces the optimal UAV-BS trajectory as ground users move, without retraining and without acquiring the users' path information. Specifically, the model optimizes the trajectory of a UAV-BS by maximizing the mean opinion score (MOS) for ground users who move along various paths. Furthermore, the model is highly practical because an average channel power gain, rather than the locations of individual mobile users, is used as the input parameter. The accuracy of the proposed model is validated by comparing its results with those of a mathematical optimization solver.
KSP Suggested Keywords
Communication system, Deep Q-Network, Deep reinforcement learning, Input parameters, Learning model, Line-Of-Sight(LOS), Mathematical Optimization, Next-generation, Optimal deployment, Power gain, Proposed model
This work is available under the Creative Commons Attribution (CC BY) license.