ETRI Knowledge Sharing Platform

Efficient Deep Reinforcement Learning under Task Variations via Knowledge Transfer for Drone Control
Cited 0 times in Scopus · Downloaded 77 times
Authors
Sooyoung Jang, Hyung-Il Kim
Issue Date
2024-06
Citation
ICT EXPRESS, v.10, no.3, pp.576-582
ISSN
2405-9595
Publisher
ELSEVIER
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.icte.2024.04.002
Abstract
Despite the growing interest in using deep reinforcement learning (DRL) for drone control, several challenges remain to be addressed, including poor generalization across task variations and the significant computational power and time required to train agents. When the agent's input changes owing to variations in the drone's sensors or mission, substantial retraining is required to handle the new input data pattern and to adapt the neural network architecture to the new input. These difficulties severely limit the applicability of DRL in dynamic real-world environments. In this paper, we propose an efficient DRL method that leverages the knowledge of a source agent to accelerate the training of a target agent under task variations. The proposed method consists of three phases: collecting training data for the target agent using the source agent, supervised pre-training of the target agent, and DRL-based fine-tuning. Experimental validation demonstrated a remarkable reduction in training time (up to 94.29%), suggesting a promising avenue for the efficient application of DRL to drone control.
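
The abstract outlines a three-phase pipeline: data collection with the source agent, supervised pre-training of the target agent, and DRL-based fine-tuning. The sketch below shows one way those phases could fit together, assuming a PyTorch setup; the observation dimensions, network sizes, environment interface, and helper names (collect_transfer_data, pretrain) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the three-phase knowledge-transfer pipeline described in the
# abstract. Assumptions: PyTorch, an already-trained source policy, and a
# hypothetical environment that exposes both the source-format and target-format
# observations. None of the dimensions or hyperparameters come from the paper.
import torch
import torch.nn as nn

SRC_OBS_DIM, TGT_OBS_DIM, ACT_DIM = 16, 24, 4  # hypothetical observation/action sizes


def make_policy(obs_dim: int) -> nn.Module:
    """Small MLP policy; stands in for whatever architecture the agents actually use."""
    return nn.Sequential(
        nn.Linear(obs_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, ACT_DIM),
    )


source_policy = make_policy(SRC_OBS_DIM)  # assumed to be pre-trained with DRL
target_policy = make_policy(TGT_OBS_DIM)  # new input layout after the task variation


# Phase 1: roll out the source agent and log (target-format observation, source action) pairs.
def collect_transfer_data(env, steps: int = 10_000):
    data = []
    src_obs, tgt_obs = env.reset()  # hypothetical: env returns both observation formats
    for _ in range(steps):
        with torch.no_grad():
            action = source_policy(torch.as_tensor(src_obs, dtype=torch.float32))
        data.append((tgt_obs, action))
        src_obs, tgt_obs, _, done = env.step(action.numpy())
        if done:
            src_obs, tgt_obs = env.reset()
    return data


# Phase 2: supervised pre-training of the target agent to imitate the source actions.
def pretrain(data, epochs: int = 10, lr: float = 1e-3):
    opt = torch.optim.Adam(target_policy.parameters(), lr=lr)
    for _ in range(epochs):
        for tgt_obs, src_action in data:  # mini-batching omitted for brevity
            pred = target_policy(torch.as_tensor(tgt_obs, dtype=torch.float32))
            loss = nn.functional.mse_loss(pred, src_action)
            opt.zero_grad()
            loss.backward()
            opt.step()


# Phase 3: DRL-based fine-tuning of the pre-trained target policy on the target task
# (e.g., with the same DRL algorithm used for the source agent; not shown here).
```

In this reading, the supervised pre-training phase gives the target network a reasonable starting point despite its different input layout, so the subsequent DRL fine-tuning starts far from random initialization, which is where the reported training-time reduction would come from.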
KSP Keywords
Computational Power, Deep reinforcement learning, Fine-tuning, Knowledge transfer, Pre-Training, Real-world, Reinforcement learning(RL), Three phase, Training time, data patterns, experimental validation
This work is distributed under the terms of the Creative Commons License (CCL) (CC BY-NC-ND).