ETRI Knowledge Sharing Platform

Efficient Deep Reinforcement Learning Framework in Edge Computing Environments
Authors: Wan-Seon Lim, Yeon-Hee Lee
Issue Date: 2024-10
Citation: International Conference on Information and Communication Technology Convergence (ICTC) 2024, pp.689-694
Publisher: IEEE
Language: English
Type: Conference Paper
DOI: https://dx.doi.org/10.1109/ICTC62082.2024.10827778
Abstract
In this paper, we propose a new framework for deep reinforcement learning in edge computing environments. We first describe how distributed reinforcement learning techniques from previous research can be applied to edge computing environments. We then propose a framework that enhances the learning efficiency of deep reinforcement learning algorithms by taking into account network delay and the computational capabilities of the edge and cloud servers. In the proposed framework, the roles of the actors and the learner, which run on the edge and cloud servers respectively, are adjusted dynamically according to the available resources and the network latency. The performance of the proposed framework is validated through experiments on a testbed.
KSP Keywords: Cloud server, Deep reinforcement learning, Edge Computing, Learning framework, Network Delay, Network latency, Reinforcement learning(RL), computational capabilities, learning algorithm, learning efficiency
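The abstract describes an actor-learner split between edge and cloud servers whose roles are adjusted dynamically based on available resources and network latency. The Python sketch below is a minimal illustration of such a placement decision under that general actor-learner pattern; it is not code from the paper, and the class, function, field names, and thresholds are all hypothetical assumptions.

# Minimal sketch (not from the paper): deciding, per edge node, whether it
# should only run experience-collecting actors or also host a local learner,
# based on measured latency to the cloud learner and spare local capacity.
# All names and thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class NodeStatus:
    """Resource and latency snapshot reported by an edge node."""
    name: str
    gpu_utilization: float   # 0.0 - 1.0
    free_memory_gb: float
    rtt_to_cloud_ms: float   # measured round-trip time to the cloud learner


def assign_roles(edge_nodes: list[NodeStatus],
                 latency_budget_ms: float = 50.0,
                 min_free_memory_gb: float = 4.0) -> dict[str, str]:
    """Return a role per edge node: 'actor_only' or 'actor+local_learner'."""
    roles = {}
    for node in edge_nodes:
        if (node.rtt_to_cloud_ms > latency_budget_ms
                and node.free_memory_gb >= min_free_memory_gb
                and node.gpu_utilization < 0.8):
            # The cloud link is slow but the edge node has spare capacity:
            # run learning locally and sync weights with the cloud periodically.
            roles[node.name] = "actor+local_learner"
        else:
            # Default split: the edge node only collects experience and
            # streams trajectories to the cloud learner.
            roles[node.name] = "actor_only"
    return roles


if __name__ == "__main__":
    status = [
        NodeStatus("edge-1", gpu_utilization=0.3, free_memory_gb=8.0, rtt_to_cloud_ms=120.0),
        NodeStatus("edge-2", gpu_utilization=0.9, free_memory_gb=2.0, rtt_to_cloud_ms=15.0),
    ]
    print(assign_roles(status))
    # e.g. {'edge-1': 'actor+local_learner', 'edge-2': 'actor_only'}

In this toy version the decision is a simple threshold check; the paper's framework presumably makes a more principled trade-off, but the sketch shows where the two inputs named in the abstract (network delay and computational capability) would enter such a decision.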