ETRI Knowledge Sharing Platform



Category: SCI


Journal Article: QoS-Aware Workload Distribution in Hierarchical Edge Clouds: A Reinforcement Learning Approach
Cited 13 times in Scopus · Downloaded 108 times
조충래, 신승재, 전홍석, 윤승현
IEEE Access, vol. 8, pp. 193297-193313
Project: 20HH1600, R&D of Core Technologies for Hyper-Connected Intelligent Infrastructure, 김선미
Edge computing has recently been attracting attention as a new computing paradigm expected to achieve low-delay, high-throughput task offloading for large-scale Internet-of-Things (IoT) applications. In edge computing, workload distribution is one of the most critical issues, as it largely determines the delay and throughput performance of edge clouds, especially in distributed Function-as-a-Service (FaaS) over networked edge nodes. In this paper, we propose the Resource Allocation Control Engine with Reinforcement learning (RACER), which provides an efficient workload distribution strategy that reduces task response slowdown while meeting a per-task response-time Quality-of-Service (QoS) requirement. First, we present a novel problem formulation whose per-task QoS constraint is derived from the well-known token bucket mechanism. Second, we apply a problem relaxation that reduces the overall computational complexity at the cost of a small loss of optimality. Lastly, we take a deep reinforcement learning approach to the workload distribution problem to cope with the uncertainty and dynamics of the underlying environment. Evaluation results show that RACER achieves significant improvements in per-task QoS violation ratio, average slowdown, and control efficiency over AREA, a state-of-the-art workload distribution method.
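The abstract grounds the per-task QoS constraint in the standard token bucket mechanism. As background, here is a minimal, generic sketch of token-bucket admission — an illustration of the general mechanism only, not the paper's formulation; the class name, parameters, and the choice of an explicit clock argument are all illustrative assumptions:

```python
class TokenBucket:
    """Generic token bucket sketch (illustrative, not RACER's model).

    Tokens accrue at `rate` per second up to `capacity`; a task conforms
    to the QoS envelope if a token is available on arrival.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # token refill rate (tokens/second)
        self.capacity = capacity  # burst size (maximum stored tokens)
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the previous refill

    def try_consume(self, now: float, amount: float = 1.0) -> bool:
        # Refill tokens accrued since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True   # task conforms to the rate envelope
        return False      # task exceeds the envelope (non-conforming)
```

Passing `now` explicitly keeps the sketch deterministic; a real deployment would typically read a monotonic clock instead. For example, with `rate=2.0` and `capacity=4.0`, four tasks arriving at `t=0` are admitted, a fifth is rejected, and after one second two more tokens have accrued.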
KSP Suggested Keywords
Control efficiency, Critical issues, Deep reinforcement learning, Distribution problem, Distribution strategy, Edge cloud, Function as a service, High throughput(HTP), Internet of thing(IoT), Large scale Internet, Learning approach
This work is available under the Creative Commons Attribution (CC BY) license.