ETRI Knowledge Sharing Platform

Deep Reinforcement Learning for QoS-Aware Package Caching in Serverless Edge Computing
Cited 8 times in Scopus
Authors
Hongseok Jeon, Seungjae Shin, Chunglae Cho, Seunghyun Yoon
Issue Date
2021-12
Citation
Global Communications Conference (GLOBECOM) 2021, pp.1-6
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/GLOBECOM46510.2021.9685449
Abstract
In serverless-enabled edge computing, container startup delay is one of the most critical issues because it violates quality-of-service (QoS) requirements such as ultra-low response latency. Caching the critical packages of a container can mitigate the startup delay associated with container instantiation. However, caches consume memory, a resource that is highly limited at edge nodes, so the package cache must be carefully managed. This paper proposes a deep reinforcement learning (DRL)-based caching algorithm that efficiently caches critical and popular packages under per-function response-time QoS in hierarchical edge clouds. By training the caching agents of on-premise edge nodes with multi-agent reinforcement learning (MARL) under a global reward that accounts for both cache hits and QoS violations, the agents are driven to cooperate with one another. Simulation results demonstrate that the proposed DRL-based caching policy improves QoS awareness more effectively than baseline policies: compared with LRU and LFU, the QoS violation rate fell by 18 and 27 percent, respectively.
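To make the scheme in the abstract concrete, the following Python is a minimal sketch, not the paper's implementation: the names PackageCache, lru_victim, and global_reward, the linear reward form, and the weights alpha/beta are all assumptions. It shows where an eviction policy plugs into a per-node package cache and how a single shared reward could combine cache hits and QoS violations.

```python
from collections import OrderedDict

class PackageCache:
    """Fixed-capacity package cache for a single edge node (hypothetical sketch)."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0.0
        self.packages = OrderedDict()  # package_id -> size in MB

    def access(self, package_id):
        """Return True on a cache hit and mark the package most recently used."""
        if package_id in self.packages:
            self.packages.move_to_end(package_id)
            return True
        return False

    def insert(self, package_id, size_mb, evict_policy):
        # Evict until the new package fits. Choosing the victim is the
        # decision a DRL agent's action would make in place of a heuristic.
        while self.used_mb + size_mb > self.capacity_mb and self.packages:
            victim = evict_policy(self.packages)
            self.used_mb -= self.packages.pop(victim)
        if self.used_mb + size_mb <= self.capacity_mb:
            self.packages[package_id] = size_mb
            self.used_mb += size_mb

def lru_victim(packages):
    """LRU baseline: the front of the OrderedDict is least recently used."""
    return next(iter(packages))

def global_reward(cache_hits, qos_violations, alpha=1.0, beta=2.0):
    """Shared reward for all caching agents in one decision epoch: reward
    cache hits, penalize QoS violations. The linear form and the weights
    alpha/beta are assumptions, not the paper's exact formula."""
    return alpha * cache_hits - beta * qos_violations
```

Under this interface, the LFU baseline would swap in a victim function that tracks access counts, while a learned policy would replace evict_policy with the agent's action; because every per-node agent is trained against the same global_reward signal rather than a local one, agents are incentivized to cooperate rather than maximize only their own hit rate.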
KSP Keywords
Caching Policy, Critical issues, Deep reinforcement learning, Edge cloud, QoS awareness, Reinforcement Learning (RL), cache hit, caching algorithm, edge computing, edge nodes, memory resource