ETRI Knowledge Sharing Platform

Details

Conference Paper: TTL-Based Cache Utility Maximization Using Deep Reinforcement Learning
Cited 2 times in Scopus
Authors
조충래, 신승재, 전홍석, 윤승현
Publication Date
December 2021
Source
Global Communications Conference (GLOBECOM) 2021, pp.1-6
DOI
https://dx.doi.org/10.1109/GLOBECOM46510.2021.9685845
Project
21HH4700, Research and Development of Core Technologies for Hyper-Connected Intelligent Infrastructure, 김선미
Abstract
Utility-driven caching opened up a new design opportunity for caching algorithms by modeling admission and eviction control as a utility maximization process with built-in support for service differentiation. Nevertheless, there is still a way to go in terms of adaptability to changing environments. Slow convergence to an optimal state may degrade the utility users actually experience, and this gets even worse in non-stationary scenarios where cache control must adapt to time-varying content request traffic. This paper proposes exploiting deep reinforcement learning (DRL) to enhance the adaptability of utility-driven, time-to-live (TTL)-based caching. Employing DRL with long short-term memory (LSTM) helps a caching agent learn to adapt to the temporal correlation of content popularities, shortening the transient state before the optimal steady state is reached. In addition, we carefully design the state and action spaces of the DRL formulation to overcome the curse of dimensionality, one of the most frequently raised issues in machine-learning-based approaches. Experimental results show that policies trained by DRL can outperform the conventional utility-driven caching algorithm in some non-stationary environments where content request traffic changes rapidly.
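The core mechanism the abstract describes, per-class TTL knobs whose settings determine per-class hit ratios and hence an aggregate utility, can be illustrated with a minimal sketch. The class names, weights, the reset-on-access TTL variant, and the log-utility below are illustrative assumptions, not the paper's exact formulation; in the paper's setting, a DRL agent would tune the TTLs online rather than leave them fixed.

```python
import math

class TTLCache:
    """Minimal TTL cache sketch: each content class has its own TTL knob.

    A utility-driven controller (e.g., the paper's DRL agent) would tune
    these TTLs online; here they are fixed constants for illustration.
    """

    def __init__(self, ttl_per_class):
        self.ttl = dict(ttl_per_class)  # class name -> TTL (time units)
        self.expiry = {}                # content id -> expiration time

    def request(self, content, cls, now):
        """Return True on a hit; every access (re)starts the TTL timer."""
        hit = self.expiry.get(content, float("-inf")) > now
        self.expiry[content] = now + self.ttl[cls]  # reset-on-access TTL
        return hit

def log_utility(hit_ratios, weights):
    """Weighted log-utility of per-class hit ratios (one common choice
    for service differentiation; the floor avoids log(0))."""
    return sum(w * math.log(max(h, 1e-9)) for h, w in zip(hit_ratios, weights))

# Same item requested once per time unit: a TTL longer than the
# inter-arrival time ("gold", TTL=2.0) yields hits after the cold miss,
# while a shorter TTL ("bronze", TTL=0.5) always expires in between.
cache = TTLCache({"gold": 2.0, "bronze": 0.5})
gold_hits = [cache.request("item-g", "gold", float(t)) for t in range(5)]
bronze_hits = [cache.request("item-b", "bronze", float(t)) for t in range(5)]
print(gold_hits)    # [False, True, True, True, True]
print(bronze_hits)  # [False, False, False, False, False]
print(log_utility([0.8, 0.0], [2.0, 1.0]))
```

Raising a class's TTL raises its hit ratio at the cost of cache occupancy, which is exactly the trade-off a utility-maximizing controller must balance across classes; the adaptability problem arises because the popularity process driving those hit ratios changes over time.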
KSP Suggested Keywords
Changing environment, Deep reinforcement learning, Learning-based, Long short-term memory (LSTM), Non-stationary environment, Optimal state, Reinforcement learning (RL), Service differentiation, Slow convergence, Temporal correlation, Time-to-live (TTL)