ETRI Knowledge Sharing Platform





Journal Article: A Novel and Efficient Influence-Seeking Exploration in Deep Multiagent Reinforcement Learning
Cited 6 times in Scopus
유병현, Devarani Devi Ningombam, 이성원, 김현우, 정의석, 한란, 송화전
IEEE Access, v.10, pp.47741-47753
21ZS1100, Research on Core Source Technologies of Self-Improving Integrated Artificial Intelligence, 송화전
Although recent years have witnessed notable success in cooperative multi-agent reinforcement learning (MARL), efficient exploration remains challenging, primarily due to the complex dynamics of inter-agent interactions that give rise to high-dimensional action spaces. For efficient exploration, it is necessary to quantify the influences that represent interactions among agents and use them to obtain more information about the complexity of multi-agent systems. In this paper, we propose a novel influence-seeking exploration (ISE) scheme, which encourages agents to preferentially explore regions of the action space that are significantly influenced by other agents, thereby speeding up learning. To measure the influence of other agents on action selection, we use the variance of joint action-values under different agent action sets, obtained via an estimation technique that reduces computational overhead. To this end, we first present an analytical approach inspired by the concept of approximated variance propagation and then apply it to an exploration scheme. We evaluate the proposed exploration method on a set of StarCraft II micromanagement tasks as well as modified predator-prey tasks. Compared to state-of-the-art methods, the proposed method achieves performance improvements of approximately 10% on StarCraft II micromanagement and 50% on modified predator-prey tasks.
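The core idea in the abstract can be illustrated with a minimal sketch: score each of an agent's actions by the variance of the joint action-value over the other agent's actions, then bias exploration toward high-variance (heavily influenced) actions. This is a toy two-agent, tabular stand-in under assumed names and shapes, not the paper's variance-propagation estimator or its deep-network setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): 2 agents, 4 actions each,
# and a random joint action-value table Q[a0, a1].
n_actions = 4
Q = rng.normal(size=(n_actions, n_actions))

def influence_scores(Q, agent=0):
    """For each action of `agent`, the variance of the joint action-value
    over the other agent's actions -- a crude stand-in for the paper's
    approximated-variance-propagation estimate of inter-agent influence."""
    other_axis = 1 if agent == 0 else 0
    return Q.var(axis=other_axis)

def influence_seeking_action(Q, agent=0, epsilon=0.3):
    """With probability epsilon, explore by sampling actions in proportion
    to their influence score, so regions of the action space strongly
    influenced by the other agent are visited more often; otherwise act
    greedily w.r.t. the agent's marginal action-value."""
    if rng.random() < epsilon:
        scores = influence_scores(Q, agent)
        probs = scores / scores.sum()
        return int(rng.choice(len(probs), p=probs))
    marginal = Q.max(axis=1 if agent == 0 else 0)
    return int(np.argmax(marginal))

print(influence_seeking_action(Q, agent=0))
```

In a deep MARL setting the table `Q` would be replaced by a joint action-value network, and the variance would be estimated rather than computed exactly, which is where the paper's propagation technique comes in.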
KSP Suggested Keywords
Analytical Approach, Efficient exploration, Estimation Technique, Explore action, High dimension, Joint action, Multi-agent system(MAS), Reinforcement Learning(RL), StarCraft II, action selection, complex dynamics
This work may be used under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.