ETRI Knowledge Sharing Platform

Multi-Agent Deep Reinforcement Learning using Attentive Graph Neural Architectures for Real-Time Strategy Games
Cited 9 times in Scopus
Authors
Won Joon Yun, Sungwon Yi, Joongheon Kim
Issue Date
2021-10
Citation
International Conference on Systems, Man, and Cybernetics (SMC) 2021, pp.1-8
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/SMC52423.2021.9658625
Abstract
In real-time strategy (RTS) game artificial intelligence research, various multi-agent deep reinforcement learning (MADRL) algorithms are widely and actively used nowadays. Most of this research is based on the StarCraft II environment because it is the most well-known RTS game worldwide. Our proposed MADRL-based algorithm fundamentally builds on a distributed MADRL method called QMIX. In addition to QMIX-based distributed computation, we consider state categorization, a novel preprocessing method for representing graph attention. Furthermore, self-attention mechanisms are used to identify the relationships among agents in the form of graphs. Based on these approaches, we propose a categorized state graph attention policy (CSGA-policy). As observed in the performance evaluation of our proposed CSGA-policy in the most well-known StarCraft II simulation environment, the proposed algorithm works well in various settings, as expected.
KSP Keywords
Attention mechanism, Deep reinforcement learning, Distributed Computation, Performance evaluation, RTS games, Real-time strategy games, Reinforcement learning(RL), Simulation Environment, StarCraft II, State graph, World-wide
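The abstract describes using self-attention to identify relationships among agents as a graph. A minimal sketch of that idea is scaled dot-product self-attention over per-agent feature vectors, where the resulting attention matrix can be read as a soft agent-to-agent adjacency. The single-head form, weight shapes, and function name below are illustrative assumptions, not the paper's exact CSGA-policy architecture:

```python
import numpy as np

def agent_self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over agent feature rows.

    X: (n_agents, d) agent feature matrix (hypothetical encoding).
    Wq, Wk, Wv: (d, d_k) learned projection matrices.
    Returns attended features (n_agents, d_k) and the (n_agents, n_agents)
    attention weights, interpretable as a soft agent-relationship graph.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Numerically stable row-wise softmax: each agent gets a
    # probability distribution over all agents (including itself).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

In a QMIX-style setup, the attended per-agent features would feed each agent's utility network, whose outputs are then combined by a monotonic mixing network into a joint action-value; that mixing step is not shown here.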