ETRI Knowledge Sharing Platform

Mastering Fighting Game Using Deep Reinforcement Learning With Self-play
Cited 10 times in Scopus
Authors: Dae-Wook Kim, Sungyun Park, Seong-il Yang
Issue Date: 2020-08
Citation: Conference on Games (CoG) 2020, pp.1-8
Language: English
Type: Conference Paper
DOI: https://dx.doi.org/10.1109/CoG47356.2020.9231639
Abstract
One-on-one fighting games have served as a bridge between board games and real-time simulation games in game AI research, since they require only moderate computation power and medium-scale complexity. In this paper, we propose a method for creating a fighting game AI agent using deep reinforcement learning with self-play and Monte Carlo Tree Search (MCTS). We also analyze various reinforcement learning configurations, such as changes to the state vector, reward shaping, and opponent composition, using a novel performance metric. The agent trained by the proposed method was evaluated against other AIs. The evaluation shows that mixing MCTS and self-play opponents in a 1:3 ratio allows the agent to overwhelm the other AIs with a 94.4% win rate. The fully trained agent understands the game mechanics: it waits until it is close to the enemy and performs actions at the optimal timing.
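
The abstract's key training detail is the opponent composition: during training, the learning agent faces MCTS-driven opponents and copies of itself in a 1:3 ratio. The short Python sketch below illustrates only that per-episode opponent sampling under this assumption; the names (sample_opponent_kind, MCTS_SHARE) and the random-draw scheme are illustrative placeholders, not the authors' implementation.

import random

# One MCTS opponent for every three self-play opponents, i.e. a 1:3 mix.
MCTS_SHARE = 0.25

def sample_opponent_kind(rng):
    # Decide which kind of opponent the next training episode is played against.
    return "mcts" if rng.random() < MCTS_SHARE else "self_play"

if __name__ == "__main__":
    rng = random.Random(0)
    draws = [sample_opponent_kind(rng) for _ in range(100000)]
    print("mcts fraction:", draws.count("mcts") / len(draws))  # expect roughly 0.25

In a full training loop, this draw would decide whether the episode's opponent policy comes from an MCTS search or from a pool of earlier snapshots of the learning agent itself.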
KSP Keywords
AI Agent, Computation power, Deep reinforcement learning, Fighting game AI, Monte carlo tree search, Optimal timing, Reinforcement Learning(RL), Size complexity, State vector, board games, performance metrics