ETRI Knowledge Sharing Platform


Reusing Agent's Representations for Adaptation to Tuned-environment in Fighting Game
Cited 1 time in Scopus
Authors
Dae-Wook Kim, Sung-Yun Park, Seong-il Yang
Issue Date
2021-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2021, pp.1120-1124
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC52510.2021.9620988
Abstract
Reinforcement learning agents have been used for one-on-one fighting games and for quality assurance (QA) of commercial games. Despite their beyond-human-level performance, such agents are not easy to apply in practice because commercial games are frequently updated. In this paper, we propose a method for adapting a reinforcement learning agent to a slightly tuned environment by reusing the representations of its neural network. An agent trained with the proposed method converges at 2.11 million steps, at least 3 times faster than fine-tuning or training from scratch to reach the same competence. We also tested larger representation layers combined with smaller actor-critic layers; although this configuration fails to train the agent, it exhibits distinct characteristics. Finally, the action distributions of fully trained agents in each environment are analyzed. The entire process of adapting to a new environment presented in this paper gives game developers insights toward a game balancing framework.
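To illustrate the idea described in the abstract, below is a minimal PyTorch sketch of reusing a trained agent's representations: the shared representation layers of an agent trained in the original environment are copied into a new agent, and fresh actor-critic heads are trained in the tuned environment. The network architecture, layer sizes, and the choice to freeze the copied representation are assumptions made for illustration; the paper's actual implementation details are not given on this page.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Actor-critic network with shared representation layers.

    Dimensions are illustrative placeholders, not values from the paper.
    """
    def __init__(self, obs_dim=64, hidden_dim=128, n_actions=10):
        super().__init__()
        # Shared "representation" layers to be reused across environments.
        self.representation = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.actor = nn.Linear(hidden_dim, n_actions)  # policy logits
        self.critic = nn.Linear(hidden_dim, 1)         # state-value estimate

    def forward(self, obs):
        z = self.representation(obs)
        return self.actor(z), self.critic(z)

# Agent assumed to have been trained in the original environment.
source_agent = ActorCritic()

# New agent for the tuned environment: copy the learned representation,
# freeze it, and keep the freshly initialized actor-critic heads trainable.
target_agent = ActorCritic()
target_agent.representation.load_state_dict(
    source_agent.representation.state_dict())
for p in target_agent.representation.parameters():
    p.requires_grad = False

# The optimizer updates only the trainable (actor-critic) parameters.
optimizer = torch.optim.Adam(
    (p for p in target_agent.parameters() if p.requires_grad), lr=3e-4)
```

Under this reading, only the small actor-critic heads need to be retrained for each game tune, which is consistent with the reported speedup over fine-tuning the whole network or training from scratch.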
KSP Keywords
Actor-Critic, Commercial games, Game balancing, Game developers, Reinforcement Learning (RL), fine-tuning, learning agent, neural network, quality assurance