ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Reusing Agent's Representations for Adaptation to Tuned-environment in Fighting Game
Cited 1 time in Scopus, downloaded 2 times
Authors
김대욱, 박성윤, 양성일
Publication Date
October 2021
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2021, pp.1120-1124
DOI
https://dx.doi.org/10.1109/ICTC52510.2021.9620988
Project
21IH1600, Development of an Intelligent Game Service Platform Based on Meta-Play Recognition, 양성일
Abstract
Reinforcement learning agents have been used for one-on-one fighting battles and for quality assurance (QA) of commercial games. Despite achieving performance beyond human level, such agents are hard to apply in practice because commercial games are updated frequently. In this paper, we propose a method for adapting a reinforcement learning agent to a slightly tuned environment by reusing the representations of its neural network. An agent trained with the proposed method converges at 2.11 million steps, at least 3 times faster than fine-tuning or training from scratch to reach the same competence. We also tested larger representation layers combined with smaller actor-critic layers; although this configuration fails to train the agent, it exhibits distinct characteristics. Finally, the action distributions of fully trained agents are analyzed for each environment. The entire process of adapting to a new environment presented in this paper gives game developers insights toward a game balancing framework.
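
The abstract describes adapting the agent by reusing the neural network's representation layers while the actor-critic heads are trained anew for the tuned environment. Below is a minimal sketch of that idea, assuming a PyTorch actor-critic network; the layer sizes, names, and the choice to freeze the reused layers are illustrative assumptions, not details taken from the paper.

# Minimal sketch (PyTorch): keep the trained representation layers and
# re-initialize only the actor-critic heads before training in the tuned
# environment. All sizes and names below are illustrative assumptions.
import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 256):
        super().__init__()
        # Shared representation layers (to be reused across environments).
        self.representation = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Actor-critic heads (re-initialized for the tuned environment).
        self.actor = nn.Linear(hidden, n_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor):
        h = self.representation(obs)
        return self.actor(h), self.critic(h)


def adapt_to_tuned_env(trained: ActorCritic, obs_dim: int, n_actions: int) -> ActorCritic:
    """Build a new agent that reuses the trained representation layers."""
    new_agent = ActorCritic(obs_dim, n_actions)
    new_agent.representation.load_state_dict(trained.representation.state_dict())
    # Optionally freeze the reused representation so only the heads are trained.
    for p in new_agent.representation.parameters():
        p.requires_grad = False
    return new_agent


if __name__ == "__main__":
    source_agent = ActorCritic(obs_dim=64, n_actions=10)  # agent trained in the original game
    tuned_agent = adapt_to_tuned_env(source_agent, obs_dim=64, n_actions=10)
    logits, value = tuned_agent(torch.zeros(1, 64))
    print(logits.shape, value.shape)

The new agent's heads are then optimized in the tuned environment with the usual actor-critic training loop; whether the reused representation stays frozen or is fine-tuned is a design choice left open here.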
KSP Suggested Keywords
Actor-Critic, Commercial games, Game balancing, Game developers, Reinforcement Learning(RL), fine-tuning, learning agent, neural network, quality assurance