ETRI Knowledge Sharing Platform

Train Throughput Analysis of Distributed Reinforcement Learning
Cited 0 times in Scopus
Authors
Sooyoung Jang, Noh-Sam Park
Issue Date
2020-10
Citation
International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp. 1189-1192
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICTC49870.2020.9289179
Abstract
Distributed deep reinforcement learning can easily increase the train throughput, defined as the number of timesteps consumed per second during training, simply by adding computing nodes to a cluster, which makes it an essential technique for solving complex problems. The more complicated the virtual learning environment and the policy network become, the more CPU computing power is required in the rollout phase and the more GPU computing power is required in the policy-update phase. Recall that reinforcement learning iterates, over millions of iterations, between acquiring data through rollouts in the virtual learning environment and updating the policy from that data. In this paper, a train throughput analysis is performed with RLlib and IMPALA on two problems: CartPole, a simple problem, and Pong, a relatively complex one. The effects of various scalability metrics, clustering, and observation dimensions on train throughput are analyzed. Through this analysis, we show that 1) the train throughput varies significantly with the scalability metrics, 2) it is vital to monitor the bottleneck in the train throughput and configure the cluster accordingly, and 3) when GPU computing power is the bottleneck, reducing the observation dimensions can be a great option, as the train throughput increases up to 3 times when the dimension is reduced from 84 to 42.
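For reference, a setup of the kind described in the abstract can be sketched with RLlib's IMPALA trainer. The snippet below uses the agents-style API that was current around the paper's publication (Ray 1.x); the environment name, worker count, GPU count, and the 42x42 observation setting are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumptions, not the paper's configuration): measure train
# throughput, i.e. environment timesteps consumed per second of wall-clock time,
# for IMPALA on Pong with a downscaled Atari observation.
import time

import ray
from ray.rllib.agents.impala import ImpalaTrainer

ray.init()  # or ray.init(address="auto") to join an existing cluster

config = {
    "env": "PongNoFrameskip-v4",  # the "relatively complex" task in the paper
    "num_workers": 8,             # rollout workers (CPU-bound phase)
    "num_gpus": 1,                # learner GPU (policy-update phase)
    "model": {"dim": 42},         # Atari preprocessor output: 42x42 instead of the default 84x84
}

trainer = ImpalaTrainer(config=config)

start = time.time()
result = None
for _ in range(10):
    result = trainer.train()  # one training iteration per call
elapsed = time.time() - start

print(f"train throughput: {result['timesteps_total'] / elapsed:.0f} timesteps/s")
```

Repeating this measurement while varying the worker count, GPU count, cluster size, and the `dim` setting is one way to reproduce the kind of throughput comparison the abstract describes.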
KSP Keywords
Computing power, Deep reinforcement learning, GPU computing, Policy network, Policy update, Reinforcement learning (RL), Throughput analysis, Complex problems, Virtual learning environment