ETRI Knowledge Sharing Platform



[Conference Paper] Train Throughput Analysis of Distributed Reinforcement Learning
장수영, 박노삼
International Conference on Information and Communication Technology Convergence (ICTC) 2020, pp.1189-1192
20ZR1100, Research on Hyper-Connected Intelligence Technology that Autonomously Connects, Controls, and Evolves, 박준희
Distributed deep reinforcement learning can increase the train throughput, defined as the timesteps per second used for training, simply by adding computing nodes to a cluster, which makes it an essential technique for solving complex problems. The more complicated the virtual learning environment and the policy network become, the more CPU computing power the rollout phase and the more GPU computing power the policy-update phase require. Recall that reinforcement learning iterates between acquiring data through rollouts in the virtual learning environment and updating the policy from that data, over millions of iterations. In this paper, a train throughput analysis is performed with RLlib and IMPALA on two problems: CartPole, a simple one, and Pong, a relatively complex one. The effects of various scalability metrics, clustering, and observation dimensions on train throughput are analyzed. Through the analysis, we show that 1) train throughput varies significantly with the scalability metrics, 2) it is vital to monitor the bottleneck in train throughput and configure the cluster accordingly, and 3) when GPU computing power is the bottleneck, reducing the observation dimensions is a strong option, as train throughput increases up to 3 times when the dimension is reduced from 84 to 42.
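The bottleneck argument in the abstract can be illustrated with a minimal sketch. This is not the paper's measurement code; the function, its parameters, and all numbers are hypothetical, assuming a pipelined loop in which CPU rollout workers run in parallel with a single GPU updater, so each training batch effectively costs the slower of the two phases.

```python
# Hypothetical sketch of train throughput (timesteps/sec) in a pipelined
# rollout -> policy-update loop. All numbers are illustrative, not results
# from the paper.

def train_throughput(batch_timesteps, rollout_sec, update_sec, num_workers=1):
    """Timesteps per second when rollout and update phases overlap.

    Assumes rollout time scales inversely with the number of CPU rollout
    workers, while the GPU policy update is a fixed serial cost per batch.
    """
    effective_rollout = rollout_sec / num_workers  # rollout parallelizes
    bottleneck = max(effective_rollout, update_sec)  # slower phase dominates
    return batch_timesteps / bottleneck

# CPU-bound at first: adding workers raises throughput...
print(train_throughput(500, rollout_sec=2.0, update_sec=0.5, num_workers=1))   # 250.0
print(train_throughput(500, rollout_sec=2.0, update_sec=0.5, num_workers=4))   # 1000.0
# ...until the GPU update becomes the bottleneck and more workers stop helping.
print(train_throughput(500, rollout_sec=2.0, update_sec=0.5, num_workers=16))  # 1000.0
```

This is why monitoring the bottleneck matters: once the update phase dominates, only shrinking the GPU workload (e.g., smaller observations and hence a cheaper policy network) raises throughput further.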
KSP Keywords
Computing power, Deep reinforcement learning, GPU computing, Policy network, Policy update, Reinforcement Learning(RL), Throughput Analysis, complex problems, virtual learning environment