ETRI-Knowledge Sharing Platform


Reinforcement Learning to Achieve Real-Time Control of Triple Inverted Pendulum
Cited 17 times in Scopus
Authors: Jongchan Baek, Changhyeon Lee, Young Sam Lee, Soo Jeon, Soohee Han
Issue Date: 2024-02
Citation: Engineering Applications of Artificial Intelligence, v.128, pp.1-10
ISSN: 0952-1976
Publisher: Elsevier Ltd.
Language: English
Type: Journal Article
DOI: https://dx.doi.org/10.1016/j.engappai.2023.107518
Abstract
This work uses reinforcement learning (RL) to achieve the first-ever data-driven real-time control of an actual, not simulated, triple inverted pendulum (TIP) in a model-free way. A swing-up control task for the TIP is formulated as a Markov decision process with a dense reward function, then conducted in real time by using a model-free RL approach. To increase the sample efficiency of learning, a structure-aware virtual experience replay (VER) method is proposed; it works together with an off-policy actor-critic algorithm. The VER exploits the geometrically symmetric property of TIPs to create virtual sample trajectories from measured ones, then uses the resulting multifold augmented dataset to effectively train actor and critic networks during the learning process. These structure-infused training data provide additional information and hence increase the convergence speed of network learning. We combine the proposed VER with a state-of-the-art actor-critic algorithm, and then validate its effectiveness through numerical simulations. Notably, the inclusion of VER amplifies computational efficiency, reducing the requisite trials, training steps, and overall duration by approximately 66.67%. Finally, experiments demonstrate the real-time control capability of the proposed approach on an actual TIP system.
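The symmetry-based augmentation described in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: the state layout (cart position and velocity plus three link angles and angular rates), the scalar force action, the reflection-invariant reward, and the buffer API are all hypothetical choices made for the example; the actual actor-critic learner that consumes these samples is not shown.

```python
import random
from collections import deque

import numpy as np

# Hypothetical state layout for a cart-driven triple inverted pendulum:
#   state = [cart pos, cart vel, theta1, theta2, theta3, dtheta1, dtheta2, dtheta3]
# with a scalar action = horizontal force on the cart. The paper's exact state
# representation and sign conventions may differ; this only sketches the idea of
# symmetry-based virtual experience replay (VER).

MIRROR = np.array([-1.0] * 8)  # reflection about the vertical axis negates every component


def mirror_transition(s, a, r, s_next, done):
    """Create a virtual transition by reflecting a measured one.

    Assumes the TIP dynamics are invariant under negating positions, angles,
    velocities, and the applied force, and that the reward depends only on
    magnitudes (so it is unchanged by the reflection).
    """
    return (MIRROR * s, -a, r, MIRROR * s_next, done)


class VirtualExperienceReplay:
    """Replay buffer that stores each real transition together with its mirror."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))                    # measured sample
        self.buffer.append(mirror_transition(s, a, r, s_next, done))   # virtual sample

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next, done = map(np.asarray, zip(*batch))
        return s, a, r, s_next, done
```

In this sketch each measured transition contributes one mirrored virtual transition, doubling the data seen by the actor and critic networks during off-policy updates; the same pattern extends to any additional symmetries the system exposes.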
KSP Keywords
Actor-critic algorithm, Computational Efficiency, Control Capability, Control task, Data-Driven, Learning Process, Markov Decision process, Model-free, Numerical simulation (Trnsys16), Reinforcement learning (RL), Structure-aware