ETRI-Knowledge Sharing Platform

Conference Paper Fast Reinforcement Learning Using Stochastic Shortest Paths for a Mobile Robot
Cited 11 times in Scopus
Authors
Woo Young Kwon, Il Hong Suh, Sang Hoon Lee, Young-Jo Cho
Issue Date
2007-10
Citation
International Conference on Intelligent Robots and Systems (IROS) 2007, pp.82-87
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/IROS.2007.4399040
Abstract
Reinforcement learning (RL) has been used as a mechanism that lets a mobile robot learn state-action relations without a priori knowledge of its working environment. However, most RL methods suffer from slow convergence to the optimal state-action sequence. This paper improves learning speed by combining an existing Q-learning method with a shortest-path-finding algorithm. To integrate the two, a stochastic state-transition model stores the previously observed state, the previous action, and the current state. Whenever the robot reaches a goal, a Stochastic Shortest Path (SSP) is extracted from the stochastic state-transition model, and state-action pairs on the SSP are weighted as more significant during action selection. This learning method boosts learning speed compared with classical RL methods. Several simulations and experimental results illustrate the validity of the proposed technique. ©2007 IEEE.
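The scheme described in the abstract can be sketched in Python: a plain Q-learning agent that also records a stochastic state-transition model (observed (s, a) → s' pairs), extracts a shortest path over the observed transitions each time the goal is reached, and gives the state-action pairs on that path a bonus during action selection. This is a minimal sketch under stated assumptions — the corridor task, the hyperparameters, and the additive bonus are illustrative choices, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Hedged sketch of the abstract's idea: Q-learning plus a transition model.
# Whenever the goal is reached, a shortest path over observed transitions is
# extracted and its state-action pairs receive an action-selection bonus.
# The corridor task and all constants are illustrative assumptions.

N_STATES = 8           # corridor states 0..7, goal at 7
ACTIONS = (-1, +1)     # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
SSP_BONUS = 1.0        # extra preference for pairs on the stochastic shortest path

def step(s, a):
    """Deterministic corridor dynamics with walls at both ends."""
    return max(0, min(N_STATES - 1, s + a))

def shortest_path_pairs(trans, start, goal):
    """BFS over the observed transition model; returns the (s, a) pairs on a
    shortest observed path from start to goal (empty set if unreachable)."""
    frontier = [(start, [])]
    seen = {start}
    while frontier:
        s, path = frontier.pop(0)
        if s == goal:
            return set(path)
        for (s0, a), succs in trans.items():
            if s0 != s:
                continue
            for s1 in succs:
                if s1 not in seen:
                    seen.add(s1)
                    frontier.append((s1, path + [(s0, a)]))
    return set()

def choose(Q, ssp, s, rng):
    """Greedy w.r.t. Q plus the SSP bonus, ties broken at random."""
    vals = {a: Q[(s, a)] + (SSP_BONUS if (s, a) in ssp else 0.0) for a in ACTIONS}
    best = max(vals.values())
    return rng.choice([a for a, v in vals.items() if v == best])

def train(episodes=200, horizon=50, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)
    trans = defaultdict(set)     # (s, a) -> set of observed successor states
    ssp = set()
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = rng.choice(ACTIONS) if rng.random() < EPSILON else choose(Q, ssp, s, rng)
            s1 = step(s, a)
            trans[(s, a)].add(s1)
            r = 1.0 if s1 == N_STATES - 1 else 0.0
            Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s1, b)] for b in ACTIONS) - Q[(s, a)])
            s = s1
            if s == N_STATES - 1:            # goal reached: refresh the SSP
                ssp = shortest_path_pairs(trans, 0, N_STATES - 1)
                break
    return Q, ssp

Q, ssp = train()
print(sorted(ssp))   # the state-action pairs the SSP bonus now favours
```

Once the goal is reached for the first time, the SSP bonus steers action selection along the known successful route, so later episodes reinforce those Q-values much faster than undirected exploration would — the speed-up the abstract claims over classical RL.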
KSP Keywords
Action sequence, Current state, Learning speed, Learning methods, Mobile robots, Path-finding algorithm, A priori knowledge, Reinforcement Learning (RL), Slow convergence, State-transition model, Action selection