ETRI Knowledge Sharing Platform


Details

Conference Paper: Fast Reinforcement Learning Using Stochastic Shortest Paths for a Mobile Robot
Cited 10 times in Scopus
Authors
권우영, 서일홍, 이상훈, 조영조
Publication Date
2007-10
Source
International Conference on Intelligent Robots and Systems (IROS) 2007, pp.82-87
DOI
https://dx.doi.org/10.1109/IROS.2007.4399040
Research Project
07MI1100, Development and Standardization of Embedded Component Technology for URC, 황대환
Abstract
Reinforcement learning (RL) has been used as a learning mechanism for a mobile robot to learn state-action relations without a priori knowledge of its working environment. However, most RL methods suffer from slow convergence to the optimum state-action sequence. In this paper, we aim to improve learning speed by combining an existing Q-learning method with a shortest-path-finding algorithm. To integrate the shortest-path algorithm with the Q-learning method, a stochastic state-transition model is used to store the previously observed state, the previous action, and the current state. Whenever the robot reaches a goal, a Stochastic Shortest Path (SSP) is found from the stochastic state-transition model, and state-action pairs on the SSP are treated as more significant during action selection. With this learning method, learning speed is boosted compared with classical RL methods. To show the validity of the proposed learning technique, several simulation and experimental results are presented. ©2007 IEEE.
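The mechanism described in the abstract can be sketched in code: a standard Q-learning update, a transition-count model built from each (previous state, action, current state) observation, and an end-of-episode SSP extraction that makes path pairs more significant in action selection. This is a minimal illustrative sketch, not the paper's implementation; the class and parameter names, the reward scheme, the -log-probability edge cost, and the Q-value bonus used to mark SSP pairs are all assumptions.

```python
import math
import random
import heapq
from collections import defaultdict

# Sketch of the idea from the abstract: Q-learning plus a stochastic
# state-transition model from which a Stochastic Shortest Path (SSP) is
# extracted whenever the robot reaches the goal. Names and reward values
# below are illustrative assumptions, not taken from the paper.

class SSPQLearner:
    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.2, bonus=1.0):
        self.q = defaultdict(float)    # Q-values keyed by (state, action)
        self.model = defaultdict(int)  # counts keyed by (prev_state, action, state)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon, self.bonus = alpha, gamma, epsilon, bonus

    def select(self, s):
        # epsilon-greedy action selection over current Q-values
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s2):
        # store the observed transition, then do a standard Q-learning backup
        self.model[(s, a, s2)] += 1
        best = max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

    def ssp_boost(self, start, goal):
        # Dijkstra over observed transitions; edge cost -log p(s'|s,a) makes
        # the minimum-cost path the empirically most probable route to goal.
        totals, out = defaultdict(int), defaultdict(list)
        for (s, a, s2), n in self.model.items():
            totals[(s, a)] += n
        for (s, a, s2), n in self.model.items():
            out[s].append((-math.log(n / totals[(s, a)]), a, s2))
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, s = heapq.heappop(pq)
            if s == goal:
                break
            if d > dist.get(s, float("inf")):
                continue
            for cost, a, s2 in out[s]:
                if d + cost < dist.get(s2, float("inf")):
                    dist[s2], prev[s2] = d + cost, (s, a)
                    heapq.heappush(pq, (d + cost, s2))
        # walk the SSP back from the goal, boosting its state-action pairs
        s = goal
        while s in prev:
            s, a = prev[s]
            self.q[(s, a)] += self.bonus

# Toy deterministic corridor: states 0..4, goal at 4, actions move -1/+1.
random.seed(0)
agent = SSPQLearner(actions=[-1, +1])
for episode in range(30):
    s = 0
    while s != 4:
        a = agent.select(s)
        s2 = min(4, max(0, s + a))
        agent.update(s, a, 1.0 if s2 == 4 else -0.01, s2)
        s = s2
    agent.ssp_boost(0, 4)  # reinforce the path found in this episode
```

Because every episode ends by adding a bonus along the recovered SSP, rightward actions along the corridor accumulate much higher Q-values than in plain Q-learning, which is the speed-up mechanism the abstract describes.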
KSP Suggested Keywords
Action Sequence, Current state, Learning Speed, Learning methods, Mobile robots, Path finding algorithm, Priori knowledge, Reinforcement Learning(RL), Slow convergence, State-transition model, action selection