ETRI Knowledge Sharing Platform



Details

Conference Paper: Future Trajectory Prediction via RNN and Maximum Margin Inverse Reinforcement Learning
Cited 8 times in Scopus
Authors
최두섭, 안택현, 안경환, 최정단
Publication Date
December 2018
Source
International Conference on Machine Learning and Applications (ICMLA) 2018, pp.125-130
DOI
https://dx.doi.org/10.1109/ICMLA.2018.00026
Project
18HS1400, Development of a driving decision engine supporting SAE Level 4 autonomous driving in general road environments based on imitation of driver driving experience, PI: 최정단
Abstract
In this paper, we propose a future trajectory prediction framework based on a recurrent neural network (RNN) and maximum margin inverse reinforcement learning (IRL) for the task of predicting the future trajectories of agents in dynamic scenes. Given the current position of a target agent and the corresponding static scene information, an RNN is trained to produce the next position that is closest to the true next position while maximizing the proposed reward function. The reward function is trained at the same time to maximize the margin between the rewards of the true next position and its estimate. The reward function plays the role of a regularizer when training the parameters of the proposed network, so the trained network is able to reason about the agent's next position much better. We evaluated our model on the public KITTI dataset. Experimental results show that the proposed method significantly improves prediction accuracy compared to other baseline methods.
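The joint objective described in the abstract (a prediction loss regularized by a max-margin IRL term) can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual formulation: the hinge form of the margin term, the weighting `alpha`, the `margin` value, and the toy quadratic reward are all hypothetical stand-ins for the learned reward over position and scene features.

```python
import numpy as np

def max_margin_irl_loss(reward_true, reward_pred, margin=1.0):
    """Hinge-style maximum-margin term: penalizes the model unless the
    reward of the true next position exceeds the reward of the
    predicted next position by at least `margin`."""
    return max(0.0, margin - (reward_true - reward_pred))

def combined_loss(pos_true, pos_pred, reward_fn, alpha=0.5, margin=1.0):
    """Illustrative combined objective: squared prediction error plus
    the weighted max-margin IRL term, so the reward function acts as
    a regularizer on the position predictor."""
    pred_err = np.sum((pos_true - pos_pred) ** 2)
    irl_term = max_margin_irl_loss(reward_fn(pos_true), reward_fn(pos_pred), margin)
    return pred_err + alpha * irl_term

# Toy reward preferring positions near the origin (hypothetical
# stand-in for the learned reward network).
reward = lambda p: -np.sum(p ** 2)

true_pos = np.array([1.0, 2.0])
pred_pos = np.array([1.2, 1.8])
loss = combined_loss(true_pos, pred_pos, reward)
```

When the true position already out-scores the prediction by more than the margin, the hinge term vanishes and only the prediction error drives the update; otherwise the reward network and the predictor push against each other, which is the regularizing effect the abstract describes.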
KSP Suggested Keywords
Dynamic scene, Inverse reinforcement learning, Maximum margin, Prediction accuracy, Prediction framework, Recurrent Neural Network(RNN), Reinforcement Learning(RL), reward function, trajectory prediction