ETRI Knowledge Sharing Platform

Future Trajectory Prediction via RNN and Maximum Margin Inverse Reinforcement Learning
Cited 13 times in Scopus
Authors
Dooseop Choi, Taeg-Hyun An, Kyounghwan Ahn, Jeongdan Choi
Issue Date
2018-12
Citation
International Conference on Machine Learning and Applications (ICMLA) 2018, pp.125-130
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICMLA.2018.00026
Abstract
In this paper, we propose a future trajectory prediction framework based on a recurrent neural network (RNN) and maximum margin inverse reinforcement learning (IRL) for the task of predicting future trajectories of agents in dynamic scenes. Given the current position of a target agent and the corresponding static scene information, an RNN is trained to produce the next position that is closest to the true next position while maximizing the proposed reward function. The reward function is trained at the same time to maximize the margin between the rewards of the true next position and of its estimate. The reward function acts as a regularizer when training the parameters of the proposed network, so the trained network is able to reason about the next position of the agent much better. We evaluated our model on the public KITTI dataset. Experimental results show that the proposed method significantly improves prediction accuracy compared to other baseline methods.
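The abstract describes two coupled objectives: the RNN is pushed toward the ground-truth next position while maximizing a learned reward, and the reward network is trained with a max-margin IRL loss so that the true next position out-scores the RNN's estimate. The sketch below illustrates that idea in PyTorch; the module names (TrajectoryRNN, RewardNet), the scene-feature dimensions, and the margin and weighting values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the RNN + maximum-margin IRL training idea (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryRNN(nn.Module):
    """GRU mapping (current position, scene feature) to a predicted next position."""
    def __init__(self, scene_dim=16, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(2 + scene_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # predicted (x, y)

    def forward(self, positions, scene_feats):
        # positions: (B, T, 2), scene_feats: (B, T, scene_dim)
        h, _ = self.gru(torch.cat([positions, scene_feats], dim=-1))
        return self.head(h)  # (B, T, 2)

class RewardNet(nn.Module):
    """Scores how plausible a candidate next position is given the current state."""
    def __init__(self, scene_dim=16, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + 2 + scene_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, positions, next_positions, scene_feats):
        x = torch.cat([positions, next_positions, scene_feats], dim=-1)
        return self.mlp(x).squeeze(-1)  # (B, T) scalar rewards

def training_step(rnn, reward, positions, next_true, scene_feats,
                  margin=1.0, lam=0.1):
    pred = rnn(positions, scene_feats)
    r_true = reward(positions, next_true, scene_feats)
    r_pred = reward(positions, pred.detach(), scene_feats)

    # Max-margin IRL loss: the true next position should out-score the estimate
    # by at least `margin` (margin value is an assumption).
    loss_reward = F.relu(margin - (r_true - r_pred)).mean()

    # RNN loss: stay close to ground truth while maximizing its own reward;
    # the reward term plays the regularizer role mentioned in the abstract.
    loss_rnn = F.mse_loss(pred, next_true) \
        - lam * reward(positions, pred, scene_feats).mean()
    return loss_rnn, loss_reward
```

In practice the two losses would be applied to their respective parameter sets (reward parameters frozen for the RNN's reward term, and vice versa); this sketch only shows how the margin objective and the reward-as-regularizer term fit together.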
KSP Keywords
Dynamic scene, Inverse reinforcement learning, Maximum margin, Prediction accuracy, Prediction framework, Recurrent Neural Network(RNN), Reinforcement Learning(RL), reward function, trajectory prediction