ETRI Knowledge Sharing Platform

Details

Journal Article: Regularising Neural Networks for Future Trajectory Prediction via Inverse Reinforcement Learning Framework
Cited 8 times in Scopus, downloaded 16 times
Authors
최두섭, 민경욱, 최정단
Publication Date
2020-08
Source
IET Computer Vision, v.14 no.5, pp.192-200
ISSN
1751-9632
Publisher
IET
DOI
https://dx.doi.org/10.1049/iet-cvi.2019.0546
Project
19HS1400, Development of a driving decision engine supporting SAE Level 4 autonomous driving in general road environments based on imitation of driver driving experience, 최정단
Abstract
Predicting distant future trajectories of agents in a dynamic scene is challenging because the future trajectory of an agent is affected not only by its past trajectory but also by the scene context. To tackle this problem, the authors propose a model based on recurrent neural networks and a novel method for training it. The proposed model follows an encoder-decoder architecture in which the encoder encodes the inputs (past trajectory and scene context information), while the decoder produces a future trajectory from the context vector given by the encoder. To make the proposed model better utilise the scene context information, the authors let the encoder predict the positions in the past trajectory and let a reward function evaluate those positions along with the scene context information generated from them. The reward function, which is trained simultaneously with the proposed model, plays the role of a regulariser for the model during this joint training. The authors evaluate the proposed model on several public benchmark datasets. The experimental results show that the prediction performance of the proposed model is greatly improved by the proposed regularisation method and that it outperforms state-of-the-art models in terms of accuracy.
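
The abstract outlines an encoder-decoder recurrent model whose decoder rolls out positions from a context vector, with a jointly trained reward function acting as a regulariser. Below is a minimal illustrative sketch of that general setup, not the authors' implementation: the module sizes, the form of the reward network, and the loss weighting (`reward_weight`) are assumptions, and the scene-context input is omitted for brevity.

```python
# Illustrative sketch only (not the paper's code): an encoder-decoder RNN that
# predicts future positions from a past trajectory, plus a small reward
# network that scores predicted positions and serves as a regulariser.
import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    def __init__(self, hidden_dim=64, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.decoder = nn.GRUCell(input_size=2, hidden_size=hidden_dim)
        self.out = nn.Linear(hidden_dim, 2)              # predicted (x, y) offset
        # Assumed reward network: scores a single position; the paper's reward
        # also takes scene context into account.
        self.reward = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, past):                             # past: (B, T_obs, 2)
        _, h = self.encoder(past)                        # h: (1, B, H) context vector
        h = h.squeeze(0)
        pos = past[:, -1, :]                             # start from last observed position
        preds = []
        for _ in range(self.pred_len):
            h = self.decoder(pos, h)
            pos = pos + self.out(h)                      # predict displacement, accumulate
            preds.append(pos)
        return torch.stack(preds, dim=1)                 # (B, T_pred, 2)


def training_loss(model, past, future, reward_weight=0.1):
    """L2 prediction loss plus an assumed reward-based regularisation term."""
    pred = model(past)
    l2 = ((pred - future) ** 2).sum(dim=-1).mean()
    # Encourage predicted positions to score as highly under the learned
    # reward as the ground-truth positions do.
    reg = (model.reward(future).mean() - model.reward(pred).mean()).clamp(min=0)
    return l2 + reward_weight * reg
```

In this sketch the regularisation term simply pushes predicted positions to score at least as highly under the learned reward as ground-truth positions; the inverse-reinforcement-learning-based training described in the paper is more involved.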
KSP Suggested Keywords
Benchmark datasets, Context Information, Context vector, Dynamic scene, Encoder and Decoder, Inverse reinforcement learning, Learning framework, Proposed model, Recurrent Neural Network(RNN), Regularisation method, Reinforcement Learning(RL)