ETRI Knowledge Sharing Platform


Trajectory Prediction using Attentive Visual Features
Cited 0 times in Scopus
Authors
Sungchan Oh, Jinyoung Moon
Issue Date
2024-07
Citation
International Conference on Advanced Video and Signal-based Surveillance (AVSS) 2024, pp.1-7
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/AVSS61716.2024.10672596
Abstract
This paper presents a new method for predicting the future trajectories of objects in video. Previous studies have focused mainly on the spatial attributes of objects, such as their bounding boxes or coordinates, while often overlooking the visual appearance of the objects and their surroundings. Our approach addresses this limitation by coupling visual feature extraction networks with trajectory prediction networks, yielding a significant improvement in predictive accuracy. Extensive testing on trajectory datasets captured from both first-person and bird's-eye views validates the method and shows a notable gain in prediction accuracy. These results confirm the effectiveness of integrated visual feature extraction in trajectory prediction models and underline the importance of modeling the dynamic relationships between objects and their surroundings for more precise predictions.
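The abstract does not include an implementation, but the core idea — fusing a visual embedding of the object's image crop with a motion embedding of its observed coordinates before decoding future positions — can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the dimensions, the stand-in encoders (the paper would use learned CNN/RNN networks), and the linear decoder are hypothetical, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
T_OBS, T_PRED = 8, 12      # observed / predicted timesteps
D_MOTION, D_VIS = 16, 32   # motion and visual embedding sizes

def encode_motion(coords, W):
    """Embed the observed (x, y) sequence; stands in for an RNN encoder."""
    return np.tanh(coords.ravel() @ W)            # (T_OBS*2,) -> (D_MOTION,)

def encode_visual(crop, W):
    """Embed simple crop statistics; stands in for a CNN visual backbone."""
    stats = np.array([crop.mean(), crop.std()])   # (2,) summary of the crop
    return np.tanh(stats @ W)                     # -> (D_VIS,)

def predict(coords, crop, params):
    """Fuse visual and motion embeddings, then decode future positions."""
    Wm, Wv, Wo = params
    fused = np.concatenate([encode_motion(coords, Wm),
                            encode_visual(crop, Wv)])
    # Decode small residual offsets accumulated onto the last observed
    # position, so an untrained model degrades to "stay put", not noise.
    offsets = (fused @ Wo).reshape(T_PRED, 2) * 0.01
    return coords[-1] + np.cumsum(offsets, axis=0)

# Toy input: a straight-line track and a random image crop of the object.
params = (rng.normal(size=(T_OBS * 2, D_MOTION)),
          rng.normal(size=(2, D_VIS)),
          rng.normal(size=(D_MOTION + D_VIS, T_PRED * 2)))
coords = np.stack([np.linspace(0, 7, T_OBS), np.full(T_OBS, 3.0)], axis=1)
crop = rng.random((24, 24))
future = predict(coords, crop, params)
print(future.shape)  # (12, 2): one (x, y) per predicted timestep
```

In a trained model, the concatenation step is where the visual branch lets the predictor react to appearance cues (e.g. a pedestrian turning their body) that coordinates alone cannot capture.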
KSP Keywords
Bounding Box, First-person, Prediction accuracy, Research paper, Spatial attributes, Visual Feature Extraction, new method, prediction model, predictive accuracy, trajectory prediction