ETRI Knowledge Sharing Platform

Position-aware Location Regression Network for Temporal Video Grounding
Cited 4 times in Scopus
Authors
Sunoh Kim, Kimin Yun, Jin Young Choi
Issue Date
2021-11
Citation
International Conference on Advanced Video and Signal-based Surveillance (AVSS) 2021, pp.1-8
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/AVSS52988.2021.9663815
Abstract
The key to successful grounding for video surveillance is to understand a semantic phrase corresponding to important actors and objects. Conventional methods either ignore the comprehensive context of the phrase or require heavy computation to handle multiple phrases. To capture comprehensive context with only one semantic phrase, we propose the Position-aware Location Regression Network (PLRN), which exploits position-aware features of a query and a video. Specifically, PLRN first encodes both the video and the query using positional information of words and video segments. Then, a semantic phrase feature is extracted from the encoded query with attention. The semantic phrase feature and the encoded video are merged into a context-aware feature that reflects local and global contexts. Finally, PLRN predicts the start, end, center, and width values of the grounding boundary. Our experiments show that PLRN achieves competitive performance compared with existing methods while using less computation time and memory.
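
The abstract outlines a four-step pipeline: positional encoding of the query and video, attention-based extraction of a single semantic phrase feature, fusion into a context-aware feature that reflects local and global contexts, and regression of start, end, center, and width. The sketch below illustrates that flow in PyTorch. It is not the authors' implementation: every module name, dimension, and design detail (learned positional embeddings, multiplicative fusion, a convolutional local branch plus a self-attention global branch) is an assumption for illustration only.

```python
# Minimal sketch of the pipeline described in the abstract (not the authors' code).
# All names, dimensions, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn


class PLRNSketch(nn.Module):
    def __init__(self, dim=256, n_words=300, n_segments=128):
        super().__init__()
        # Positional information for query words and video segments (learned here).
        self.word_pos = nn.Embedding(n_words, dim)
        self.seg_pos = nn.Embedding(n_segments, dim)
        # Attention that pools the encoded query into one semantic phrase feature.
        self.phrase_attn = nn.Linear(dim, 1)
        # Local (convolutional) and global (self-attention) context modelling.
        self.local_ctx = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.global_ctx = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Regression head for start, end, center, and width of the boundary.
        self.boundary_head = nn.Linear(dim, 4)

    def forward(self, video_feats, query_feats):
        # video_feats: (B, T, dim), query_feats: (B, L, dim)
        B, T, _ = video_feats.shape
        L = query_feats.shape[1]
        video = video_feats + self.seg_pos(torch.arange(T, device=video_feats.device))
        query = query_feats + self.word_pos(torch.arange(L, device=query_feats.device))

        # One semantic phrase feature via attention over the query words.
        attn = torch.softmax(self.phrase_attn(query), dim=1)       # (B, L, 1)
        phrase = (attn * query).sum(dim=1, keepdim=True)           # (B, 1, dim)

        # Merge the phrase with the encoded video, then add local and global context.
        fused = video * phrase                                     # (B, T, dim)
        local = self.local_ctx(fused.transpose(1, 2)).transpose(1, 2)
        global_, _ = self.global_ctx(fused, fused, fused)
        context = fused + local + global_

        # Pool over time and regress normalized start, end, center, width values.
        pooled = context.mean(dim=1)                               # (B, dim)
        return torch.sigmoid(self.boundary_head(pooled))           # (B, 4)


if __name__ == "__main__":
    model = PLRNSketch()
    video = torch.randn(2, 128, 256)   # 2 clips, 128 segments, 256-d features
    query = torch.randn(2, 12, 256)    # 2 queries, 12 words, 256-d features
    print(model(video, query).shape)   # torch.Size([2, 4])
```

Predicting center and width alongside start and end, as the abstract states, gives the regression head redundant but complementary views of the same boundary; the single pooled phrase feature is what keeps the computation lighter than multi-phrase approaches.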
KSP Keywords
Competitive performance, Context-aware feature, Conventional methods, Computation time, Positional information, Video surveillance