ETRI Knowledge Sharing Platform

Conference Paper: Position-aware Location Regression Network for Temporal Video Grounding
Cited 2 times in Scopus
Authors
김선오, 윤기민, 최진영
Publication Date
November 2021
Source
International Conference on Advanced Video and Signal-based Surveillance (AVSS) 2021, pp.1-8
DOI
https://dx.doi.org/10.1109/AVSS52988.2021.9663815
Funded Project
21HS4600, (Deep-View Subproject 1) Development of a High-Performance Visual Discovery Platform for Real-Time Understanding and Prediction of Large-Scale Video Data, 배유석
Abstract
The key to successful grounding for video surveillance is to understand a semantic phrase corresponding to important actors and objects. Conventional methods ignore comprehensive contexts for the phrase or require heavy computation for multiple phrases. To understand comprehensive contexts with only one semantic phrase, we propose Position-aware Location Regression Network (PLRN) which exploits position-aware features of a query and a video. Specifically, PLRN first encodes both the video and query using positional information of words and video segments. Then, a semantic phrase feature is extracted from an encoded query with attention. The semantic phrase feature and encoded video are merged and made into a context-aware feature by reflecting local and global contexts. Finally, PLRN predicts start, end, center, and width values of a grounding boundary. Our experiments show that PLRN achieves competitive performance over existing methods with less computation time and memory.
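Two pieces of the pipeline described above can be illustrated in a short sketch: encoding positional information for words and video segments, and predicting a grounding boundary as both (start, end) and (center, width). This is an illustrative reconstruction, not the authors' code: the sinusoidal encoding scheme and the rule for merging the two boundary parameterizations are assumptions made for the example.

```python
import math

def sinusoidal_position(pos, d):
    """One sinusoidal positional-encoding vector (the standard Transformer
    scheme); assumed here as the 'positional information' added to word and
    video-segment features -- the abstract does not fix the exact scheme."""
    vec = []
    for i in range(d):
        angle = pos / (10000 ** ((2 * (i // 2)) / d))
        vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return vec

def fuse_boundary(start, end, center, width):
    """Merge the two redundant boundary parameterizations -- (start, end)
    and (center, width) -- into a single interval by averaging; the
    averaging rule is an illustrative assumption."""
    s2, e2 = center - width / 2.0, center + width / 2.0
    return ((start + s2) / 2.0, (end + e2) / 2.0)

# Example: position 0 of a 4-dim encoding, and two consistent boundary
# predictions that fuse back to the same interval.
enc = sinusoidal_position(0, 4)
s, e = fuse_boundary(0.2, 0.8, 0.5, 0.6)
```

Predicting the boundary in both parameterizations gives the regression head complementary supervision signals (endpoints versus extent), which is one plausible reason the paper regresses all four values.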
KSP Suggested Keywords
Competitive performance, Context-aware feature, Conventional methods, computation time, positional information, video surveillance