ETRI Knowledge Sharing Platform


Detailed Information

Journal Article: Action-Driven Visual Object Tracking With Deep Reinforcement Learning
Cited 35 times in Scopus; downloaded 16 times
Authors
윤상두, 최종원, 유영준, 윤기민, 최진영
Publication Date
June 2018
Source
IEEE Transactions on Neural Networks and Learning Systems, v.29 no.6, pp.2239-2252
ISSN
2162-237X
Publisher
IEEE
DOI
https://dx.doi.org/10.1109/TNNLS.2018.2801826
Funded Project
18HS4600, (DeepView Part 1) Development of a High-Performance Visual Discovery Platform for Real-Time Understanding and Prediction of Large-Scale Video Data, PI: 박종열
Abstract
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to a change of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through the evaluation on the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on a graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
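The abstract describes tracking as a sequence of discrete actions that iteratively adjust a bounding box rather than regressing its coordinates directly. The sketch below illustrates that per-frame action loop only; the action set, step sizes, and the `policy` callable are illustrative assumptions, not the paper's trained network or exact parameters.

```python
# Minimal sketch of an action-driven bounding-box update loop.
# The policy is a hypothetical stand-in for the paper's action-selection
# network: it looks at the current box and returns one discrete action.

ACTIONS = ("left", "right", "up", "down", "scale_up", "scale_down", "stop")

def apply_action(box, action, step=0.03, scale=0.05):
    """Apply one discrete action to box = (cx, cy, w, h).
    Translation steps are proportional to the current box size."""
    cx, cy, w, h = box
    if action == "left":
        cx -= step * w
    elif action == "right":
        cx += step * w
    elif action == "up":
        cy -= step * h
    elif action == "down":
        cy += step * h
    elif action == "scale_up":
        w *= 1 + scale
        h *= 1 + scale
    elif action == "scale_down":
        w *= 1 - scale
        h *= 1 - scale
    return (cx, cy, w, h)

def track_step(box, policy, max_actions=20):
    """Refine the box for one frame: query the policy and apply its
    actions until it emits 'stop' or the action budget runs out."""
    for _ in range(max_actions):
        action = policy(box)
        if action == "stop":
            break
        box = apply_action(box, action)
    return box
```

In the paper's setting the policy is a deep network whose action choices are pretrained with supervised learning and RL; here any callable returning an element of `ACTIONS` works, e.g. a toy policy that moves the box right three times and then stops.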
Keywords
Deep neural network, reinforcement learning (RL), visual tracking
KSP Suggested Keywords
Benchmark datasets, Bounding Box, Competitive performance, Deep neural network(DNN), Deep reinforcement learning, Graphic Processing Unit(GPU), Online adaptation, Partially labeled data, Real-Time, Reinforcement Learning(RL), Semi-Supervised Learning(SSL)