ETRI Knowledge Sharing Platform


Convolution Neural Network based Video Coding Technique using Reference Video Synthesis
Cited 15 times in Scopus
Authors
Jung Kyung Lee, Nayoung Kim, Seunghyun Cho, Je-Won Kang
Issue Date
2018-11
Citation
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA-ASC) 2018, pp.505-508
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.23919/APSIPA.2018.8659611
Abstract
In this paper, we propose a novel video coding technique that uses a virtual reference (VR) video frame, synthesized by a convolutional neural network (CNN), for inter-coding. Specifically, the encoder generates a VR frame with a video interpolation CNN (VI-CNN) from two reconstructed pictures, one taken from the forward reference frames and the other from the backward reference frames. The VR frame is added to the reference picture lists to exploit further temporal correlation in motion estimation and compensation. Experimental results demonstrate that the proposed technique achieves about 1.4% BD-rate reduction over the HEVC reference test model (HM 16.9) as an anchor in the Random Access (RA) coding scenario.
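The coding flow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain frame average stands in for the VI-CNN (whose architecture is not described here), and all function and variable names are invented for this sketch.

```python
import numpy as np

def synthesize_virtual_reference(fwd_ref, bwd_ref):
    """Stand-in for the VI-CNN: interpolate a virtual reference (VR) frame
    from one forward and one backward reconstructed reference frame.
    A simple pixel-wise average replaces the network, for illustration only."""
    acc = fwd_ref.astype(np.uint16) + bwd_ref.astype(np.uint16)
    return (acc // 2).astype(np.uint8)

def build_reference_lists(fwd_refs, bwd_refs):
    """Append the VR frame to both reference picture lists so that motion
    estimation and compensation can also predict from it (hypothetical
    list layout; HEVC list construction is more involved)."""
    vr = synthesize_virtual_reference(fwd_refs[-1], bwd_refs[0])
    list0 = fwd_refs + [vr]   # forward reference list (L0) plus VR frame
    list1 = bwd_refs + [vr]   # backward reference list (L1) plus VR frame
    return list0, list1, vr
```

In the paper, the VI-CNN output would replace the average, and the encoder then chooses predictors per block during motion estimation, which is where the reported BD-rate gain originates.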
KSP Keywords
Convolution neural network (CNN), Motion estimation (ME), Random Access, Reference frame, Temporal correlation, Video coding, Video interpolation, Estimation and compensation, Test model, Video frames, Video synthesis