ETRI Knowledge Sharing Platform


Video Generation and Synthesis Network for Long-term Video Interpolation
Cited 3 times in Scopus
Authors
Nayoung Kim, Jung Kyung Lee, Chae Hwa Yoo, Seunghyun Cho, Je-Won Kang
Issue Date
2018-11
Citation
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA-ASC) 2018, pp.705-709
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.23919/APSIPA.2018.8659743
Abstract
In this paper, we propose a bidirectional synthesis video interpolation technique based on deep learning, using a forward video generation network, a backward video generation network, and a synthesis network. The forward generation network first extrapolates a video sequence from the past video frames, and the backward generation network then generates the same video sequence from the future video frames. Next, a synthesis network fuses the results of the two generation networks to create an intermediate video sequence. To jointly train the video generation and synthesis networks, we define a cost function so that the visual quality and the motion of the interpolated video approximate those of the original video as closely as possible. Experimental results show that the proposed technique outperforms the state-of-the-art long-term video interpolation model based on deep learning.
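
The abstract describes the pipeline only at a high level. The PyTorch sketch below illustrates the idea: a forward generator extrapolates from past frames, a backward generator extrapolates from future frames, and a synthesis network fuses the two predictions, with all three trained jointly under a combined loss. The simple 3D-convolutional generators, the linear temporal projection, the L1 reconstruction term, and the temporal-difference motion term are illustrative assumptions; the paper's exact architectures and cost function are not given in the abstract.

# Minimal sketch of the bidirectional generation-and-synthesis idea.
# All layer choices and the loss terms are assumptions for illustration,
# not the paper's actual design.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Predicts an intermediate clip of out_frames from a context clip."""
    def __init__(self, in_frames: int, out_frames: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 3, kernel_size=3, padding=1),
        )
        # Map the temporal axis from in_frames to out_frames.
        self.time_proj = nn.Linear(in_frames, out_frames)

    def forward(self, clip):  # clip: (B, 3, T_in, H, W)
        feat = self.net(clip)               # (B, 3, T_in, H, W)
        feat = feat.permute(0, 1, 3, 4, 2)  # (B, 3, H, W, T_in)
        feat = self.time_proj(feat)         # (B, 3, H, W, T_out)
        return feat.permute(0, 1, 4, 2, 3)  # (B, 3, T_out, H, W)

class SynthesisNet(nn.Module):
    """Fuses the forward and backward predictions into one clip."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(6, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, fwd, bwd):
        return self.fuse(torch.cat([fwd, bwd], dim=1))

# Joint training step: past frames feed the forward generator, future
# frames the backward one, and the synthesis net fuses the two outputs.
T_in, T_mid = 4, 3
g_fwd = Generator3D(T_in, T_mid)
g_bwd = Generator3D(T_in, T_mid)
synth = SynthesisNet()
params = list(g_fwd.parameters()) + list(g_bwd.parameters()) + list(synth.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

past = torch.randn(2, 3, T_in, 64, 64)     # frames before the gap
future = torch.randn(2, 3, T_in, 64, 64)   # frames after the gap
target = torch.randn(2, 3, T_mid, 64, 64)  # ground-truth intermediate frames

pred = synth(g_fwd(past), g_bwd(future))

# Illustrative cost: L1 reconstruction plus a temporal-difference term
# standing in for the paper's motion constraint (an assumption).
recon = (pred - target).abs().mean()
motion = ((pred[:, :, 1:] - pred[:, :, :-1])
          - (target[:, :, 1:] - target[:, :, :-1])).abs().mean()
loss = recon + 0.5 * motion
loss.backward()
opt.step()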
KSP Keywords
Cost function, Interpolation model, Interpolation technique, Video generation, Video interpolation, Video sequences, Visual quality, Deep learning (DL), Model-based, State-of-the-art, Video frames