ETRI Knowledge Sharing Platform

Details

Conference Paper  Pixel-based Continuous State Prediction with Perceptual Loss
Cited 1 time in Scopus · Downloaded 4 times
Authors
이동훈, 박준희
Publication Date
October 2021
Source
International Conference on Information and Communication Technology Convergence (ICTC) 2021, pp.1117-1119
DOI
https://dx.doi.org/10.1109/ICTC52510.2021.9621189
Research Project
21ZR1100, Research on hyper-connected intelligence technology that autonomously connects, controls, and evolves, 박준희
Abstract
Predicting the future is an essential but challenging process because of the inherently uncertain dynamics of natural systems. In this paper, a perceptual loss is used to reduce the uncertainty and blurriness of video prediction within a stochastic approach. The high-dimensional input is reduced to a latent variable using a variant of the beta variational autoencoder. The latent variable captures temporal information from consecutive state images and is used to predict future states from past ones. A perceptual loss, rather than a per-pixel loss, is computed between the reconstructed image and the original image so that high-level image features are compared. The perceptual loss, obtained from a pre-trained network, is combined with the KL divergence loss during training. The proposed algorithm shows improved reconstruction ability and yields clearer future-state predictions.
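The training objective described in the abstract, a perceptual (feature-space) reconstruction term combined with a beta-weighted KL divergence term, can be sketched as below. This is a minimal illustration assuming a PyTorch/torchvision setup; the frozen VGG16 feature extractor, the layer cut-off, and the `beta` weight are assumptions made for the sketch and are not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): perceptual reconstruction
# loss from a frozen, pre-trained VGG16 plus a beta-weighted KL divergence
# term for a VAE-style latent, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class PerceptualLoss(nn.Module):
    """Compare images in the feature space of a frozen, pre-trained VGG16."""

    def __init__(self, layer_index: int = 16):  # layer cut-off is an assumption
        super().__init__()
        vgg_features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.features = nn.Sequential(*list(vgg_features.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # keep the feature extractor frozen

    def forward(self, reconstruction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # MSE between high-level feature maps instead of raw pixels.
        # Assumes 3-channel images scaled to the range VGG expects.
        return F.mse_loss(self.features(reconstruction), self.features(target))


def kl_divergence(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior (mean-reduced)."""
    return -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())


def combined_loss(reconstruction, target, mu, logvar, perceptual: PerceptualLoss,
                  beta: float = 4.0) -> torch.Tensor:
    """Perceptual reconstruction term plus beta-weighted KL term."""
    return perceptual(reconstruction, target) + beta * kl_divergence(mu, logvar)
```

In use, the decoder's reconstructed frame and the encoder's posterior parameters (mu, logvar) would be passed to combined_loss at each training step. The encoder/decoder architecture and the way temporal information from consecutive frames enters the latent variable are described only at a high level in the abstract, so they are omitted here.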
KSP Suggested Keywords
Continuous state, High dimensionality, Image feature, KL divergence, Latent Variable, Pixel-based, Reconstruction ability, State Prediction, Stochastic approach, Uncertain model, Video prediction