ETRI Knowledge Sharing Platform

Conference Paper Multi-scale and Multi-view Feature Blending for Free-viewpoint Rendering of Dynamic Humans
Cited 0 times in Scopus
Authors
Hyunwoo Park, Hyukmin Kwon, Seong Yong Lim, Wonjun Kim
Issue Date
2023-12
Citation
International Conference on Visual Communications and Image Processing (VCIP) 2023, pp.1-5
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/VCIP59821.2023.10402759
Abstract
With the great success of neural radiance fields (NeRF), human-specific NeRF has been actively studied in recent years. However, such human rendering techniques often have difficulty recovering subtle details of textured surfaces, such as wrinkles on clothes, which are mostly generated by complicated human motions. To solve this problem, we propose a new method for human-specific NeRF based on multi-scale and multi-view feature blending. Specifically, multi-scale image features sampled from different views are combined via a transformer architecture. The blended feature successfully guides the NeRF network to render photo-realistic results by enhancing local details of the human performer. Experimental results on the ZJU-MoCap dataset show that the proposed method outperforms previous methods in both qualitative and quantitative evaluations.
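The core idea of the abstract — blending per-point features drawn from multiple views and multiple scales with attention before feeding them to a NeRF-style network — can be illustrated with a minimal sketch. Everything below (NumPy, single-head dot-product attention as a stand-in for the paper's transformer, the array shapes, and all function names) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def blend_multiview_features(feats, query):
    """Blend features from multiple views and scales for one 3D sample
    point via single-head dot-product attention (a simplified stand-in
    for the paper's transformer; the real architecture is not given here).

    feats: (V*S, D) array -- V views x S pyramid scales, D channels.
    query: (D,) query vector (e.g. a learned blending token).
    Returns a single (D,) blended feature.
    """
    scores = feats @ query / np.sqrt(feats.shape[-1])  # (V*S,) similarities
    weights = softmax(scores)                          # attention weights, sum to 1
    return weights @ feats                             # (D,) weighted blend

# Toy example: 3 views x 2 scales, 8-dim features per sample point.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3 * 2, 8))
query = rng.standard_normal(8)
blended = blend_multiview_features(feats, query)
print(blended.shape)  # (8,)
```

In a full pipeline, one such blended feature would condition the NeRF MLP at each sampled 3D point; the multi-scale inputs would come from an image feature pyramid rather than random arrays.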
KSP Keywords
Free-Viewpoint rendering, Human motion, Image feature, Multi-scale, Multi-view features, new method, photo-realistic