ETRI Knowledge Sharing Platform

3D Clothed Human Reconstruction from Sparse Multi-View Images
Cited 1 time in Scopus. Downloaded 101 times.
Authors
Jin Gyu Hong, Seung Young Noh, Hee Kyung Lee, Won Sik Cheong, Ju Yong Chang
Issue Date
2024-06
Citation
Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2024, pp.677-687
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/CVPRW63382.2024.00072
Abstract
Clothed human reconstruction based on implicit functions has recently received considerable attention. In this study, we experimentally explore the most effective method for fusing 2D features from multi-view inputs and propose a method that utilizes a coarse 3D volume predicted by the network to provide a better 3D prior. We fuse the 2D features with an attention-based method to obtain detailed geometric predictions. In addition, we propose depth and color projection networks that predict a coarse depth volume and a coarse color volume from the input RGB images and depth maps, respectively. The coarse depth volume and coarse color volume are used as 3D priors for predicting occupancy and texture, respectively. Furthermore, we combine the fused 2D features with 3D features extracted from our 3D priors to predict occupancy, and we propose a technique that adjusts the influence of the 2D and 3D features using learnable weights. The effectiveness of our method is demonstrated through qualitative and quantitative comparisons with recent multi-view clothed human reconstruction models.
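To make the abstract's two main ideas concrete, the following is a minimal PyTorch sketch of (1) attention-based fusion of per-view 2D features sampled for a 3D query point and (2) blending the fused 2D feature with a 3D feature from the coarse volumes via learnable weights before an occupancy MLP. All module names, dimensions, pooling choices, and layer sizes here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of attention-based multi-view feature fusion and learnable 2D/3D
# feature weighting for occupancy prediction. Assumed, not the paper's code.
import torch
import torch.nn as nn


class AttentionViewFusion(nn.Module):
    """Fuse per-view 2D features for each query point with self-attention
    across views, then mean-pool over views (an assumed pooling scheme)."""

    def __init__(self, feat_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_points, num_views, feat_dim), one feature per view
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        return fused.mean(dim=1)  # (num_points, feat_dim)


class OccupancyHead(nn.Module):
    """Predict occupancy from 2D and 3D features mixed by learnable weights."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Learnable logits controlling the influence of 2D vs. 3D features.
        self.mix_logits = nn.Parameter(torch.zeros(2))
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.mix_logits, dim=0)  # normalized mixing weights
        mixed = w[0] * feat_2d + w[1] * feat_3d
        return self.mlp(mixed)  # (num_points, 1) occupancy in [0, 1]


# Usage on dummy data: 1024 query points, 4 views, 256-dim features.
fusion, head = AttentionViewFusion(), OccupancyHead()
feats_2d = fusion(torch.randn(1024, 4, 256))  # attention-fused 2D features
occ = head(feats_2d, torch.randn(1024, 256))  # blend with 3D-prior features
print(occ.shape)  # torch.Size([1024, 1])
```

In this sketch the softmax over two learnable logits plays the role of the adjustable 2D/3D influence described in the abstract; the paper's actual weighting scheme, feature dimensions, and how 3D features are sampled from the coarse depth and color volumes may differ.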
KSP Keywords
2D features, 2D-3D, 3D feature, Color volume, Depth Map, Feature fusion, Fusion method, Implicit function, RGB image, depth and color, multi-view images