ETRI Knowledge Sharing Platform

Instance-aware Contrastive Learning for Occluded Human Mesh Reconstruction
Cited 2 times in Scopus · Downloaded 80 times
Authors
Mi-Gyeong Gwon, Gi-Mun Um, Won-Sik Cheong, Wonjun Kim
Issue Date
2024-06
Citation
Conference on Computer Vision and Pattern Recognition (CVPR) 2024, pp.10553-10562
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/CVPR52733.2024.01004
Abstract
A simple yet effective method for occlusion-robust 3D human mesh reconstruction from a single image is presented in this paper. Although many recent studies have shown remarkable improvements in human mesh reconstruction, it is still difficult to generate accurate meshes under person-to-person occlusion due to the ambiguity of which body a given part belongs to. To address this problem, we propose an instance-aware contrastive learning scheme. Specifically, joint features belonging to the target human are trained to be proximate to the center feature (i.e., the feature extracted at the body center position). On the other hand, center features of different human instances are forced to be far apart so that the joint features of each person can be clearly distinguished from those of others. By interpreting joint possession through this contrastive learning scheme, the proposed method easily understands the spatial occupancy of body parts for each person in a given image, and thus can reconstruct reliable human meshes even when multiple persons severely overlap. Experimental results on benchmark datasets demonstrate the robustness of the proposed method compared to previous approaches under person-to-person occlusions. The code and model are publicly available at: https://github.com/DCVL-3D/InstanceHMR-release.
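The abstract describes a two-part contrastive objective: joint features are pulled toward the center feature of the person they belong to, while center features of different instances are pushed apart. The snippet below is a minimal NumPy sketch of such an objective, not the paper's actual implementation; the function name, the softmax-style pull term, the softplus-style push term, and the temperature value are all illustrative assumptions.

```python
import numpy as np

def instance_contrastive_loss(joint_feats, instance_ids, center_feats, tau=0.1):
    """Illustrative instance-aware contrastive objective (not the paper's code).

    joint_feats  : (J, D) features sampled at joint positions
    instance_ids : (J,)   index of the person each joint belongs to
    center_feats : (P, D) features sampled at each person's body-center position
    tau          : assumed temperature hyperparameter
    """
    # Normalize so dot products act as cosine similarities.
    jf = joint_feats / np.linalg.norm(joint_feats, axis=1, keepdims=True)
    cf = center_feats / np.linalg.norm(center_feats, axis=1, keepdims=True)

    # Pull: each joint feature should match its own person's center feature,
    # scored with a softmax cross-entropy over all centers.
    sim = jf @ cf.T / tau                                   # (J, P)
    logits = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pull = -log_probs[np.arange(len(jf)), instance_ids].mean()

    # Push: centers of different instances should be dissimilar;
    # penalize pairwise center similarity with a softplus.
    cc = cf @ cf.T / tau
    off_diag = cc[~np.eye(len(cf), dtype=bool)]
    push = np.logaddexp(0.0, off_diag).mean()

    return pull + push
```

Under this sketch, assigning joints to the wrong person's center raises the pull term, which is the mechanism the abstract credits for disambiguating body-part ownership under overlap.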
KSP Keywords
Benchmark datasets, Body parts, Joint features, Mesh reconstruction, Single image