ETRI Knowledge Sharing Platform

Egocentric 3D Pose Estimation with Temporally Guided Pose Refinement
Authors
Seongmin Baek, Youn-Hee Gil, Hyunjun Lim
Issue Date
2025-12
Citation
ACM SIGGRAPH Asia (SA) 2025, pp.1-3
Publisher
ACM
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1145/3757374.3771474
Abstract
Human motion is inherently temporally continuous, yet single-frame pose estimation ignores this continuity. The problem is further compounded in egocentric views, which suffer from severe self-occlusions and a limited field of view (FoV), making per-frame inference unstable. We address these issues with a two-stage pipeline: a causal Temporal Convolutional Network (TCN) over the previous T frames predicts a motion-conditioned pose prior, and a single-layer Transformer refines this prior with current egocentric visual features to produce the final 3D pose. This design yields accurate and temporally coherent egocentric pose estimation with minimal latency.
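The two stages described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the joint count, temporal window, feature dimensions, and random weights are all assumptions, the causal TCN is reduced to a single causal 1D convolution producing the prior for the current frame, and the single-layer Transformer is reduced to one cross-attention step in which the pose prior queries the current visual features.

```python
import numpy as np

rng = np.random.default_rng(0)
J, D = 17, 3              # assumed skeleton: 17 joints, 3D coordinates
P = J * D                 # flattened pose dimension
T, K_SZ = 8, 3            # assumed temporal window and conv kernel size
N, F = 4, 32              # assumed: N visual-feature tokens of dim F
d = 16                    # assumed attention dimension

def causal_tcn_prior(past, w):
    """Causal temporal conv: the prior for the current frame depends only
    on the last K_SZ of the previous T poses (no future frames)."""
    k = w.shape[0]
    padded = np.vstack([np.zeros((k - 1, P)), past])   # left-pad for causality
    return sum(padded[past.shape[0] - 1 + i] @ w[i] for i in range(k))

def attention_refine(prior, visual, wq, wk, wv):
    """Single cross-attention layer: the motion-conditioned prior (query)
    attends over current egocentric visual features (keys/values), and the
    attended correction is added residually to the prior."""
    q = prior @ wq                          # (d,)
    Km = visual @ wk                        # (N, d)
    Vm = visual @ wv                        # (N, P)
    logits = Km @ q / np.sqrt(d)
    a = np.exp(logits - logits.max())
    a /= a.sum()                            # softmax attention weights
    return prior + a @ Vm                   # refined 3D pose (flattened)

# Toy forward pass with random weights and inputs.
past = rng.standard_normal((T, P))          # previous T flattened poses
w_tcn = rng.standard_normal((K_SZ, P, P)) * 0.1
prior = causal_tcn_prior(past, w_tcn)       # stage 1: pose prior

visual = rng.standard_normal((N, F))        # current-frame visual features
wq = rng.standard_normal((P, d)) * 0.1
wk = rng.standard_normal((F, d)) * 0.1
wv = rng.standard_normal((F, P)) * 0.1
refined = attention_refine(prior, visual, wq, wk, wv)  # stage 2: final pose
```

Because the convolution is strictly causal and the attention layer is shallow, each frame can be processed as soon as it arrives, which is consistent with the minimal-latency claim in the abstract.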
KSP Keywords
3D pose estimation, Convolutional networks, Human motion, Single-layer, Two-Stage, Visual Features