ETRI Knowledge Sharing Platform
Co-Speech Gesture Synthesis using Discrete Gesture Token Learning
Cited 2 times in Scopus
Authors
Shuhong Lu, Youngwoo Yoon, Andrew Feng
Issue Date
2023-10
Citation
International Conference on Intelligent Robots and Systems (IROS) 2023, pp.1-8
Publisher
IEEE
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/IROS55552.2023.10342027
Abstract
Synthesizing realistic co-speech gestures is an important yet unsolved problem for creating believable motions that can drive a humanoid robot to interact and communicate with human users. Such a capability would improve how human users perceive the robots and would find applications in education, training, and medical services. One challenge in learning a co-speech gesture model is that there may be multiple viable gesture motions for the same speech utterance. Deterministic regression methods cannot resolve these conflicting samples and may produce over-smoothed or damped motions. We propose a two-stage model that addresses this uncertainty in gesture synthesis by modeling gesture segments as discrete latent codes. In the first stage, our method uses a residual-quantized VAE (RQ-VAE) to learn a discrete codebook of gesture tokens from the training data. In the second stage, a two-level autoregressive transformer learns the prior distribution of the residual codes conditioned on the input speech context. Because inference is formulated as token sampling, multiple gesture sequences can be generated for the same speech input using top-k sampling. The quantitative results and a user study show that the proposed method outperforms previous methods and generates realistic and diverse gesture motions.
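To make the two ideas in the abstract concrete, below is a minimal sketch of (1) residual quantization, which turns a continuous gesture feature into a short stack of discrete token ids, and (2) top-k sampling from a token prior, which is what allows different gesture sequences to be drawn for the same speech input. All names, tensor sizes, and the random stand-in for the speech-conditioned transformer prior are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: residual quantization + top-k token sampling (assumed shapes).
import torch
import torch.nn.functional as F


def residual_quantize(z, codebooks):
    """Quantize frame features z (T, D) with a list of codebooks.

    Each codebook quantizes the residual left by the previous one, so each
    frame becomes a short sequence of discrete token ids (one per level).
    """
    residual = z
    ids, quantized = [], torch.zeros_like(z)
    for codebook in codebooks:                         # codebook: (K, D)
        dists = torch.cdist(residual.unsqueeze(0), codebook.unsqueeze(0))[0]
        idx = dists.argmin(dim=-1)                     # nearest code per frame
        code = codebook[idx]
        ids.append(idx)
        quantized = quantized + code
        residual = residual - code                     # pass residual down
    return torch.stack(ids, dim=-1), quantized         # (T, depth), (T, D)


def sample_top_k(logits, k=10, temperature=1.0):
    """Draw one token id per frame from the k most likely codes."""
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    probs = F.softmax(topk_vals / temperature, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx.gather(-1, choice).squeeze(-1)


if __name__ == "__main__":
    T, D, K, depth = 8, 32, 256, 2                     # frames, dim, codes, levels
    codebooks = [torch.randn(K, D) for _ in range(depth)]
    z = torch.randn(T, D)                              # stand-in encoder output
    token_ids, z_q = residual_quantize(z, codebooks)
    print("gesture tokens per frame:", token_ids.shape)  # (8, 2)

    # Stand-in for the speech-conditioned prior: random logits over the codebook.
    logits = torch.randn(T, K)
    sampled = sample_top_k(logits, k=10)
    print("sampled token ids:", sampled.shape)            # (8,)
```

In the paper's setting, the logits would come from the two-level autoregressive transformer conditioned on speech, and the sampled ids would be decoded back to motion through the RQ-VAE decoder; sampling with different random seeds then yields diverse gestures for the same utterance.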
KSP Keywords
First stage, Gesture synthesis, Medical Services, Speech input, Top-K, Two-level, Two-stage model, User study, human users, humanoid robot, prior distribution