ETRI Knowledge Sharing Platform

Details

Type
Journal Article
Title
Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity
Cited 127 times in Scopus. Downloaded 22 times.
Authors
Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee
Publication Date
November 2020
Source
ACM Transactions on Graphics, v.39 no.6, pp.1-16
ISSN
0730-0301
Publisher
ACM
DOI
https://dx.doi.org/10.1145/3414685.3417838
Project
20HS2500, Development of Real-Environment Human-Care Robot Technology for the Aging Society, Jaeyeon Lee
Abstract
For human-like agents, including virtual avatars and social robots, making proper gestures while speaking is crucial in human-agent interaction. Co-speech gestures enhance interaction experiences and make the agents look alive. However, it is difficult to generate human-like gestures due to the lack of understanding of how people gesture. Data-driven approaches attempt to learn gesticulation skills from human demonstrations, but the ambiguous and individual nature of gestures hinders learning. In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures. By incorporating a multimodal context and an adversarial training scheme, the proposed model outputs gestures that are human-like and that match the speech content and rhythm. We also introduce a new quantitative evaluation metric for gesture generation models. Experiments with the introduced metric and subjective human evaluation showed that the proposed gesture generation model is better than existing end-to-end generation models. We further confirm that our model is able to work with synthesized audio in a scenario where contexts are constrained, and show that different gesture styles can be generated for the same speech by specifying different speaker identities in the style embedding space that is learned from videos of various speakers. All of the code and data are available at https://github.com/ai4r/Gesture-Generation-from-Trimodal-Context.
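The abstract describes the model only at a high level. Below is a minimal, illustrative PyTorch sketch of one way a generator could fuse the three modalities (text, audio, and a learned speaker-style embedding) into per-frame pose output. Every module name, dimension, and structural choice here is an assumption made for illustration; it is not the authors' implementation, which is available in the linked GitHub repository.

# Illustrative sketch only -- NOT the authors' implementation.
# A hypothetical trimodal gesture generator: text, audio, and a learned
# speaker-style embedding are fused per time step and decoded into poses.
import torch
import torch.nn as nn

class TrimodalGestureGenerator(nn.Module):
    def __init__(self, vocab_size, n_speakers,
                 text_dim=128, audio_dim=128, style_dim=16, pose_dim=27):
        super().__init__()
        # Text branch: word embeddings encoded with a bidirectional GRU.
        self.word_emb = nn.Embedding(vocab_size, text_dim)
        self.text_enc = nn.GRU(text_dim, text_dim // 2,
                               bidirectional=True, batch_first=True)
        # Audio branch: a small temporal conv stack over mel-spectrogram frames.
        self.audio_enc = nn.Sequential(
            nn.Conv1d(80, audio_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(audio_dim, audio_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Speaker identity: a learned style-embedding table; picking different
        # points in this space yields different gesture styles for the same speech.
        self.style_emb = nn.Embedding(n_speakers, style_dim)
        # Decoder: fused per-frame features -> one pose vector per frame.
        self.decoder = nn.GRU(text_dim + audio_dim + style_dim, 256,
                              batch_first=True)
        self.out = nn.Linear(256, pose_dim)

    def forward(self, word_ids, audio_feats, speaker_ids):
        # word_ids: (B, T) token ids aligned to the T output frames
        # audio_feats: (B, T, 80) mel-spectrogram frames
        # speaker_ids: (B,) integer speaker labels
        text_h, _ = self.text_enc(self.word_emb(word_ids))     # (B, T, text_dim)
        audio_h = self.audio_enc(audio_feats.transpose(1, 2))  # (B, audio_dim, T)
        audio_h = audio_h.transpose(1, 2)                      # (B, T, audio_dim)
        style = self.style_emb(speaker_ids)                    # (B, style_dim)
        style = style.unsqueeze(1).expand(-1, text_h.size(1), -1)
        fused = torch.cat([text_h, audio_h, style], dim=-1)    # (B, T, fused_dim)
        dec_h, _ = self.decoder(fused)
        return self.out(dec_h)                                 # (B, T, pose_dim)

In the paper's adversarial training scheme, a discriminator that judges whether generated pose sequences look human-like would supply an additional loss on top of a generator of this shape; the sketch omits that component and the proposed evaluation metric.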
KSP Suggested Keywords
Adversarial Training, Data-driven approach, Embedding space, End to End(E2E), Generation model, Gesture generation, Human evaluation, Human-Agent Interaction, Human-like, Proposed model, Virtual avatar