ETRI Knowledge Sharing Platform


Details

Conference Paper: The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation
Cited 38 times in Scopus
Authors
윤영우, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov
Publication Date
November 2022
Source
International Conference on Multimodal Interaction (ICMI) 2022, pp.736-747
DOI
https://dx.doi.org/10.1145/3536221.3558058
Abstract
This paper reports on the second GENEA Challenge to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems was rendered to video using a standardised visualisation pipeline and evaluated in several large, crowdsourced user studies. Unlike when comparing different research papers, differences in results are here only due to differences between methods, enabling direct comparison between systems. This year's dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gesticulation. For each tier we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluations decouple human-likeness from gesture appropriateness, which previously was a major challenge in the field. The evaluation results are a revolution, and a revelation. Some synthetic conditions are rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never been shown before on a high-fidelity avatar. On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.
KSP Suggested Keywords
18 hours, Benchmark data, Data-Driven, Evaluation of data, Generation system, Gesture generation, High-fidelity, Human motion capture, Human-like, Speech Signals, Synthetic conditions