ETRI Knowledge Sharing Platform

End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots
Authors
Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim
Issue Date
2018-11
Citation
International Conference on Social Robotics (ICSR) 2018: Workshop, pp.1-3
Language
English
Type
Conference Paper
Abstract
Co-speech gestures enhance interaction experiences between humans as well as between humans and robots. Existing robots rely on rule-based speech-gesture associations, which require substantial human labor and expert knowledge to implement. We present a learning-based co-speech gesture generation model trained on TED talks. The proposed end-to-end neural network consists of an encoder that understands the speech text and a decoder that generates a sequence of gestures. The model successfully produces various gestures, including iconic, metaphoric, deictic, and beat gestures. We also demonstrate co-speech gesture generation running in real time on a NAO robot.
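
The architecture named in the abstract, an encoder that reads the speech text and a decoder that emits a gesture sequence frame by frame, follows the familiar sequence-to-sequence pattern. Below is a minimal PyTorch sketch of that pattern only; the GRU layers, all layer sizes, the 10-dimensional pose vector, and the class names TextEncoder and GestureDecoder are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Encodes a word-id sequence into an initial decoder state (sizes are assumptions)."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, word_ids):
        # word_ids: (batch, n_words) integer token ids
        _, hidden = self.gru(self.embedding(word_ids))
        # merge forward/backward final states -> (1, batch, 2*hidden_dim)
        return torch.cat([hidden[0], hidden[1]], dim=-1).unsqueeze(0)

class GestureDecoder(nn.Module):
    """Emits one pose frame per step; pose_dim is a hypothetical joint-angle vector size."""
    def __init__(self, pose_dim=10, hidden_dim=200):
        super().__init__()
        self.gru = nn.GRU(pose_dim, 2 * hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, pose_dim)

    def forward(self, prev_pose, hidden):
        # prev_pose: (batch, 1, pose_dim); returns the next pose frame
        output, hidden = self.gru(prev_pose, hidden)
        return self.out(output), hidden

if __name__ == "__main__":
    encoder = TextEncoder(vocab_size=20000)
    decoder = GestureDecoder(pose_dim=10)

    words = torch.randint(0, 20000, (1, 12))   # one 12-word utterance
    hidden = encoder(words)

    pose = torch.zeros(1, 1, 10)                # start from a neutral pose
    frames = []
    for _ in range(30):                         # generate 30 pose frames
        pose, hidden = decoder(pose, hidden)
        frames.append(pose)
    motion = torch.cat(frames, dim=1)           # (1, 30, pose_dim)
    print(motion.shape)

Generating frames autoregressively, each step conditioned on the previous pose, is what makes streaming the output to a robot in real time plausible.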
KSP Keywords
End-to-End (E2E), End-to-end learning, Gesture generation, Human labor, Humanoid Robot, Learning-based, NAO Robot, Neural network model, Real-time, Rule-based, TED talks