ETRI Knowledge Sharing Platform

Text-driven Speech Animation with Emotion Control
Authors
Wonseok Chae, Yejin Kim
Issue Date
2020-08
Citation
KSII Transactions on Internet and Information Systems, v.14, no.8, pp.3473-3487
ISSN
1976-7277
Publisher
Korean Society for Internet Information (KSII)
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.3837/tiis.2020.08.018
Project Code
20HH9100, Proactive interaction based VR/AR virtual human object creation technology, Yoon Yeo Chan
Abstract
In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on this transcript, a speech animation sequence is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During synthesis, an importance-based scheme combines both lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
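The abstract does not specify the implementation, so the following is only a minimal sketch of the pipeline it describes: linear interpolation between key visemes along a TTS phonetic transcript, inverse-distance weighting as one simple form of scattered data interpolation for blending key expressions from an input parameter vector, and a per-vertex importance mask to combine lip synchronization with the emotional expression. All names, shapes, phoneme labels, and parameter values below are hypothetical stand-ins, not the paper's actual data or method.

```python
import numpy as np

# Toy stand-ins for the paper's example models (all values hypothetical).
rng = np.random.default_rng(0)
NUM_VERTS = 9  # tiny face mesh: one scalar offset per vertex for illustration
KEY_VISEMES = {p: rng.standard_normal(NUM_VERTS)
               for p in ("sil", "M", "AA", "IY")}         # lip shapes per phoneme
KEY_EXPRESSIONS = {e: rng.standard_normal(NUM_VERTS)
                   for e in ("neutral", "happy", "sad")}  # basic emotions
# Hypothetical 2-D emotion parameter attached to each key expression; the
# input parameter vector is interpolated over these scattered points.
EXPR_PARAMS = {"neutral": np.array([0.0, 0.0]),
               "happy": np.array([0.9, 0.6]),
               "sad": np.array([-0.8, -0.4])}

def lipsync_pose(transcript, t):
    """Interpolate between the two key visemes bracketing time t.
    transcript: list of (phoneme, start_time) pairs from a TTS system."""
    for (p0, t0), (p1, t1) in zip(transcript, transcript[1:]):
        if t0 <= t < t1:
            a = (t - t0) / (t1 - t0)
            return (1.0 - a) * KEY_VISEMES[p0] + a * KEY_VISEMES[p1]
    return KEY_VISEMES[transcript[-1][0]]  # past the last phoneme

def expression_pose(param, power=2.0):
    """Blend key expressions by inverse-distance weighting, one simple
    instance of scattered data interpolation over the parameter space."""
    weights, shapes = [], []
    for name, p in EXPR_PARAMS.items():
        d = np.linalg.norm(param - p)
        if d < 1e-8:
            return KEY_EXPRESSIONS[name]  # exactly on a key expression
        weights.append(d ** -power)
        shapes.append(KEY_EXPRESSIONS[name])
    w = np.array(weights) / np.sum(weights)
    return w @ np.array(shapes)

def combined_pose(transcript, t, emotion_param, mouth_importance):
    """Importance-based combination: vertices weighted toward the mouth
    follow the lip-sync pose; the rest follow the emotional expression."""
    lips = lipsync_pose(transcript, t)
    expr = expression_pose(emotion_param)
    return mouth_importance * lips + (1.0 - mouth_importance) * expr

# Usage: one frame of the word "ma", spoken with a mildly happy emotion.
transcript = [("sil", 0.0), ("M", 0.10), ("AA", 0.25), ("sil", 0.50)]
importance = np.linspace(1.0, 0.0, NUM_VERTS)  # toy per-vertex mouth mask
frame = combined_pose(transcript, 0.18, np.array([0.5, 0.3]), importance)
print(frame.round(2))
```

Because each frame reduces to a few small weighted sums, a per-frame evaluation like this is cheap enough to run well above the real-time rate the abstract reports; the paper's own interpolation and importance scheme may differ in detail.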
KSP Keywords
Basic emotions, Digital content, Emotion control, Emotional expression, High accuracy, Input parameters, New approach, Real-Time, Scattered data interpolation, Small set, Speech animation