ETRI Knowledge Sharing Platform



Detailed Information

Journal Article: Text-driven Speech Animation with Emotion Control
Cited 1 time in Scopus · Downloaded 3 times
Authors
채원석 (Wonseok Chae), 김예진 (Yejin Kim)
Publication Date
2020-08
Source
KSII Transactions on Internet and Information Systems, v.14 no.8, pp.3473-3487
ISSN
1976-7277
Publisher
Korean Society for Internet Information (KSII)
DOI
https://dx.doi.org/10.3837/tiis.2020.08.018
Funded Project
20HH9100, Development of VR/AR virtual human object generation technology that responds to the environment, 윤여찬
Abstract
In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two example models called key visemes and expressions are used for lip-synchronization and facial expressions, respectively. The key visemes represent lip shapes of phonemes such as vowels and consonants while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on a phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesizing process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
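The pipeline in the abstract (interpolate key visemes for lip-sync, blend key expressions from an input parameter vector, then combine the two with a per-vertex importance scheme) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the 4-vertex toy face data, the linear viseme interpolation, the normalized weighted blend standing in for scattered data interpolation, and the `importance` mask are all assumptions introduced here for clarity.

```python
import numpy as np

# Toy stand-ins for the paper's example models: each key viseme is a small
# set of 3D vertex positions (4 vertices here); each key expression is a
# per-vertex offset from a neutral face. All values are illustrative.
key_visemes = {
    "A": np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]]),
    "O": np.array([[0.0, 0.2, 0.0],
                   [1.0, 0.1, 0.0],
                   [0.0, 1.2, 0.0],
                   [0.0, 0.0, 0.8]]),
}
key_expressions = {
    "joy":   np.full((4, 3),  0.1),
    "anger": np.full((4, 3), -0.1),
}

def interpolate_visemes(v0, v1, t):
    """Lip-sync pose at time t in [0, 1] between two key visemes
    (linear interpolation used here as a simple stand-in)."""
    return (1.0 - t) * key_visemes[v0] + t * key_visemes[v1]

def blend_expressions(weights):
    """Blend key-expression offsets from an input parameter vector.
    A normalized weighted sum stands in for the paper's scattered
    data interpolation."""
    total = sum(weights.values())
    return sum((w / total) * key_expressions[name] for name, w in weights.items())

def combine(lip_pose, expr_offset, importance):
    """Importance-based combination: vertices with importance near 1
    (e.g. the mouth region) follow lip-sync only; the remaining
    vertices also receive the blended expression offset."""
    return lip_pose + (1.0 - importance[:, None]) * expr_offset

# Example frame: halfway between visemes "A" and "O", mostly joyful,
# with the first two vertices treated as mouth-region (lip-sync only).
lip = interpolate_visemes("A", "O", 0.5)
expr = blend_expressions({"joy": 3.0, "anger": 1.0})
frame = combine(lip, expr, np.array([1.0, 1.0, 0.0, 0.0]))
```

Because each frame reduces to a few small vector blends, a per-frame cost compatible with the paper's real-time (over 120 Hz) claim is plausible even for much larger vertex counts.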
KSP Suggested Keywords
Basic emotions, Digital content, Emotion control, Emotional expression, High accuracy, Input parameters, New approach, Real-Time, Scattered data interpolation, Small set, Speech animation