ETRI Knowledge Sharing Platform

Dynamic Subtitle Authoring Method based on Audio Analysis for the Hearing Impaired
Cited 6 times in Scopus
Authors
Wootaek Lim, Inseon Jang, Chunghyun Ahn
Issue Date
2014-07
Citation
Computers Helping People with Special Needs, Part 1 (LNCS 8547), pp.53-60
Publisher
Springer
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1007/978-3-319-08596-8_9
Abstract
Broadcasting and the Internet are such important parts of modern society that life without media is now unimaginable. However, hearing-impaired people have difficulty understanding media content because the audio information is lost to them. When subtitles are available, presenting them together with the video can help. In this paper, we propose a dynamic subtitle authoring method based on audio analysis for the hearing impaired. We analyze the audio signal and explore a set of audio features including short-time energy (STE), zero-crossing rate (ZCR), pitch, and MFCC. Using these features, we align the subtitles with the speech and map the extracted speech features to subtitle attributes such as text color, size, and thickness. Furthermore, the method highlights the text by aligning it with the voice and tags the speaker ID using speaker recognition. © 2014 Springer International Publishing.
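
The abstract describes frame-level audio analysis (STE, ZCR, pitch, MFCC) used to align subtitles with speech. The sketch below is an illustration only, not the authors' implementation: it shows how such features could be extracted with librosa, and the frame sizes, pitch range, and toy speech/non-speech mask are assumptions made for the example.

```python
# Minimal sketch of frame-level feature extraction (STE, ZCR, pitch, MFCC).
# Assumes numpy and librosa; file name and parameters are illustrative.
import numpy as np
import librosa

FRAME_LEN = 400   # 25 ms window at 16 kHz (assumed)
HOP_LEN = 160     # 10 ms hop (assumed)

def extract_features(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr, mono=True)

    # Short-time energy: sum of squared samples per analysis frame
    frames = librosa.util.frame(y, frame_length=FRAME_LEN, hop_length=HOP_LEN)
    ste = np.sum(frames ** 2, axis=0)

    # Zero-crossing rate per frame
    zcr = librosa.feature.zero_crossing_rate(
        y, frame_length=FRAME_LEN, hop_length=HOP_LEN)[0]

    # Fundamental frequency (pitch) per frame via the YIN estimator
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr,
                     frame_length=2048, hop_length=HOP_LEN)

    # 13 MFCCs per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=512, hop_length=HOP_LEN)

    return ste, zcr, f0, mfcc

def simple_speech_mask(ste, zcr):
    # Toy speech/non-speech decision (high energy, moderate ZCR);
    # the paper's actual alignment and styling logic is not specified here.
    n = min(len(ste), len(zcr))
    ste, zcr = ste[:n], zcr[:n]
    return (ste > np.percentile(ste, 60)) & (zcr < np.percentile(zcr, 80))
```

In a pipeline like the one the abstract outlines, per-frame features of this kind would drive subtitle timing (speech/non-speech segmentation) and styling (e.g., pitch or energy mapped to text size or thickness), with a separate speaker-recognition step supplying the speaker ID for coloring.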
KSP Keywords
audio features, audio information, audio signal, hearing impaired people, speech features, audio analysis, media content, speaker recognition