ETRI Knowledge Sharing Platform

Detailed Information

Conference Paper: Dynamic Subtitle Authoring Method based on Audio Analysis for the Hearing Impaired
Cited 3 times in Scopus
Authors
임우택, 장인선, 안충현
Publication Date
2014.07
Source
Computers Helping People with Special Needs, Part 1 (LNCS 8547), pp.53-60
DOI
https://dx.doi.org/10.1007/978-3-319-08596-8_9
Project
14MR1400, Development of Emotion-Based User-Customized UI/UX Broadcasting System Technology, 안충현
Abstract
Broadcasting and the Internet are such important parts of modern society that life without media is now unimaginable. However, hearing-impaired people have difficulty understanding media content due to the loss of audio information. When subtitles are available, watching video with subtitles can be helpful. In this paper, we propose a dynamic subtitle authoring method based on audio analysis for the hearing impaired. We analyze the audio signal and explore a set of audio features that includes short-time energy (STE), zero-crossing rate (ZCR), pitch, and Mel-frequency cepstral coefficients (MFCC). Using these features, we align the subtitles with the speech and map the extracted speech features to subtitle rendering attributes such as text color, size, and thickness. Furthermore, the method highlights the text by aligning it with the voice and tags the speaker ID using speaker recognition. © 2014 Springer International Publishing.
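A minimal Python sketch of the kind of per-frame audio analysis the abstract describes, assuming the librosa library is available: it extracts the named features (STE, ZCR, pitch, MFCC) and maps them to subtitle text styles. This is not the authors' implementation; the thresholds, style attributes, and function names are illustrative assumptions.

import librosa
import numpy as np

def extract_features(wav_path, frame_length=2048, hop_length=512):
    # Load audio and compute the frame-level features named in the abstract.
    y, sr = librosa.load(wav_path, sr=None)
    frames = librosa.util.frame(y, frame_length=frame_length, hop_length=hop_length)
    ste = np.sum(frames ** 2, axis=0)            # short-time energy per frame
    zcr = librosa.feature.zero_crossing_rate(
        y, frame_length=frame_length, hop_length=hop_length)[0]
    pitch = librosa.yin(y, fmin=65, fmax=500, sr=sr,
                        frame_length=frame_length, hop_length=hop_length)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop_length)
    return ste, zcr, pitch, mfcc

def style_for_frame(ste, pitch, loud_threshold=0.5, high_pitch_hz=250.0):
    # Hypothetical mapping from speech features to subtitle rendering attributes.
    style = {"color": "white", "size": "medium", "weight": "normal"}
    if ste > loud_threshold:      # louder speech -> larger, bolder text
        style["size"] = "large"
        style["weight"] = "bold"
    if pitch > high_pitch_hz:     # higher pitch -> different text color
        style["color"] = "yellow"
    return style

In a full pipeline, frame-level styles like these would be aggregated over each subtitle's time span and combined with speaker-recognition output to tag the speaker ID for that subtitle.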
KSP Suggested Keywords
Audio features, Audio information, Audio signal, Hearing-impaired people, Speech features, Audio analysis, Media content, Speaker recognition