ETRI Knowledge Sharing Platform

Details

Journal Paper
Source Separation Using Dilated Time-Frequency DenseNet for Music Identification in Broadcast Contents
Cited 12 times in Scopus · Downloaded 120 times
Authors
허운행 (Woon-Haeng Heo), 김혜미 (Hyemi Kim), 권오욱 (Oh-Wook Kwon)
Publication Date
March 2020
Source
Applied Sciences, v.10 no.5, pp.1-18
ISSN
2076-3417
Publisher
MDPI
DOI
https://dx.doi.org/10.3390/app10051727
Research Project
20IH2400, Development of Intelligent Micro-Identification Technology for Music and Video Monitoring, 박지현
Abstract
We propose a source separation architecture using a dilated time-frequency DenseNet for background music identification in broadcast content. We apply source separation techniques to mixed signals of music and speech. For this purpose, we propose a new architecture that adds a time-frequency dilated convolution to the conventional DenseNet in order to effectively enlarge the receptive field in the source separation scheme. In addition, we apply different convolutions to each frequency band of the spectrogram in order to reflect the distinct characteristics of the low- and high-frequency bands. To verify the performance of the proposed architecture, we perform singing-voice separation and music-identification experiments. As a result, we confirm that the proposed architecture achieves the best performance in both experiments because the dilated convolutions capture wide contextual information.
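
The sketch below is a minimal, illustrative PyTorch rendering (not the authors' released code) of the two ideas named in the abstract: a DenseNet-style block whose layers use dilated time-frequency convolutions to widen the receptive field, and separate convolutions for the low- and high-frequency halves of the spectrogram followed by soft-mask estimation. Layer counts, channel sizes, the band split point, and the class names are assumptions for illustration only.

import torch
import torch.nn as nn

class DilatedTFDenseBlock(nn.Module):
    # DenseNet-style block: each layer receives the concatenation of all previous
    # feature maps and applies a dilated 3x3 convolution over (frequency, time).
    def __init__(self, in_channels, growth_rate=16, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(num_layers):
            dilation = 2 ** i  # assumed exponentially growing dilation per layer
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=dilation, dilation=dilation),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class BandSplitSeparator(nn.Module):
    # Splits the magnitude spectrogram into low- and high-frequency bands, processes
    # each band with its own dense block, and predicts a soft mask for the target source.
    def __init__(self, split_bin=256, growth_rate=16, num_layers=4):
        super().__init__()
        self.split_bin = split_bin
        self.low_block = DilatedTFDenseBlock(1, growth_rate, num_layers)
        self.high_block = DilatedTFDenseBlock(1, growth_rate, num_layers)
        out_channels = 1 + growth_rate * num_layers
        self.mask_conv = nn.Conv2d(out_channels, 1, kernel_size=1)

    def forward(self, spec):  # spec: (batch, 1, freq_bins, time_frames)
        low, high = spec[:, :, :self.split_bin], spec[:, :, self.split_bin:]
        feat = torch.cat([self.low_block(low), self.high_block(high)], dim=2)
        mask = torch.sigmoid(self.mask_conv(feat))  # soft mask in [0, 1]
        return mask * spec                          # estimated source spectrogram

# Usage: separate a mixture spectrogram; the estimate would then feed a
# music-fingerprinting (identification) step.
mix = torch.randn(2, 1, 512, 128).abs()
estimate = BandSplitSeparator()(mix)
print(estimate.shape)  # torch.Size([2, 1, 512, 128])
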
KSP Suggested Keywords
Background music, Best performance, Contextual information, Different frequency, Dilated Convolution, Frequency characteristics, High Frequency(HF), Music identification, Receptive field, frequency band, mixed signal
This work is available under the Creative Commons Attribution (CC BY) license.