ETRI Knowledge Sharing Platform


Details

Conference Paper: Supervised Scene Boundary Detection with Relational and Sequential Information
Cited 2 times in Scopus · Downloaded 2 times
Authors
손정우, 이호재, 곽창욱, 김선중
Date of Publication
December 2020
Source
International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) 2020, pp.250-258
DOI
https://dx.doi.org/10.1109/WIIAT50758.2020.00037
Project
20ZH1200, Research on Core Technologies for Hyper-Realistic Immersive Spatial Media and Content, 이태진
Abstract
This paper proposes a novel scene boundary detector that considers the different features appropriate for changing definitions of scenes across target services or tasks. In the proposed method, the information in shots is categorized into two groups: relational and sequential information. Relational information is acquired by multi-layered convolutional neural networks that merge and embed similarity vectors from visual and audio features. Sequential information, which contains particular patterns of continuous shots, is handled by dual recurrent neural networks. Different definitions of scenes are reflected in the proposed method through supervised parameter estimation with a sampling method. Scene boundaries are rarely observed in video content, which results in a skewed class distribution. The sampling method expands the set of scene-boundary instances using reverse-order shots, while it reduces the number of non-boundary shots by variance-preserved shot filtering. A focal loss is adopted in the training process to obtain better parameters from an imbalanced dataset. The proposed method is evaluated on three datasets constructed from real-world movies. Experiments empirically show that different definitions of scene boundary can affect the performance of scene boundary detection. The proposed deep neural networks, which exploit both relational and sequential information, demonstrate the ability to handle diverse scene definitions. With supervised learning, the proposed method can reflect the definition bias in each dataset. As a result, the proposed method shows its effectiveness in handling different types of information and adapting to other scene definitions, achieving state-of-the-art performance on two benchmark datasets.
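The two imbalance-handling ideas in the abstract — expanding rare boundary samples with reverse-order shots and training with a focal loss — can be illustrated with a minimal sketch. All names and hyperparameters below (`gamma=2.0`, `alpha=0.25`, taken from the original focal-loss formulation by Lin et al.) are illustrative assumptions, not the settings reported in this paper.

```python
import math

def augment_boundary_windows(windows):
    """Expand rare boundary samples: for each window of shots around a
    scene boundary, also emit the same window in reverse shot order."""
    return windows + [list(reversed(w)) for w in windows]

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.
    p: predicted probability that the shot is a scene boundary.
    y: ground-truth label (1 = boundary, 0 = non-boundary).
    Down-weights easy, well-classified examples so training focuses
    on hard ones -- useful when boundaries are heavily outnumbered."""
    p_t = p if y == 1 else 1.0 - p           # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With this loss, a confident correct prediction (e.g. `p=0.9`, `y=1`) contributes far less to the total loss than a confident wrong one, which is the intended effect on a skewed class distribution.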
KSP Suggested Keywords
Audio features, Benchmark datasets, Convolutional neural network (CNN), Deep neural network (DNN), Parameter estimation, Real-world, Recurrent neural network (RNN), Sequential information, Skewed class distribution, Supervised learning, Video content