ETRI Knowledge Sharing Platform



Journal article: Detecting User Attention to Video Segments using Interval EEG Features
Cited 8 times in Scopus
문진영, 권용진, 박종열, 윤완철
Expert Systems with Applications, v.115, pp.578-592
17HS3600, (Sub-project 1) Development of a High-Performance Visual Discovery Platform for Real-Time Understanding and Prediction of Large-Scale Video Data, 박종열
To manage voluminous viewed videos, which US adults watch at a rate of more than five hours per day on average, an automatic method of detecting highly attended video segments during video viewing is required to access them for fine-grained sharing and rewatching. Most electroencephalography (EEG)-based studies of user state analysis have addressed the recognition of attention-related states in a specific task condition, such as drowsiness during driving, attention during learning, and mental fatigue during task execution. In contrast to attention in a specific task condition, both inattention and normal attention are meaningless to viewers in terms of managing viewed videos, while detecting high attention paid to video segments would make a valuable contribution to an automatic management system of viewed videos based on viewer attention. To the best of our knowledge, this is the first EEG-based study of detecting viewer attention paid to video segments. This study describes how to collect video-induced EEG and attention data for video segments from viewers without bias to specific genres and how to construct a subject-independent detection model for the top 20% of viewer attention. The attention detection model using the proposed interval EEG features from 14 channels achieved the best average F1 score of 39.79% with an average accuracy of 52.96%. Additionally, this paper proposes a channel-based feature selection method that considers both the performances of single-channel models and their physical locations for investigating the group of channels relevant to attention detection. The attention detection models using the interval EEG features from all four or some of the channels located in the fronto-central, parietal, temporal, and occipital lobes of the left hemisphere achieved the best F1 score of 39.60% with an average accuracy of 48.70%. It is shown that these models achieve better performance than models using the features from all four or some of their symmetric channels in the right hemisphere and models using the features from six channels located in the anterior-frontal and frontal lobes of the left and right hemispheres. This paper shows the feasibility of subject-independent and genre-independent attention detection models using a wireless EEG headset with optimized channels; these models can be applied to an intelligent video management system based on viewer attention in real-world scenarios.
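Two ingredients of the evaluation described above can be sketched in a few lines: labeling the top 20% of viewer-attention scores as the positive class, and computing the F1 score used to compare channel groups. This is a minimal illustration, not the paper's exact procedure; the function names, the quantile-based threshold, and the synthetic inputs are assumptions.

```python
import numpy as np

def label_top_attention(scores, top_frac=0.2):
    """Label the top `top_frac` fraction of segment attention scores as
    high-attention (1) and the rest as not-high-attention (0)."""
    thresh = np.quantile(scores, 1.0 - top_frac)
    return (scores >= thresh).astype(int)

def f1_score(y_true, y_pred):
    """F1 score for the positive (high-attention) class."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-segment attention scores for one viewer.
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
labels = label_top_attention(scores)          # top 20% -> two positive segments

# Hypothetical predictions from a detection model.
preds = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])
print(f1_score(labels, preds))
```

A channel-based selection in the spirit of the paper would train one such model per EEG channel, rank channels by this F1 score, and group the top channels by their scalp location before retraining on the grouped features.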
KSP Suggested Keywords
Attention detection, Automatic method, Detection model, EEG features, EEG headset, Feature selection(FS), Left Hemisphere, Management system, Real-world, Right Hemisphere, Single Channel