ETRI Knowledge Sharing Platform

Details

Conference Paper: Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection
Cited 23 times in Scopus · Downloaded 95 times
Authors
Marcella, Muhammad, Seung-Ik Lee
Publication Date
October 2021
Source
International Conference on Computer Vision Workshops (ICCVW) 2021, pp.207-214
DOI
https://dx.doi.org/10.1109/ICCVW54120.2021.00028
Project
21HS1600, Development of AI Technology for Guiding a Mobile Robot to Its Final Destination in Indoor/Outdoor Environments Based on Uncertain Maps, Jae-Young Lee
Abstract
Due to the limited availability of anomaly examples, video anomaly detection is often framed as a one-class classification (OCC) problem. A popular way to tackle this problem is to use an autoencoder (AE) trained only on normal data. At test time, the AE is then expected to reconstruct normal input well while reconstructing anomalies poorly. However, several studies show that, even when trained on normal data only, AEs can often start reconstructing anomalies as well, which degrades their anomaly detection performance. To mitigate this, we propose a temporal pseudo anomaly synthesizer that generates fake anomalies using only normal data. An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data. This way, the AE is encouraged to produce distinguishable reconstructions for normal and anomalous frames. Extensive experiments and analysis on three challenging video anomaly datasets demonstrate that our approach improves basic AEs, achieving superior performance over several existing state-of-the-art models.
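The abstract describes two components: a synthesizer that builds temporal pseudo anomalies from normal data, and a training objective that minimizes reconstruction error on normal clips while maximizing it on the synthesized ones. The sketch below illustrates this idea in NumPy; it is not the authors' implementation. The frame-skipping scheme, the hinge margin, and all function names are illustrative assumptions.

```python
import numpy as np

def synthesize_temporal_pseudo_anomaly(frames, skip=3):
    """Build a pseudo-anomalous clip from a normal one by skipping frames,
    which simulates abnormally fast motion (an assumed synthesis scheme)."""
    return frames[::skip]

def reconstruction_loss(recon, target):
    """Mean squared reconstruction error over a clip."""
    return float(np.mean((recon - target) ** 2))

def combined_objective(recon_normal, normal, recon_pseudo, pseudo, margin=1.0):
    """Minimize reconstruction error on normal clips while pushing the error
    on pseudo anomalies up to a margin (hinged so it stays bounded)."""
    l_normal = reconstruction_loss(recon_normal, normal)
    l_pseudo = reconstruction_loss(recon_pseudo, pseudo)
    return l_normal - min(l_pseudo, margin)

# Toy usage: 16 frames of 8x8 grayscale "video".
clip = np.zeros((16, 8, 8))
pseudo = synthesize_temporal_pseudo_anomaly(clip, skip=3)  # 6 frames kept
```

A lower objective value then corresponds to an AE that reconstructs normal clips faithfully while failing (by at least the margin) on pseudo anomalies, which is the separation the abstract aims for.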
KSP Suggested Keywords
End-to-end (E2E), One-class classification (OCC), Test time, Video anomaly detection, Detection performance, State-of-the-art