ETRI Knowledge Sharing Platform

Conference Paper Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection
Cited 50 times in Scopus · Downloaded 148 times
Authors
Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
Issue Date
2021-10
Citation
International Conference on Computer Vision Workshops (ICCVW) 2021, pp.207-214
Language
English
Type
Conference Paper
DOI
https://dx.doi.org/10.1109/ICCVW54120.2021.00028
Abstract
Due to the limited availability of anomaly examples, video anomaly detection is often framed as a one-class classification (OCC) problem. A popular way to tackle it is to train an autoencoder (AE) only on normal data; at test time, the AE is then expected to reconstruct normal inputs well while reconstructing anomalies poorly. However, several studies show that, even when trained only on normal data, AEs often begin reconstructing anomalies as well, which degrades their anomaly detection performance. To mitigate this, we propose a temporal pseudo anomaly synthesizer that generates fake anomalies using only normal data. An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing it on normal data. This way, the AE is encouraged to produce distinguishable reconstructions for normal and anomalous frames. Extensive experiments and analysis on three challenging video anomaly datasets demonstrate that our approach improves basic AEs and achieves superiority over several existing state-of-the-art models.
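The training objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the frame-skipping synthesizer, the `skip` rate, and the hinge-style `margin` term used to cap how far the pseudo-anomaly loss is pushed are all assumptions made for the sketch.

```python
import numpy as np

def synthesize_temporal_pseudo_anomaly(frames, skip=3):
    # Hypothetical temporal synthesizer: sub-sample every `skip`-th frame
    # from a normal clip, producing unnaturally fast motion that serves
    # as a fake (pseudo) anomaly during training.
    return frames[::skip]

def training_loss(recon_normal, normal, recon_pseudo, pseudo, margin=1.0):
    # Minimize reconstruction error on normal clips while pushing the
    # error on pseudo-anomalous clips up toward an assumed margin
    # (a hinge keeps the "maximize" term bounded).
    l_normal = np.mean((recon_normal - normal) ** 2)
    l_pseudo = np.mean((recon_pseudo - pseudo) ** 2)
    return l_normal + max(0.0, margin - l_pseudo)
```

At test time, the per-frame reconstruction error of the trained AE would then serve directly as the anomaly score: normal frames reconstruct with low error, anomalous ones with high error.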
KSP Keywords
End-to-end (E2E), One-class classification (OCC), Test time, Video anomaly detection, Detection performance, State-of-the-art