This presentation covers training an end-to-end deep network for anomaly detection using an adversarially learned one-class classifier. The method learns the concept characterized by the positive (target-class) samples and learns to reconstruct them. Experiments include outlier detection on the MNIST and Caltech-256 datasets and video anomaly detection on UCSD Ped2. Training the two networks jointly lets them compete during training and collaborate during testing, which enhances discrimination power. The study opens avenues for exploring other noise types and latent representations.
Adversarially Learned One-Class Classifier for Novelty Detection
Computer Vision and Pattern Recognition (CVPR) 2018
eadeli@cs.stanford.edu
Problem Statement (One-Class Classification)
• Applications:
  • Novelty detection
  • Outlier detection
  • Anomaly detection
(Figure: training on target-class samples only vs. testing on a mix of target and novel samples.)
Challenges
• In reality, the novelty class is
  • absent during training,
  • poorly sampled, or
  • not well defined.
• There are no samples to train on, or too few (a highly imbalanced classification problem), and it may even be unclear what counts as novelty.
• Due to the unavailability of data from the novelty class, training an end-to-end deep network is a cumbersome task.
Method
• D (the novelty detector, a CNN that outputs a class label) learns the concept characterized by the space of all positive samples.
• R (the reconstructor, an encoder-decoder) learns to reconstruct the positive samples (samples X from the target class); if exposed to negative samples (novelty), R decimates/distorts them.
• The two networks compete during training, but collaborate during testing; using both networks at testing time improves the results (see the sketch below).
(Figure: a target-class training sample X is passed through the encoder-decoder R, and R's output is fed to the CNN D, which produces a class label.)
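Below is a minimal sketch of the two networks; PyTorch, the layer sizes, and the 28x28 single-channel input are illustrative assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    """R: encoder-decoder that learns to reconstruct target-class samples."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Detector(nn.Module):
    """D: CNN that scores how likely the input is to belong to the target class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```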
Method (cont'd)
• Better discrimination power: the output of R is more separable than the original input images, so feeding R's reconstructions to D sharpens the decision (see the joint-training summary below).
Joint Training (Summary)
• R behaves similarly to a denoising autoencoder, but for a single target concept.
• Given an outlier or novelty sample (a new concept), R does not know what to do with it and maps it to an unknown distribution.
• D is trained only to detect target samples, not novelty samples.
• The output of R is more separable than the original input images.
A hedged sketch of the joint training step and the test-time scoring is shown below.
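In this sketch, D is pushed to accept real target samples and reject R's outputs, while R is pushed to fool D and reconstruct its input; at test time the two collaborate and the score is D(R(x)). The loss weight `lambda_rec`, the optimizers, and the use of plain MSE are assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(R, D, x, opt_R, opt_D, lambda_rec=0.4):
    """One adversarial training step on a batch x of target-class samples."""
    real = torch.ones(x.size(0), 1, device=x.device)
    fake = torch.zeros(x.size(0), 1, device=x.device)
    x_hat = R(x)

    # Update D: real target samples -> 1, reconstructed samples -> 0.
    opt_D.zero_grad()
    loss_D = (F.binary_cross_entropy(D(x), real)
              + F.binary_cross_entropy(D(x_hat.detach()), fake))
    loss_D.backward()
    opt_D.step()

    # Update R: fool D while reconstructing the target samples.
    opt_R.zero_grad()
    loss_R = (F.binary_cross_entropy(D(x_hat), real)
              + lambda_rec * F.mse_loss(x_hat, x))
    loss_R.backward()
    opt_R.step()
    return loss_R.item(), loss_D.item()

def novelty_score(R, D, x):
    """Test-time collaboration: pass x through R, then score it with D.
    A low D(R(x)) suggests x is an outlier/novel sample."""
    with torch.no_grad():
        return D(R(x))
```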
Experiments
• Outlier Detection (MNIST)
  • The network is trained to detect each digit separately.
  • The other digits pose as outliers.
A sketch of this one-vs-rest protocol is shown below.
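This sketch builds the one-vs-rest split with torchvision's MNIST loader; the helper name and the data path are illustrative assumptions.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

def mnist_one_class_split(target_digit=1, root="./data"):
    """Train on the target digit only; evaluate on all digits with binary labels."""
    to_tensor = transforms.ToTensor()
    train = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
    test = datasets.MNIST(root, train=False, download=True, transform=to_tensor)

    # Training set: only the target digit (the positive/target concept).
    train_idx = (train.targets == target_digit).nonzero(as_tuple=True)[0]
    train_target = Subset(train, train_idx.tolist())

    # Test set: every digit, with inlier labels derived from the target digit.
    inlier_labels = (test.targets == target_digit).long()
    return train_target, test, inlier_labels
```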
Separability of classes (before and after the reconstruction step)
(Figure: score distributions of inliers vs. outliers.)
Experiments (cont'd)
• Outlier Detection (Caltech-256)
  • Similar to previous works [52], we repeat the procedure three times and use images from n = {1, 3, 5} randomly chosen categories as inliers (i.e., target).
  • Outliers are randomly selected from the "clutter" category, such that each experiment has exactly 50% outliers.
(Figure panels: 1 inlier category, 3 inlier categories, 5 inlier categories.)
An illustrative sketch of this split appears below.
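The sketch assumes one folder per Caltech-256 category under `root`, with a "clutter" folder supplying outliers; the directory layout and helper name are assumptions, not part of any released code.

```python
import os
import random

def caltech_split(root, n_inlier_categories=1, seed=0):
    """Pick n random inlier categories; sample clutter so outliers make up 50%."""
    rng = random.Random(seed)
    categories = [c for c in os.listdir(root) if "clutter" not in c.lower()]
    inlier_cats = rng.sample(categories, n_inlier_categories)

    inliers = [os.path.join(root, c, f)
               for c in inlier_cats
               for f in os.listdir(os.path.join(root, c))]

    clutter_dir = next(c for c in os.listdir(root) if "clutter" in c.lower())
    clutter = [os.path.join(root, clutter_dir, f)
               for f in os.listdir(os.path.join(root, clutter_dir))]

    # Draw as many clutter images as there are inlier images (50% outliers).
    outliers = rng.sample(clutter, min(len(inliers), len(clutter)))
    return inliers, outliers
```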
Experiments (cont’d) • Video Anomaly Detection (UCSD Ped2) • Frame-level comparisons
Experiments (cont’d) • UCSD Ped2
Results (Cont’d) • UMN Video Anomaly Detection Dataset
Analysis of the convergence conditions (Nash equilibrium)
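For reference, the adversarial objective the two networks play has the standard GAN minimax form, with R in the role of the generator operating on a noise-corrupted target sample and an added reconstruction loss. The formulation below is a simplified reconstruction from that standard form; refer to the paper for the exact terms and weights.

```latex
\min_{\mathcal{R}} \max_{\mathcal{D}} \;
\mathbb{E}_{X \sim p_t}\!\big[\log \mathcal{D}(X)\big]
+ \mathbb{E}_{\tilde{X} \sim p_t + \mathcal{N}_\sigma}\!\big[\log\big(1 - \mathcal{D}(\mathcal{R}(\tilde{X}))\big)\big],
\qquad
\mathcal{L}_{\mathcal{R}} = \big\lVert X - \mathcal{R}(\tilde{X}) \big\rVert^2 .
```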
The Reconstructor Network Architecture
• Experiment on the MNIST dataset, for detecting digit '1'.
• Studied factors: the dimensionality of the latent variable and the length of the reject region.
• The latent representation is restricted by forcing the latent space to follow a Gaussian distribution.
One possible realization of this restriction is sketched below.
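One way such a restriction could be realized is a simple moment-matching penalty that pushes each batch of latent codes toward a standard Gaussian; this is only an illustrative interpretation of the slide, not necessarily the mechanism used in the paper.

```python
import torch

def gaussian_latent_penalty(latent):
    """Penalize deviation of the batch latent statistics from N(0, I).

    `latent` is assumed to be the (possibly unflattened) encoder output.
    """
    z = latent.flatten(start_dim=1)          # (batch, latent_dim)
    mean = z.mean(dim=0)
    var = z.var(dim=0, unbiased=False)
    return (mean ** 2).mean() + ((var - 1.0) ** 2).mean()
```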
Conclusion
• We trained the first end-to-end trainable deep network for anomaly detection in images and videos.
• We jointly trained two networks that compete during training but collaborate when evaluating any input test sample.
• Future directions:
  • Exploring other types of noise in the reconstructor network
  • Gaining insight into the latent representation
  • The networks help each other learn better; are there other applications?
Thank You! Any questions?