
Unsupervised Modelling of ‘Usual’ Events and Detecting Anomalies in Videos


Presentation Transcript


  1. Unsupervised Modelling of ‘Usual’ Events and Detecting Anomalies in Videos Advisor: Amitabha Mukerjee Deepak Pathak (10222) Abhijit Sharang (10007)

  2. 1. Motivation • Large volume of un-annotated video data • Surveillance cameras • Automatic behaviour learning Present focus: • Traffic dataset containing abnormal events [Varadarajan, 2009]

  3. 2. “Anomaly” • Anomaly refers to an unusual or rare event occurring in the video • The definition is ambiguous and depends on context Idea: • Learn the “usual” events in the video and use this information to tag the rare events.

  4. 3.1. Overview of Approach • Model the “usual” behaviour of the scene using a parametric Bayesian model • Reconstruct the new behaviour from the estimated parametric model • Threshold on the similarity measure between the reconstructed and actual distributions

  5. 3.2. Topic Modelling • Given: Documents and a Vocabulary (a document is a histogram over the vocabulary) • Goal: Identify topics in a given set of documents Idea: Topics are latent variables Alternate views: • Clustering in topic space • Dimensionality reduction
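The document-as-histogram view above can be sketched in a few lines; the vocabulary and clip words here are toy values for illustration, not from the dataset.

```python
from collections import Counter

def doc_histogram(words, vocabulary):
    """Represent a document (here: a video clip's visual words) as a
    histogram of counts over a fixed vocabulary."""
    counts = Counter(words)
    return [counts.get(w, 0) for w in vocabulary]

# Toy vocabulary and clip (illustrative only)
vocab = ["up", "down", "left", "right", "static"]
clip_words = ["up", "up", "static", "left", "up"]
hist = doc_histogram(clip_words, vocab)
print(hist)  # [3, 0, 1, 0, 1]
```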

  6. 3.3. Models in practice • LSA: Non-parametric clustering into topics using SVD • pLSA: Learns a probability distribution over a fixed number of topics; graphical-model-based approach • LDA: Extension of pLSA with a Dirichlet prior on the topic distribution; fully generative model

  7. 4. Vision to NLP: Notations

  8. 5.1. Video Clips • 45-minute video footage of traffic available • 25 frames per second • 4 kinds of anomaly • Divided into clips of a fixed size of 4–6 seconds
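A quick sanity check of the clip arithmetic above, assuming a 5-second clip length (a value inside the 4–6 s range stated on the slide):

```python
FPS = 25               # frames per second, from the slide
CLIP_SECONDS = 5       # assumed; the slide states 4-6 s
VIDEO_MINUTES = 45     # total footage length, from the slide

frames_per_clip = FPS * CLIP_SECONDS          # 125 frames per clip
total_frames = VIDEO_MINUTES * 60 * FPS       # 67500 frames in all
num_clips = total_frames // frames_per_clip   # 540 clips ("documents")
print(frames_per_clip, num_clips)  # 125 540
```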

  9. 5.2. Visual Words • Each frame is 288 x 360 • Each frame is divided into 15 x 18 parts, each part containing 400 pixels • Features: • Optical flow • Object size • Background subtraction is performed on each frame to obtain the objects in the foreground; features are computed only for these objects • Foreground objects consist of vehicles, pedestrians and cyclists

  10. 5.2. Visual Words (contd.) • Foreground pixels are then divided into “big” and “small” blobs (connected components) • Optical flow is computed on the foreground • The flow vector is quantised into 5 values: • Static • Dynamic: up, down, left and right • 15 x 18 x 5 x 2 different “words” obtained
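One way to realise the word construction above is to flatten (grid cell, quantised flow, blob size) into a single index. Only the factor sizes come from the slides (the 15 x 18 grid of slide 9, 5 flow labels, 2 blob sizes); the cell-major encoding order is an assumption.

```python
ROWS, COLS = 15, 18      # spatial grid from slide 9
FLOW_LABELS = 5          # static, up, down, left, right
BLOB_SIZES = 2           # "big" / "small" blob

def word_id(row, col, flow, size):
    """Flatten (cell, quantised flow label, blob size) into one
    visual-word index. The encoding order is assumed, not stated."""
    assert 0 <= row < ROWS and 0 <= col < COLS
    assert 0 <= flow < FLOW_LABELS and 0 <= size < BLOB_SIZES
    return ((row * COLS + col) * FLOW_LABELS + flow) * BLOB_SIZES + size

VOCAB_SIZE = ROWS * COLS * FLOW_LABELS * BLOB_SIZES
print(word_id(14, 17, 4, 1), VOCAB_SIZE)  # 2699 2700
```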

  11. 5.3. Foreground Extraction

  12. 5.4. Optical Flow: Heat Map

  13. 6. Modelling pLSA • Training dataset: little or no “anomaly” • Test dataset: usual + anomalous events Procedure • Learn p(w|z) and p(z|d) from the training data • Keeping p(w|z) fixed, estimate p(z|d) on the test data • Threshold on the likelihood estimate of individual test video clips
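The folding-in procedure above can be sketched as follows. This is a minimal, illustrative implementation of the asymmetric pLSA formulation (p(w|z) held fixed, p(z|d) re-estimated by EM on one test clip), run on a toy two-topic model; it is not the project's actual code, and the threshold is left to the caller.

```python
import math

def fold_in(doc_counts, p_w_given_z, n_iters=50):
    """Estimate p(z|d) for one test document with p(w|z) held fixed
    (pLSA 'folding-in'). doc_counts: {word_index: count};
    p_w_given_z: list over topics of per-word probability lists."""
    K = len(p_w_given_z)
    p_z = [1.0 / K] * K                       # uniform initialisation
    for _ in range(n_iters):
        new = [0.0] * K
        for w, n in doc_counts.items():
            # E-step: posterior p(z|d,w), up to normalisation
            post = [p_z[z] * p_w_given_z[z][w] for z in range(K)]
            s = sum(post)
            if s == 0:
                continue
            # M-step contribution for p(z|d)
            for z in range(K):
                new[z] += n * post[z] / s
        total = sum(new)
        p_z = [v / total for v in new]
    return p_z

def norm_log_likelihood(doc_counts, p_w_given_z, p_z):
    """Per-word log-likelihood of a clip; a low value flags an anomaly."""
    ll, n_words = 0.0, 0
    for w, n in doc_counts.items():
        p_w = sum(p_z[z] * p_w_given_z[z][w] for z in range(len(p_z)))
        ll += n * math.log(p_w)
        n_words += n
    return ll / n_words

# Toy model: topic 0 emits words 0-1, topic 1 emits words 2-3
p_w_given_z = [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]]
usual = {0: 5, 1: 5}                     # well explained by topic 0
p_z = fold_in(usual, p_w_given_z)
print(norm_log_likelihood(usual, p_w_given_z, p_z))  # log(0.5) = -0.693...
```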

  14. 7. Results Demo • 3 clips • 3 different types of anomalies

  15. 8. Action Topics Extracted (histogram over topics on the x-axis; 20 topics)

  16. 8. Action Topics Extracted • Demo video clips

  17. 8. Results (ROC plot)

  18. 8. Results (PR curve)

  19. 9. Current Limitations • Modelling is heavily context-dependent, with features designed specifically for this scene • The full video clip is declared anomalous (the anomaly is not localised within the clip)

  20. 9. Future work • Generalised words from HoG and HoF • Localisation of anomaly • Generative model for topics

  21. References • Varadarajan, Jagannadan, and J.-M. Odobez. "Topic models for scene analysis and abnormality detection." Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009. • Niebles, Juan Carlos, Hongcheng Wang, and Li Fei-Fei. "Unsupervised learning of human action categories using spatial-temporal words." International Journal of Computer Vision 79.3 (2008): 299-318. • Mahadevan, Vijay, et al. "Anomaly detection in crowded scenes." Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. • Roshtkhari, Mehrsan Javan, and Martin D. Levine. "Online dominant and anomalous behavior detection in videos." Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013. • Farneback, Gunnar. "Fast and accurate motion estimation using orientation tensors and parametric motion models." Pattern Recognition, 2000. Proceedings. 15th International Conference on. Vol. 1. IEEE, 2000. • Hofmann, Thomas. "Probabilistic latent semantic indexing." Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 1999. • Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet allocation." Journal of Machine Learning Research 3 (2003): 993-1022.

  22. Extra Slides • About • Background subtraction • Optical Flow • pLSA and its EM • Normalized Likelihood

  23. Background subtraction • Extraction of the foreground from an image • Frame difference: • D(t+1) = | I(x,y,t+1) − I(x,y,t) | • Thresholding on this value gives a binary output • Simplistic approach (extra foreground pixels are acceptable, but no essential element may be missed) • The foreground is smoothed using a median filter
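A minimal sketch of the frame-difference step above, on plain 2-D lists of grey levels; the threshold value is an arbitrary choice, and the median-filter smoothing from the slide is omitted.

```python
def frame_difference(prev, curr, threshold=30):
    """Binary foreground mask from the absolute frame difference.
    Frames are 2-D lists of grey-level intensities (0-255);
    the threshold of 30 is an assumed value, not from the slides."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],    # a bright object appears
        [10, 10, 90]]
mask = frame_difference(prev, curr)
print(mask)  # [[0, 1, 0], [0, 0, 1]]
```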

  24. Optical flow example (a) Translation perpendicular to a surface. (b) Rotation about axis perpendicular to image plane. (c) Translation parallel to a surface at a constant distance. (d) Translation parallel to an obstacle in front of a more distant background. Slides from Apratim Sharma’s presentation on optical flow, CS676

  25. Optical flow mathematics • Gradient-based optical flow • Basic assumption: • I(x+Δx, y+Δy, t+Δt) = I(x,y,t) • Expanded to first order to get IxVx + IyVy + It = 0 • Sparse flow or dense flow • Dense flow constraint: • Smoothness: motion vectors are spatially smooth • Minimise a global energy function
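Written out, the expansion referred to above is a first-order Taylor series of the brightness-constancy assumption:

```latex
I(x+\Delta x,\, y+\Delta y,\, t+\Delta t)
  \approx I(x,y,t) + I_x\,\Delta x + I_y\,\Delta y + I_t\,\Delta t
```

Setting the left-hand side equal to I(x,y,t) and dividing by Δt gives the gradient constraint IxVx + IyVy + It = 0, with Vx = Δx/Δt and Vy = Δy/Δt. One equation in two unknowns, hence the need for the smoothness (or windowing) constraints mentioned on the slide.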

  26. pLSA • Fixed number of topics: {z1, z2, …, zk}. Each word occurrence is generated from a single topic. • Topics are hidden variables used for modelling the probability distribution • Computation: • Marginalise over the hidden variables • Conditional independence assumption: w and d are conditionally independent given z
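The marginalisation mentioned above, written out in the symmetric pLSA formulation (the standard form from Hofmann, 1999; the slide's own equation was an image that did not survive extraction):

```latex
p(d, w) \;=\; \sum_{z} p(z)\, p(w \mid z)\, p(d \mid z)
```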

  27. EM Algorithm: Intuition • E-Step • Expectation step, where the expectation of the likelihood function is calculated with the current parameter values • M-Step • Update the parameters using the posterior probabilities calculated in the E-step • Find the parameters that maximise the likelihood function

  28. EM: Formalism

  29. EM in pLSA: E Step • This is the probability that a word w occurring in a document d is explained by aspect z (computed via Bayes’ rule from the current parameter estimates)
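For reference, the E-step posterior in the symmetric formulation (the standard pLSA equation from Hofmann, 1999; the formula on the slide itself did not survive extraction):

```latex
p(z \mid d, w) \;=\;
  \frac{p(z)\, p(w \mid z)\, p(d \mid z)}
       {\sum_{z'} p(z')\, p(w \mid z')\, p(d \mid z')}
```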

  30. EM in pLSA: M Step • All of these equations use the p(z|d,w) calculated in the E-step • The algorithm converges to a local maximum of the likelihood function
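The equations referred to above are, in the standard form (Hofmann, 1999), where n(d,w) is the count of word w in document d; the slide images are not available, so these are the textbook updates rather than a transcription:

```latex
p(w \mid z) \;\propto\; \sum_{d} n(d,w)\, p(z \mid d, w), \qquad
p(d \mid z) \;\propto\; \sum_{w} n(d,w)\, p(z \mid d, w), \qquad
p(z) \;\propto\; \sum_{d,w} n(d,w)\, p(z \mid d, w)
```

Each left-hand side is normalised to sum to one; note that every update reuses the posterior p(z|d,w) from the E-step, which is why convergence is only to a local maximum.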

  31. Anomaly Detection: Likelihood

  32. Normalized Likelihood: Threshold • Normalized likelihood measure is calculated as follows -
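The formula on the original slide did not survive extraction. A common choice for a length-normalised measure, not necessarily the exact one used here, is the per-word log-likelihood of a clip d:

```latex
\hat{\mathcal{L}}(d) \;=\; \frac{1}{\sum_{w} n(d,w)}
  \sum_{w} n(d,w)\, \log p(w \mid d),
\qquad p(w \mid d) \;=\; \sum_{z} p(w \mid z)\, p(z \mid d)
```

Clips whose normalised likelihood falls below the threshold are flagged as anomalous.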
