
Data Mining for Surveillance Applications Suspicious Event Detection


Presentation Transcript


  1. Data Mining for Surveillance Applications Suspicious Event Detection Dr. Bhavani Thuraisingham

  2. Problems Addressed • Huge amounts of video data are available in the security domain • Analysis is done off-line, usually with “human eyes” • Need for tools to aid the human analyst (pointing out areas in the video where unusual activity occurs) • Consider corporate security for a fenced section of sensitive property • The guard suspects there may have been a breach of the perimeter fence at some point during the last 48 hours • The guard must: • Manually review 48 hours of tape • Consider multiple cameras and camera angles • Distinguish between normal personnel and intruders

  3. Example • Inputs: video data and a user-defined event of interest • Output: annotated video with the events of interest highlighted • Using our proposed system: greatly increase video analysis efficiency

  4. The Semantic Gap • The disconnect between the low-level features a machine extracts from an input video and the high-level semantic concepts (or events) a human perceives when watching the clip • Low-level features: color, texture, shape • High-level semantic concepts: presentation, newscast, boxing match

  5. Our Approach • Event Representation • Estimate the distribution of pixel intensity change • Event Comparison • Contrast the event representations of different video sequences to determine whether they contain similar semantic event content • Event Detection • Use manually labeled training video sequences to classify unlabeled video sequences

  6. Event Representation • Measures the quantity and type of changes occurring within a scene • A video event is represented as a set of x, y and t intensity gradient histograms over several temporal scales. • Histograms are normalized and smoothed
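A minimal sketch of this representation step, assuming the video is available as a grayscale NumPy array of shape (frames, height, width); the bin count, temporal scales, and smoothing kernel below are illustrative choices, not values fixed by the slides:

```python
import numpy as np

def event_representation(video, temporal_scales=(1, 2, 4), bins=32):
    """Return a list of normalized, smoothed x/y/t intensity gradient histograms."""
    histograms = []
    for scale in temporal_scales:
        # Subsample frames to capture changes at a coarser temporal scale.
        clip = video[::scale].astype(np.float32)
        # Intensity gradients along t (frame), y (row), and x (column).
        gt, gy, gx = np.gradient(clip)
        for g in (gx, gy, gt):
            hist, _ = np.histogram(g, bins=bins, range=(-255, 255))
            hist = hist.astype(np.float32)
            hist /= hist.sum() + 1e-8             # normalize
            kernel = np.array([0.25, 0.5, 0.25])  # simple smoothing kernel
            hist = np.convolve(hist, kernel, mode="same")
            histograms.append(hist)
    return histograms
```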

  7. Event Comparison • Determines whether two video sequences contain similar high-level semantic concepts (events) • Produces a distance score indicating how close the two compared events are to one another • The lower the score, the more similar the two events
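A minimal comparison sketch built on the representation sketch above; the chi-square histogram distance used here is an illustrative choice, not necessarily the measure used in the original work:

```python
import numpy as np

def event_distance(hists_a, hists_b):
    """Compare two event representations; lower values mean more similar events."""
    total = 0.0
    for ha, hb in zip(hists_a, hists_b):
        # Chi-square distance between corresponding normalized histograms.
        total += 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-8))
    return total / len(hists_a)
```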

  8. Event Detection • A robust event detection system should be able to • Recognize an event with reduced sensitivity to the actor's appearance (e.g., clothing or skin tone) or to background lighting variation • Segment an unlabeled video containing multiple events into event-specific segments
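A minimal detection sketch using the two sketches above; a sliding-window nearest-neighbor search is one plausible way to realize the segmentation described here, and the window length, step size, and labeled_events structure are assumptions:

```python
def detect_events(video, labeled_events, window=30, step=15):
    """labeled_events: list of (label, histograms) pairs from labeled training clips.
    Returns (start_frame, end_frame, best_label) for each window of the video."""
    detections = []
    for start in range(0, len(video) - window + 1, step):
        hists = event_representation(video[start:start + window])
        best_label, best_dist = None, float("inf")
        for label, ref_hists in labeled_events:
            d = event_distance(hists, ref_hists)
            if d < best_dist:
                best_label, best_dist = label, d
        detections.append((start, start + window, best_label))
    return detections
```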

  9. Labeled Video Events • These events are manually labeled and used to classify unknown events (example clips: Walking1, Running1, Waving2)

  10. Labeled Video Events

  11. Experiment #1 • Problem: Recognize and classify events irrespective of direction (right-to-left, left-to-right) and with reduced sensitivity to spatial variations (clothing) • “Disguised Events”: events similar to the testing data except that the subject is dressed differently • Compare classification to “truth” (manual labeling)

  12. Experiment #1 Disguised Walking 1 Classification: Walking

  13. Experiment #1 Disguised Walking 2 Classification: Walking

  14. Experiment #1 Disguised Running 1 Classification: Running

  15. Experiment #1 Disguised Running 2 Classification: Running

  16. Classifying Disguised Events Disguised Running 3 Classification: Running

  17. Classifying Disguised Events Disguised Waving 1 Classification: Waving

  18. Classifying Disguised Events Disguised Waving 2 Classification: Waving

  19. Classifying Disguised Events

  20. Experiment #1 • This method yielded 100% Precision (i.e. all disguised events were classified correctly). • Not necessarily representative of the general event detection problem. • Future evaluation with more event types, more varied data and a larger set of training and testing data is needed

  21. XML Video Annotation • Using the event detection scheme, we generate a video description document detailing the event composition of a specific video sequence • This XML annotation may later be replaced by a more robust computer-understandable format (e.g., the VEML video event ontology language)
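A minimal sketch of emitting such a description document; the element and attribute names below are hypothetical, since the slides do not fix a schema:

```python
import xml.etree.ElementTree as ET

def write_annotation(detections, path="annotation.xml"):
    """Write (start_frame, end_frame, label) detections as an XML annotation file."""
    root = ET.Element("video_annotation")
    for start, end, label in detections:
        ET.SubElement(root, "event",
                      start_frame=str(start),
                      end_frame=str(end),
                      label=label)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```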

  22. Video Analysis Tool • Takes the annotation document as input and organizes the corresponding video segments accordingly • Functions as an aid to a surveillance analyst searching for “suspicious” events within a stream of video data • Activities of interest may be defined dynamically by the analyst while the utility is running and flagged for analysis
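A minimal sketch of how the tool might consume the annotation document and flag segments matching an analyst-defined event of interest, reusing the hypothetical schema from the previous sketch:

```python
import xml.etree.ElementTree as ET

def flag_segments(annotation_path, event_of_interest):
    """Return (start_frame, end_frame) pairs whose label matches the event of interest."""
    root = ET.parse(annotation_path).getroot()
    return [(int(e.get("start_frame")), int(e.get("end_frame")))
            for e in root.findall("event")
            if e.get("label") == event_of_interest]

# Example: list all segments the analyst should review for running.
# print(flag_segments("annotation.xml", "Running"))
```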

  23. Summary and Directions • We have proposed an event representation, comparison and detection scheme • Working toward bridging the semantic gap and enabling more efficient video analysis • More rigorous experimental testing of concepts • Refine event classification through the use of multiple machine learning algorithms (e.g., neural networks, decision trees); experimentally determine the optimal algorithm • Develop a model allowing the definition of simultaneous events within the same video sequence • Define an access control model that restricts access to surveillance video data based on the semantic content of video objects • Biometrics applications • Privacy preserving surveillance

  24. Access Control and Biometrics • Access Control • Control access based on content, association, time, etc. • Biometrics • Restrict access based on the semantic content of video rather than low-level features • Behavioral access control instead of a “fingerprint” • Used in combination with other biometric methods

  25. Privacy Preserving Surveillance - Introduction • A recent survey of Times Square found 500 visible surveillance cameras in the area and a total of 2,500 in New York City • This means vast amounts of surveillance video must be inspected manually by security personnel • We need to carry out surveillance while at the same time preserving the privacy of law-abiding individuals

  26. System Use • Raw video surveillance data → Face Detection and Face Derecognizing system (faces of trusted people derecognized to preserve privacy; suspicious people found) • → Suspicious Event Detection System (suspicious events found) • → Comprehensive security report listing the suspicious events and people detected • → Manual inspection of the video data → report of security personnel

  27. System Architecture • Input video → break the input video down into a sequence of images → perform segmentation → find the location of the face in the image → compare the face to the trusted and untrusted databases • Trusted face found → derecognize the face in the image • Potential intruder found → raise an alarm that a potential intruder was detected
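A minimal sketch of the per-frame control flow implied by this architecture; detect_face and match_face are placeholder stubs standing in for components (face detector, face databases, alerting) the slide does not name, and frames are assumed to be NumPy image arrays:

```python
import numpy as np

def detect_face(frame):
    """Placeholder: return (x, y, w, h) of a detected face, or None."""
    return None

def match_face(frame, region, database):
    """Placeholder: return True if the face in `region` matches the database."""
    return False

def blur_region(frame, region):
    """Derecognize a face by replacing the region with its mean intensity."""
    x, y, w, h = region
    frame[y:y + h, x:x + w] = frame[y:y + h, x:x + w].mean()
    return frame

def process_frame(frame, trusted_db, untrusted_db):
    region = detect_face(frame)                      # find the location of the face
    if region is None:
        return frame, False
    if match_face(frame, region, trusted_db):        # trusted face found
        return blur_region(frame, region), False     # derecognize to preserve privacy
    alarm = match_face(frame, region, untrusted_db)  # potential intruder found
    return frame, alarm                              # caller raises an alarm if True
```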

  28. Acknowledgements • Prof. Latifur Khan • Gal Lavee (Surveillance and access control) • Ryan Layfield (Consultant to project) • Sai Chaitanya (Privacy) • Parveen Pallabi (Biometrics)
