Robust Moving Object Detection & Categorization using self-improving classifiers Omar Javed, Saad Ali & Mubarak Shah
Moving Object Detection & Categorization • Goal • Detect moving objects in images and classify them into categories, e.g., humans or vehicles. • Motivation • Most monitoring and video understanding systems require knowledge of the location and type of objects in the scene.
Object Classification: Major Approaches • Supervised Classifiers • AdaBoost (Viola & Jones), Naive Bayes (Schneiderman et al.), SVMs (Papageorgiou & Poggio) • Limitations • Require a large number of training examples, e.g., 1,000,000 negative examples for face detection (Zhang et al.) and more than 10,000 examples used by Viola & Jones. • Parameters are fixed after training; once deployed, they cannot be tuned for best performance in a particular scenario.
Object Classification: Major Approaches • Semi-Supervised Classifiers • Co-training (Levin et al.) • Limitations • Require collection of a large amount of training data, though labels are not needed. • Offline training, i.e., fixed parameters in the testing phase.
Properties of an “Ideal” Object Detection System • Learns both background and object models online with no prior training. • Adapts quickly to changing background and object properties.
Overview of the Proposed Approach • In a single boosted framework, • Obtain regions of interest (ROIs) from a background subtraction approach. • Extract motion and appearance features from each ROI. • Use the two separate views of the data (motion and appearance features) for online co-training, i.e., • If one set of features confidently predicts the label of an object, use that label to update the base classifiers and the boosting parameters online. • Use the combined view (both feature sets) for classification decisions. A sketch of this loop follows below.
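A minimal per-frame sketch of this loop in Python. Every method on the `model` object is a hypothetical placeholder for the components described on the following slides, not an API from the paper:

```python
# Sketch of the per-frame detect-and-co-train loop. All methods on `model`
# (background_subtract, appearance_features, ...) are hypothetical stand-ins
# for the components described on the following slides.
def process_frame(frame, model):
    rois = model.background_subtract(frame)      # regions of interest
    for roi in rois:
        va = model.appearance_features(roi)      # appearance view
        vm = model.motion_features(roi)          # motion view
        label, margin = model.boosted_classify(va, vm)   # combined view
        # Co-training: if one view alone makes a confident prediction and
        # the boosted decision has a small or negative margin, use that
        # label to update the base classifiers and boosting parameters.
        view_label = model.confident_single_view_label(va, vm)
        if view_label is not None and margin < model.margin_threshold:
            model.online_update(va, vm, view_label)
        yield roi, label
```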
Properties of the Proposed Object Detection Method • Background model is learned online. • Object models are learned offline with a small number of training examples. • The object classifier parameters are continuously updated online using co-training to improve detection rates.
Proposed Object Detection Method • [System diagram: background and foreground models (color and edge classifiers) produce ROIs; appearance and motion feature extraction feed the appearance and motion base classifiers; a boosted classifier produces the classification output; the co-training decision (if one feature set makes a confident prediction) feeds updated weak learners and boosting parameters back to the classifiers.]
Background Detection • [Figure: current image from video; output of first level; output of second level] • First level: per-pixel mixture-of-Gaussians color models • Second level: gradient magnitude and gradient direction models, a gradient boundary check, and feedback to the first level
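As a rough illustration of the first level only, OpenCV's stock mixture-of-Gaussians subtractor produces per-pixel foreground masks. This is a stand-in sketch (MOG2 rather than the paper's exact model, without the gradient-based second level or its feedback), and the video filename is hypothetical:

```python
import cv2

# First level: per-pixel mixture-of-Gaussians background subtraction.
# OpenCV's MOG2 stands in for the paper's color model; the gradient-based
# second level and its feedback to the first level are omitted here.
cap = cv2.VideoCapture("sequence1.avi")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # 0 bg, 127 shadow, 255 fg
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    # Connected components of the mask give the candidate ROIs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    rois = [stats[i] for i in range(1, n) if stats[i][cv2.CC_STAT_AREA] > 100]
```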
Features for Object Classification • Base classifiers are learned from global PCA coefficients of appearance and motion templates of image regions. • The appearance subspaces are learned by performing PCA separately on a small set of labeled d-dimensional gradient magnitude images of people and of vehicles.
Features for Object Classification • The person and vehicle appearance subspaces are represented by d x m1 and d x m2 projection matrices (S1 and S2) respectively. • m1 and m2 are chosen such that the eigenvectors account for 99% of the variance in the respective subspaces.
Features for Object Classification • Appearance features for the base learners are obtained by projecting each training example r onto the two subspaces, i.e., concatenating the coefficient vectors S1^T r and S2^T r.
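A numpy sketch of building the two appearance subspaces and forming the feature vector. The 99% variance cutoff follows the slides, while the data arrays and dimensions (30 x 30 = 900, as in the experiments) are illustrative stand-ins:

```python
import numpy as np

def pca_subspace(X, var_frac=0.99):
    """X: n x d matrix, one vectorized gradient-magnitude image per row.
    Returns the d x m projection matrix whose columns are the leading
    eigenvectors accounting for var_frac of the variance."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = (s ** 2) / np.sum(s ** 2)
    m = int(np.searchsorted(np.cumsum(var), var_frac)) + 1
    return Vt[:m].T                                  # d x m

rng = np.random.default_rng(0)
person_images = rng.normal(size=(50, 900))           # stand-in labeled data
vehicle_images = rng.normal(size=(50, 900))          # stand-in labeled data
S1 = pca_subspace(person_images)                     # d x m1
S2 = pca_subspace(vehicle_images)                    # d x m2

def appearance_features(r):
    # Project example r onto both subspaces and concatenate the coefficients.
    return np.concatenate([S1.T @ r, S2.T @ r])      # length m1 + m2
```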
Features for Object Classification • [Figure] Row 1: top 3 eigenvectors of the person appearance subspace. Row 2: the corresponding eigenvectors of the vehicle appearance subspace.
Features for Object Classification • To obtain motion features, m3- and m4-dimensional person and vehicle motion subspaces (projection matrices S3 and S4) are constructed from person and vehicle motion examples respectively. • Optical flow is computed using the method of Lucas and Kanade. • Motion features for the base learners are obtained by projecting each training motion example o onto the two subspaces.
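A sketch of the motion view under the same conventions. OpenCV's pyramidal Lucas-Kanade tracker stands in for the flow computation; the exact flow variant and the sampling grid here are assumptions:

```python
import cv2
import numpy as np

def motion_template(prev_roi, next_roi, grid_step=4):
    """Pyramidal Lucas-Kanade flow on a grid inside an ROI; prev_roi and
    next_roi are 8-bit grayscale crops of the same region in consecutive
    frames. Returns the vectorized motion template o."""
    h, w = prev_roi.shape
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    pts = pts.astype(np.float32).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_roi, next_roi, pts, None)
    flow = (nxt - pts).reshape(-1, 2)                # per-point (dx, dy)
    return flow.ravel()

def motion_features(o, S3, S4):
    # Project the motion template onto the person and vehicle motion subspaces.
    return np.concatenate([S3.T @ o, S4.T @ o])      # length m3 + m4
```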
Base Classifiers • We use the Bayes classifier as the base classifier. • Let c1, c2 and c3 represent the person, vehicle and background classes. • Each feature vector component vq, where q ranges over 1, ..., m1+m2+m3+m4, is used to learn a pdf for each class. • Each pdf is represented by a smoothed 1D histogram.
Base Classifiers • The classification decision of the qth base classifier is taken as ci if ci is the class whose pdf assigns the highest probability to vq, i.e., i = arg max_j P(c_j | v_q).
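A compact sketch of one such base classifier; the bin count, smoothing kernel, and equal-prior assumption are illustrative choices, not from the slides:

```python
import numpy as np

class HistogramBayes:
    """Base classifier for one feature component vq: one smoothed 1D
    histogram per class as the class-conditional pdf (equal priors assumed)."""
    def __init__(self, lo, hi, bins=32):
        self.edges = np.linspace(lo, hi, bins + 1)
        self.pdfs = {}                                # class -> pdf over bins

    def fit(self, values_by_class):
        for c, vals in values_by_class.items():
            h, _ = np.histogram(vals, bins=self.edges)
            h = np.convolve(h, [0.25, 0.5, 0.25], mode="same")  # smoothing
            self.pdfs[c] = (h + 1e-6) / (h.sum() + 1e-6 * len(h))

    def predict(self, vq):
        # Pick the class whose pdf assigns vq the highest probability.
        b = int(np.clip(np.digitize(vq, self.edges) - 1, 0, len(self.edges) - 2))
        return max(self.pdfs, key=lambda c: self.pdfs[c][b])
```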
AdaBoost • Boosting is a method for combining many base classifiers into a more accurate "strong" classifier. • We use AdaBoost.M1 (Freund and Schapire) to learn the strong classifier from the initial training data and the base classifiers.
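For orientation, scikit-learn's SAMME boosting (a multi-class generalization of AdaBoost.M1) shows the same train-then-combine pattern. Decision stumps and random data stand in for the paper's Bayes base classifiers, and parameter names vary across scikit-learn versions:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(150, 40))      # stand-in feature vectors
y_train = rng.integers(0, 3, size=150)    # 0 person, 1 vehicle, 2 background

# SAMME generalizes AdaBoost.M1 to multiple classes; stumps replace the
# histogram Bayes base classifiers only to keep this sketch self-contained.
strong = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # `base_estimator` in older versions
    n_estimators=50,
    algorithm="SAMME",
)
strong.fit(X_train, y_train)
print(strong.predict(X_train[:5]))
```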
The Online Co-Training Framework • In general, co-training requires at least two classifiers trained on independent features for labeling data: examples confidently labeled by one classifier are used to train the other. • In our case, each base classifier represents either motion or appearance features. • To determine confidence thresholds for each base classifier, we use a validation data set.
The Online Co-Training Framework • For class ci and the jth base classifier, the confidence threshold is set to the highest probability achieved by a negative example in the validation set, i.e., T(j, i) = max of Pj(ci | x) over validation examples x whose true class is not ci. • By construction, all examples in the validation set with probability higher than this threshold are correctly classified.
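The threshold computation itself is a small reduction over the validation set; the array layout below is an assumed convention:

```python
import numpy as np

def confidence_thresholds(probs, y, n_classes):
    """probs[j] is an (n_validation x n_classes) array of P_j(c_i | x) for
    base classifier j; y holds the true class indices. T[j, i] is the
    highest probability classifier j assigned to class i on any example
    whose true class is NOT i."""
    T = np.zeros((len(probs), n_classes))
    for j, P in enumerate(probs):
        for i in range(n_classes):
            T[j, i] = P[y != i, i].max()   # best-scoring negative example
    return T
# Any test example scoring above T[j, i] is more confident than every
# misclassification classifier j made on the validation set.
```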
The Online Co-Training Framework • During the test phase, if more than 20% of the appearance-based or of the motion-based classifiers predict the label of an example with probability higher than the validation threshold, the example is selected for online update. • An online update is only necessary if the boosted classifier's decision has a small or negative margin. • Margin thresholds are also computed from the validation set.
The Online Co-Training Framework • Once an example has been labeled by the co-training mechanism, an online boosting algorithm is used to update the base classifiers and the boosting coefficients (see the sketch below).
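The slides do not name the online boosting algorithm, so the sketch below follows Oza and Russell's Poisson-weighted online boosting as one plausible instantiation; `learners`, `lam_c`, and `lam_w` are assumed bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)

def online_boost_update(learners, lam_c, lam_w, x, y):
    """One online boosting step in the style of Oza & Russell (an assumption;
    the slides only say 'an online boosting algorithm'). Each learner supports
    .update(x, y) and .predict(x); lam_c[t]/lam_w[t] accumulate the weighted
    correct/wrong mass seen by learner t."""
    lam = 1.0
    for t, h in enumerate(learners):
        for _ in range(rng.poisson(lam)):    # Poisson-weighted replication
            h.update(x, y)
        if h.predict(x) == y:
            lam_c[t] += lam
            eps = lam_w[t] / (lam_c[t] + lam_w[t])
            lam *= 1.0 / (2.0 * (1.0 - eps))  # down-weight easy examples
        else:
            lam_w[t] += lam
            eps = lam_w[t] / (lam_c[t] + lam_w[t])
            lam *= 1.0 / (2.0 * eps)          # up-weight hard examples
    # Learner t's boosting coefficient can be taken as log((1 - eps_t) / eps_t).
```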
Experiments • Initial training: 50 examples of each class; all examples scaled to a 30 x 30 vector • Validation set: 20 images per class • Testing on three sequences
Experiments • Results on Sequence 1.
Experiments • Results on Sequence 1. [Plots: performance vs. number of co-trained examples; performance over time]
Experiments • Results on Sequence 2.
Experiments • Results on Sequence 2. [Plots: performance vs. number of co-trained examples; performance over time]
Experiments • Results on Sequence 3.
Experiments • Results on Sequence 3. [Plots: performance vs. number of co-trained examples; performance over time]