Context-based object-class recognition and retrieval by generalized correlograms, by J. Amores, N. Sebe and P. Radeva. Discussion led by Qi An, Duke University
Outline • Introduction • Overview of the approach • Image representation • Learning and matching • Implementation with boosting • Experimental results • Conclusions
Introduction • Information retrieval from images • Keyword-based • Content-based • Direct comparison • Machine learning models • Constraints • Low burden to the user • Fast learning and testing
Overview of the approach • Represent images with Generalized Correlograms (GCs) • Match homologous parts across the training set • Learn a classifier that captures the key characteristics of these parts and their spatial arrangement • Match the remaining images against the initial model • Re-learn the classifier • Output the learning results
Image representation • Image representation is crucial for learning relevant information efficiently • Pre-processing to obtain the contours (see the sketch below) • Region segmentation (edge finding) • Smoothing • Each image is represented by a constellation of GCs, each one describing one part of the image (capturing both local and spatial information) • Only informative locations (contour points) are considered
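As an illustration of this pre-processing step, contour points could be obtained by smoothing the image and running a standard edge detector. The slides do not prescribe a specific detector, so the OpenCV-based sketch below (Gaussian smoothing followed by Canny) and its parameters are only assumptions:

```python
import cv2
import numpy as np

def contour_points(gray_image, blur_sigma=1.5, canny_low=50, canny_high=150):
    """Illustrative pre-processing: smooth, detect edges, return contour points.

    The detector (Canny) and its parameters are assumptions for illustration,
    not necessarily the exact choices made in the paper.
    """
    smoothed = cv2.GaussianBlur(gray_image, (0, 0), blur_sigma)   # smoothing
    edges = cv2.Canny(smoothed, canny_low, canny_high)            # edge finding
    ys, xs = np.nonzero(edges)                                    # edge pixel coordinates
    return np.stack([xs, ys], axis=1)                             # dense contour points {p_j}
```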
A dense set of all contour points {pj} • Sampled reference points {xi} where the GC descriptors (feature vectors) are extracted • One image is represented with M descriptors localized at the {xi}'s • Each contour point is associated with a feature vector lj • The resulting GC descriptor contains both local and spatial information • All values are quantized into a fixed number of bins • The dimensionality of the GC descriptor is nα × nr × nL (it can be very long and sparse)
Besides the angle of the tangent, color information can also be used as the local feature. To provide scale invariance, the radius is normalized by the size of the object.
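A minimal sketch of how one such descriptor might be computed at a reference point, assuming log-polar spatial binning and local features that have already been quantized (e.g. tangent angle or color). The function name, bin counts, and radial range are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def generalized_correlogram(x_i, points, local_bins, n_alpha=8, n_r=5, n_L=8,
                            object_size=1.0):
    """Illustrative sketch of a GC descriptor at reference point x_i.

    points      : (N, 2) array of contour point coordinates {p_j}
    local_bins  : (N,) integer array of pre-quantized local features l_j in [0, n_L)
    object_size : normalization factor that provides scale invariance
    """
    hist = np.zeros((n_alpha, n_r, n_L))
    diff = points - np.asarray(x_i)                      # vectors x_i -> p_j
    angles = np.arctan2(diff[:, 1], diff[:, 0])          # relative angle in (-pi, pi]
    radii = np.linalg.norm(diff, axis=1) / object_size   # scale-normalized radius

    # Quantize the relative position into n_alpha angle bins and n_r log-spaced
    # radius bins (log-polar binning and the radial range are assumptions).
    a_bin = np.floor((angles + np.pi) / (2 * np.pi) * n_alpha).astype(int) % n_alpha
    r_edges = np.logspace(-2, 0, n_r + 1)
    r_bin = np.clip(np.digitize(radii, r_edges) - 1, 0, n_r - 1)

    for a, r, l in zip(a_bin, r_bin, local_bins):
        hist[a, r, l] += 1.0                             # joint (spatial, local) count

    # Flattened descriptor of dimensionality n_alpha * n_r * n_L (long and sparse)
    return hist.ravel() / max(len(points), 1)
```

Stacking the M descriptors extracted at the sampled reference points {xi} then gives the constellation that represents one image.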
Learning and matching • Assume an object category has C parts, each modeled with its own set of parameters • If the models and their parameters are known, a new test image can be evaluated (i.e., to decide whether the object is present or not) by computing the likelihood
A test image is represented with M contextual descriptors, collected in the set H = {h1, …, hM} • The likelihood that a model context (part) wc is represented by any descriptor in H is computed from p(wc | hi), the likelihood that wc is represented by a particular descriptor hi • From the part likelihoods, the likelihood that an object (image) Ω is present in H is obtained • Multiple scales are considered for a test image: the probability that the object is present in one of the scaled representations is computed, where s is the index of the scale (see the sketch below)
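The likelihoods described above could plausibly take the following forms; the max over descriptors, the independence assumption over parts, and the max over scales are assumptions made here for illustration, not a verbatim reproduction of the paper's formulas:

```latex
% Assumed forms, not verbatim from the paper:
\begin{align*}
  p(w_c \mid H) &= \max_{i = 1,\dots,M} p(w_c \mid h_i)
    && \text{part $w_c$ explained by its best-matching descriptor in $H$}\\
  p(\Omega \mid H) &= \prod_{c=1}^{C} p(w_c \mid H)
    && \text{object present if all $C$ parts are found (independence assumption)}\\
  p(\Omega) &= \max_{s} \, p\bigl(\Omega \mid H^{(s)}\bigr)
    && \text{best response over the scaled representations, $s$ indexing the scale}
\end{align*}
```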
Implementation with boosting • To train the parameters of each part of the object, the authors apply AdaBoost with decision stumps as weak classifiers • This amounts to a feature selection process, since only a single feature is chosen for each weak classifier
Some local structure and/or color characteristics are selected
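To make the feature-selection view concrete, here is a minimal sketch of the weighted decision stump that AdaBoost would fit in each round; it is an illustrative implementation, not the authors' code:

```python
import numpy as np

def fit_decision_stump(X, y, w):
    """Fit a weighted decision stump: one feature, one threshold, one polarity.

    X : (N, D) array of GC descriptors, y : (N,) labels in {-1, +1},
    w : (N,) non-negative sample weights (as maintained by AdaBoost).
    Returns (feature index, threshold, polarity, weighted error).
    Illustrative sketch, not the authors' implementation.
    """
    best = (0, 0.0, 1, np.inf)
    for d in range(X.shape[1]):                 # each candidate feature = one GC bin
        for thr in np.unique(X[:, d]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, d] - thr) > 0, 1, -1)
                err = np.sum(w[pred != y])      # weighted misclassification error
                if err < best[3]:
                    best = (d, thr, polarity, err)
    return best
```

Because each stump commits to a single coordinate of the long, sparse correlogram, every boosting round effectively selects one informative local-structure or color bin, which is the behavior described above.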
Experimental results • The proposed algorithm is applied to the CALTECH dataset, with seven object categories and three background types • Approximately half of each object data set and half of the background data set are used for training • A pre-specified partition is used when available; otherwise, 5 different random partitions are used
Conclusions • A novel type of part-based object representation is proposed • Both local attributes and spatial relationships are considered • The computational complexity is significantly lower than that of other state-of-the-art graph-based object representations • The method works with weak supervision: only very few manually segmented images are required