MIT AI Lab / LIDS A Unified Multiresolution Framework for Automatic Target Recognition Eric Grimson, Alan Willsky, Paul Viola, Jeremy S. De Bonet, and John Fisher Laboratory for Information and Decision Systems & Artificial Intelligence Laboratory Massachusetts Institute of Technology
MIT AI Lab / LIDS Outline • Review Multiresolution Analysis Models • MAR (Multiresolution Auto-Regressive) • MNP (Multi-scale Nonparametric) • Applications of MNP Models • Synthesis and Super-Resolution • Segmentation and Multi-Look Registration • Classification/Recognition • Continuing Efforts
MIT AI Lab / LIDS Multiresolution parent vector The parent vector V(x,y) collects the filter responses at location (x,y) and at the corresponding locations at every coarser scale, ordered from coarse to fine.
MIT AI Lab / LIDS Compare the Distribution of Parent Vectors
MIT AI Lab / LIDS Formally... • Parent Vector: V(x,y) = [ F^0(x,y), F^1(⌊x/2⌋,⌊y/2⌋), ..., F^M(⌊x/2^M⌋,⌊y/2^M⌋) ], the responses of the feature images F^0 (finest) through F^M (coarsest) at the locations above (x,y)
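Below is a minimal sketch of how such a parent vector might be assembled, assuming a simple Laplacian-style pyramid as the feature decomposition; the actual system uses an oriented steerable pyramid, and the function names and 5-tap kernel here are illustrative only.

```python
import numpy as np

def gaussian_downsample(img):
    """Blur with a small separable kernel and decimate by 2 (a stand-in for the
    oriented steerable-pyramid filters used in the actual system)."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::2, ::2]

def build_pyramid(img, levels):
    """Laplacian-style pyramid of bandpass feature images, F^0 (finest) ... F^{levels-1} (coarsest)."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_downsample(current)
        up = np.kron(low, np.ones((2, 2)))[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)          # bandpass residual at this scale
        current = low
    return pyramid

def parent_vector(pyramid, x, y):
    """V(x, y): the feature response at integer pixel (x, y) and at its ancestors at every coarser scale."""
    return np.array([pyramid[level][y >> level, x >> level] for level in range(len(pyramid))])
```

parent_vector(pyramid, x, y) returns the chain of responses above pixel (x, y); this is the V(x, y) used throughout the remaining slides.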
MIT AI Lab / LIDS Steerable Pyramids (Freeman and Simoncelli)
MIT AI Lab / LIDS Oriented Wavelet Pyramid
MIT AI Lab / LIDS …for a SAR image
MIT AI Lab / LIDS Capturing Structure (Texture Perspective)
MIT AI Lab / LIDS Synthesis Results
MIT AI Lab / LIDS Synthesis Results
MIT AI Lab / LIDS Ergodic/Stationary • A texture is assumed to be many samples of a single process • Each sample is almost certainly dependent on the other samples • But the actual location of the samples does not matter (a space-invariant process)
MIT AI Lab / LIDS Heeger and Bergen
MIT AI Lab / LIDS Heeger and Bergen Texture Synthesis Model
MIT AI Lab / LIDS Analysis / Synthesis / Sampling Procedure (original texture patch → synthesized texture patch)
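As a rough sketch of the core operation in Heeger and Bergen's procedure, the following rank-based histogram matching remaps one array's values so their empirical distribution matches a reference; in the full algorithm this step is applied alternately to the pixels and to every subband of a steerable pyramid of a noise image until it converges. The code is an assumed simplification, not the authors' implementation.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap 'source' values so their empirical distribution matches 'reference'
    (the operation applied to each pyramid subband in Heeger & Bergen)."""
    src_flat = source.astype(float).ravel()
    order = np.argsort(src_flat)                       # rank the source samples
    matched = np.empty_like(src_flat)
    matched[order] = np.sort(reference.ravel())[
        np.linspace(0, reference.size - 1, src_flat.size).astype(int)]
    return matched.reshape(source.shape)
```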
MIT AI Lab / LIDS Not quite right...
MIT AI Lab / LIDS Wavelet Representation of Edges Wavelet Transform
MIT AI Lab / LIDS Pyramid Representation
MIT AI Lab / LIDS Conditional Distributions Wavelet Transform
MIT AI Lab / LIDS Probabilistic Model • Markov across scale • Coefficients within a scale are conditionally independent given their parent vectors • The full distribution is built by successive conditioning from coarse to fine
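One way to write the successive-conditioning factorization these assumptions imply, reusing the parent-vector notation from the earlier slides (this rendering is a reconstruction, not a quotation from the original work):

```latex
P\bigl(F^{0},\dots,F^{M}\bigr)
  \;=\; \prod_{\ell=0}^{M}\;\prod_{(x,y)}
        P\!\left(F^{\ell}(x,y)\;\middle|\;V^{\ell+1}(x,y)\right)
```

Here F^ℓ is the feature image at scale ℓ, V^{ℓ+1}(x,y) is the parent vector of all coarser-scale responses above (x,y), and responses within a scale are treated as conditionally independent given those parents.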
MIT AI Lab / LIDS Estimating Conditional Distributions • Non-parametrically
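A minimal sketch of the non-parametric estimate, under the assumption that the conditional distribution is represented directly by the example data: the candidates for a coefficient are the example coefficients whose parent vectors lie within a per-component threshold of the query's parent vector, and sampling uniformly from those candidates approximates a draw from the conditional. All names, shapes, and the nearest-neighbor fallback are illustrative.

```python
import numpy as np

def candidate_coefficients(query_parents, example_parents, example_coeffs, thresholds):
    """Non-parametric view of P(coefficient | parent vector): the example coefficients
    whose parent vectors fall within 'thresholds' of the query, component-wise.

    query_parents   : (P,)   parent vector at the location being evaluated
    example_parents : (N, P) parent vectors of every location in the example image
    example_coeffs  : (N,)   wavelet coefficient at each of those locations
    thresholds      : (P,)   per-component match tolerances
    """
    close = np.all(np.abs(example_parents - query_parents) <= thresholds, axis=1)
    return example_coeffs[close]

def sample_conditional(query_parents, example_parents, example_coeffs, thresholds, rng):
    """Draw one coefficient uniformly from the matching candidates (fall back to the
    nearest parent vector if nothing is within threshold)."""
    candidates = candidate_coefficients(query_parents, example_parents,
                                        example_coeffs, thresholds)
    if candidates.size == 0:
        nearest = np.argmin(np.sum((example_parents - query_parents) ** 2, axis=1))
        return example_coeffs[nearest]
    return rng.choice(candidates)
```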
MIT AI Lab / LIDS Outline (applications diagram) From an example image a distribution is estimated; sampling supports synthesis and super-resolution, conditioning supports segmentation and denoising, and likelihood/similarity supports registration and discrimination.
MIT AI Lab / LIDS Analysis / Synthesis / Sampling Procedure (original texture patch → synthesized texture patch)
MIT AI Lab / LIDS Multiresolution progression
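A sketch of the coarse-to-fine progression, assuming power-of-two image dimensions and a sampling routine like sample_conditional from the previous sketch: the coarsest level is filled by sampling the example's coarsest responses, and each finer level is filled by sampling conditioned on the parents already synthesized above it. This is a simplified reading of the procedure, not the authors' code.

```python
import numpy as np

def synthesize_pyramid(example_pyr, out_shape, sample_conditional, thresholds, rng=None):
    """Coarse-to-fine synthesis sketch.  example_pyr: list of feature images with
    index 0 = finest; out_shape: (height, width) of the synthesized finest level,
    assumed to be powers of two; thresholds[level]: per-component tolerances."""
    rng = rng or np.random.default_rng()
    levels = len(example_pyr)
    synth = [np.zeros((out_shape[0] >> l, out_shape[1] >> l)) for l in range(levels)]

    # Coarsest level: no parents exist, so sample its values directly from the example.
    synth[-1] = rng.choice(example_pyr[-1].ravel(), size=synth[-1].shape, replace=True)

    def parents(pyr, level, x, y):
        """Responses at all coarser scales above location (x, y) of the given level."""
        return np.array([pyr[l][y >> (l - level), x >> (l - level)]
                         for l in range(level + 1, len(pyr))])

    for level in range(levels - 2, -1, -1):                       # coarse -> fine
        ex_h, ex_w = example_pyr[level].shape
        ex_parents = np.array([parents(example_pyr, level, x, y)
                               for y in range(ex_h) for x in range(ex_w)])
        ex_coeffs = example_pyr[level].ravel()                    # same ordering as above
        for y in range(synth[level].shape[0]):
            for x in range(synth[level].shape[1]):
                q = parents(synth, level, x, y)                   # already-synthesized parents
                synth[level][y, x] = sample_conditional(
                    q, ex_parents, ex_coeffs, thresholds[level], rng)
    return synth
```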
MIT AI Lab / LIDS Joint feature occurrence across resolution
MIT AI Lab / LIDS Joint feature occurrence across resolution
MIT AI Lab / LIDS Texture Synthesis Results
MIT AI Lab / LIDS Registration pipeline Stages: tie-point determination, multiresolution alignment search, and multiresolution texture match via flexible histograms. • Inputs are first equalized to remove imaging artifacts • A coarse-to-fine alignment search is used to bring the images into registration • Tie-point regions, which provide important matching information, are determined • Quality of registration is measured by comparing the flexible-histogram texture match at the landmark regions
MIT AI Lab / LIDS Tie-point determination • Distinctive regions provide a significant constraint on the correct registration, while more recurrent areas provide little or no useful information. • Localized objects (such as structures or vehicles) match only a few locations, thus providing strong constraints on registration. • Extended elements (e.g. roads or tree-lines) match a small area, providing a one-dimensional constraint. • Common elements (e.g. grass or forest) match large portions of the image, and provide almost no useful information. • Using only the most distinctive regions as tie-points reduces the computational requirements and increases the performance of most registration algorithms. • Tie-points are found automatically by determining those regions which have low expected mutual information with other regions in the image (a minimal scoring sketch follows below).
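A minimal sketch of one way to score candidate regions, with normalized correlation standing in for the expected-mutual-information criterion (patch size, stride, and threshold are arbitrary illustrative values): regions that match few other locations score as most distinctive and are kept as tie-points.

```python
import numpy as np

def distinctiveness_scores(image, patch_size=16, stride=8, match_threshold=0.9):
    """Score each candidate region by how few *other* regions in the image look
    like it (here, normalized correlation above a threshold).  Fewer matches
    means a more distinctive region, hence a better tie-point candidate."""
    h, w = image.shape
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            p = image[y:y + patch_size, x:x + patch_size].astype(float).ravel()
            p = (p - p.mean()) / (p.std() + 1e-9)       # zero mean, unit variance
            patches.append(p / np.sqrt(p.size))          # unit norm
            coords.append((x, y))
    patches = np.stack(patches)
    corr = patches @ patches.T                            # pairwise normalized correlation
    matches = (corr > match_threshold).sum(axis=1) - 1    # exclude the self-match
    order = np.argsort(matches)                           # most distinctive first
    return [coords[i] for i in order], matches[order]
```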
MIT AI Lab / LIDS Tie-point examples Using this metric, automatically determined tie-points correspond to the visual landmarks that a human observer would use. Here, only vehicles provide distinct landmarks. When present, roads and buildings provide useful landmarks as well.
MIT AI Lab / LIDS Coarse-to-fine alignment At fine resolutions the registration objective function has many local maxima, causing gradient-based techniques to be highly sensitive to initial “seeding” conditions. At coarser resolutions there are fewer local maxima; however, the global maximum tends to be less accurate.
MIT AI Lab / LIDS Coarse-to-Fine Registration In practice, actual data does tend to conform to our qualitative assumptions. Coarse resolutions lead to smooth but inaccurate objective surfaces, while fine resolutions are less smooth but more accurate.
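A sketch of the coarse-to-fine search for a purely translational registration, assuming same-size single-band images; the default objective is a negative sum-of-squared-differences placeholder, whereas the actual system scores candidate alignments with the flexible-histogram texture match at the tie-point regions.

```python
import numpy as np

def coarse_to_fine_shift(reference, moving, levels=4, search=4, objective=None):
    """Estimate an integer translation by searching a small neighborhood at the
    coarsest level and refining it at each finer level."""
    if objective is None:
        def objective(a, b, dx, dy):
            # Negative SSD over the overlapping area of 'a' and 'b' shifted by (dx, dy).
            h, w = a.shape
            ax, ay, bx, by = max(dx, 0), max(dy, 0), max(-dx, 0), max(-dy, 0)
            ow, oh = w - abs(dx), h - abs(dy)
            diff = a[ay:ay + oh, ax:ax + ow] - b[by:by + oh, bx:bx + ow]
            return -np.mean(diff ** 2)

    def shrink(img):
        img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]
        return (img[0::2, 0::2] + img[1::2, 1::2]) / 2.0   # crude 2x decimation

    ref_pyr, mov_pyr = [reference.astype(float)], [moving.astype(float)]
    for _ in range(levels - 1):
        ref_pyr.append(shrink(ref_pyr[-1]))
        mov_pyr.append(shrink(mov_pyr[-1]))

    dx = dy = 0
    for level in range(levels - 1, -1, -1):                 # coarse -> fine
        best, best_dxdy = -np.inf, (dx, dy)
        for ddy in range(-search, search + 1):              # local refinement only
            for ddx in range(-search, search + 1):
                cand = (dx + ddx, dy + ddy)
                score = objective(ref_pyr[level], mov_pyr[level], *cand)
                if score > best:
                    best, best_dxdy = score, cand
        dx, dy = best_dxdy
        if level > 0:
            dx, dy = dx * 2, dy * 2                          # carry to the next finer level
    return dx, dy
```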
MIT AI Lab / LIDS Measuring Visual Structure: Flexible Histogram I At each location in the tie-point region a parent vector V(x,y) is extracted; this vector consists of the multiresolution wavelet decomposition at that location, ordered from coarse to fine. By measuring the frequency with which locations with similar parent structures occur, a flexible histogram is extracted.
MIT AI Lab / LIDS Measuring Visual Structure: Flexible Histogram II The flexible histogram B(x,y) counts, for the parent structure at each location (x,y) of the tie-point region, how many locations in the test region have a similar parent structure. The registration objective function is the difference in visual structure between the tie-points and the corresponding regions; this is measured with the flexible histogram.
MIT AI Lab / LIDS Measuring Visual Structure: Flexible Histogram III A difference measure is acquired by comparing the histogram for the test region, measured with respect to the tie-point, to the histogram for the tie-point measured with respect to itself: B(x,y) counts matches of the tie-point parent structures within the tie-point region itself, B'(x,y) counts matches within the test region, and the difference is χ² = Σ (B − B')² / B.
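A sketch of this objective, assuming parent vectors have already been extracted for the tie-point and test regions (for instance with the parent-vector sketch earlier); array shapes and names are illustrative.

```python
import numpy as np

def flexible_histogram(ref_parents, query_parents, thresholds):
    """For every parent vector in 'ref_parents', count the locations in
    'query_parents' whose parent vector is within 'thresholds' component-wise.
    ref_parents: (N, P), query_parents: (M, P), thresholds: (P,) -> counts (N,)."""
    diffs = np.abs(ref_parents[:, None, :] - query_parents[None, :, :])   # (N, M, P)
    return np.all(diffs <= thresholds, axis=2).sum(axis=1).astype(float)

def chi_square_match(tiepoint_parents, test_parents, thresholds):
    """Registration objective from the slides: compare B (tie-point vs. itself)
    with B' (tie-point vs. test region) using chi^2 = sum (B - B')^2 / B."""
    B = flexible_histogram(tiepoint_parents, tiepoint_parents, thresholds)  # always >= 1
    B_prime = flexible_histogram(tiepoint_parents, test_parents, thresholds)
    return np.sum((B - B_prime) ** 2 / B)
```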
MIT AI Lab / LIDS Example Registration
MIT AI Lab / LIDS Example Registration
MIT AI Lab / LIDS Example Registration
MIT AI Lab / LIDS Statistical target discrimination A distribution is estimated from the model image I_MODEL; a likelihood estimator then scores the test image I_TEST against that distribution, yielding a likelihood / similarity measure. When compared against a threshold value, this measure provides a discrimination function; comparison against the likelihoods of distributions from other model images provides a classification mechanism.
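A minimal sketch of how the two uses differ, assuming a distance such as the chi-square computed by a routine like chi_square_match in the earlier sketch (passed in here as match_fn); the threshold and the list of model parent-vector sets are placeholders.

```python
import numpy as np

def discriminate(similarity, threshold):
    """Discrimination: declare the target present when the likelihood/similarity
    of the test image under the model distribution exceeds a threshold."""
    return similarity >= threshold

def classify(test_parents, model_parent_sets, thresholds, match_fn):
    """Classification: compare the test image against the distribution estimated
    from each model image and return the index of the best match (smallest
    distance, i.e. largest similarity).  match_fn is, e.g., chi_square_match."""
    distances = [match_fn(model_parents, test_parents, thresholds)
                 for model_parents in model_parent_sets]
    return int(np.argmin(distances))
```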
MIT AI Lab / LIDS Flexible histogram By measuring the frequency with which locations with similar parent vectors occur, a flexible histogram B(x,y) is extracted; here both the reference parent vectors and the counted locations are taken from the model image I_MODEL.
MIT AI Lab / LIDS Discrimination via histogram comparison The histogram for the test image, measured with respect to the model, is compared to the histogram for the model measured with respect to itself: B(x,y) counts matches within I_MODEL, B'(x,y) counts matches within I_TEST, and the difference is χ² = Σ (B − B')² / B.
MIT AI Lab / LIDS Flexible Histogram The frequency of locations in an image which have a parent structure whose components are each within a threshold of the parent vector at some location (x,y) in the model is given by B(x,y) = Σ over (s,t) in the image of Π_j [ |V_j(x,y) − V′_j(s,t)| < θ_j ], where V_j and V′_j are the j-th components of the model and image parent vectors and θ_j is the per-component matching threshold. A difference measure is calculated by taking the chi-square difference between each such frequency count in the model and test image, which approximates the Kullback-Leibler divergence. Similarity can be measured by simply negating the distance.
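For reference, the standard second-order expansion behind that approximation (textbook material, not from the slides): for a normalized test histogram p close to the model histogram q,

```latex
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_i p_i \log\frac{p_i}{q_i}
  \;\approx\; \frac{1}{2}\sum_i \frac{(p_i - q_i)^2}{q_i}
  \;=\; \tfrac{1}{2}\,\chi^2(p, q).
```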
MIT AI Lab / LIDS Models BMP2-C21 BTR70-C71 T72-132 • Models for target vehicles were generated from example images: • generated from vehicles with different serial numbers from the target vehicles • only 10 examples, evenly distributed in heading angle • measured at a depression angle of 17 degrees (targets were at 15 degrees)
MIT AI Lab / LIDS Target vehicles BMP2-9563 BMP2-9566 BTR70-C71 T72-812 T72-S7 • Five target vehicles were used. • Vehicles which differed from the target class were included as confusion targets. • There were roughly 200 images in each class.