Interest Points Detection CS485/685 Computer Vision Dr. George Bebis
Interest Points Local features associated with a significant change of an image property, or of several properties simultaneously (e.g., intensity, color, texture).
Why Extract Interest Points? • Corresponding points (or features) between images enable the estimation of parameters describing geometric transforms between the images.
What if we don’t know the correspondences? Need to compare feature descriptors of local patches surrounding interest points: featuredescriptor(patch 1) =? featuredescriptor(patch 2)
What if we don’t know the correspondences? (cont’d) Lots of possibilities (this is a popular research area). Simple option: match square windows around the point. State-of-the-art approach: SIFT (David Lowe, UBC, http://www.cs.ubc.ca/~lowe/keypoints/).
Invariance Features should be detected despite geometric or photometric changes in the image. Given two transformed versions of the same image, features should be detected in corresponding locations.
How to achieve invariance? 1. The detector must be invariant to geometric and photometric transformations. 2. The descriptors must be invariant (if matching the descriptions is required).
Applications • Image alignment • 3D reconstruction • Object recognition • Indexing and database retrieval • Object tracking • Robot navigation
Example: Object Recognition occlusion, clutter
Example: Panorama Stitching • How do we combine these two images?
Panorama stitching (cont’d) Step 1: extract features Step 2: match features
Panorama stitching (cont’d) Step 1: extract features Step 2: match features Step 3: align images
What features should we use? Use features with gradients in at least two (significantly) different orientations e.g., corners
What features should we use? (cont’d) (auto-correlation)
Corners • Corners are easier to localize than lines when considering the correspondence problem (aperture problem). A point on a line is hard to match; a corner is easier.
Characteristics of good features. Repeatability: the same feature can be found in several images despite geometric and photometric transformations. Saliency: each feature has a distinctive description. Compactness and efficiency: many fewer features than image pixels. Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion.
Main Steps in Corner Detection 1. For each pixel in the input image, the corner operator is applied to obtain a cornerness measure for this pixel. 2. Threshold the cornerness map to eliminate weak corners. 3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance. (A sketch of these steps follows below.)
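To make the pipeline concrete, here is a minimal Python/NumPy sketch of steps 2 and 3 (thresholding and non-maximal suppression), assuming a cornerness map has already been produced by some corner operator in step 1. The function name, threshold, and neighborhood radius are illustrative choices, not part of the original slides.

```python
import numpy as np

def detect_corners(cornerness_map, threshold, nms_radius=2):
    """Generic corner detection: threshold a cornerness map, then apply
    non-maximal suppression within a (2*nms_radius+1) neighborhood."""
    h, w = cornerness_map.shape
    corners = []
    for y in range(h):
        for x in range(w):
            c = cornerness_map[y, x]
            if c < threshold:                  # step 2: discard weak responses
                continue
            y0, y1 = max(0, y - nms_radius), min(h, y + nms_radius + 1)
            x0, x1 = max(0, x - nms_radius), min(w, x + nms_radius + 1)
            if c >= cornerness_map[y0:y1, x0:x1].max():   # step 3: keep local maxima
                corners.append((x, y))
    return corners
```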
Corner Types Example of L-junction, Y-junction, T-junction, Arrow-junction, and X-junction corner types
Corner Detection Methods • Contour based • Extract contours and search for maximal curvature or inflexion points along the contour. • Intensity based • Compute a measure that indicates the presence of an interest point directly from gray (or color) values. • Parametric model based • Fit parametric intensity model to the image. • Can provide sub-pixel accuracy but are limited to specific types of interest points (e.g., L-corners).
A contour-based approach: Curvature Scale Space • Assumes the object has been segmented • Parametric contour representation: (x(t), y(t)) • g(t,σ): Gaussian kernel used to smooth the contour; curvature is then computed at each scale σ (see the formulation below).
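The curvature formulas on this slide did not survive extraction. For reference, the standard curvature-scale-space formulation (an assumption, not necessarily the slide's exact notation) smooths the contour coordinates with a 1-D Gaussian g(t,σ) and computes curvature at each scale:

```latex
X(t,\sigma) = x(t) \ast g(t,\sigma), \qquad Y(t,\sigma) = y(t) \ast g(t,\sigma)
\kappa(t,\sigma) =
  \frac{X_t(t,\sigma)\, Y_{tt}(t,\sigma) - Y_t(t,\sigma)\, X_{tt}(t,\sigma)}
       {\left( X_t(t,\sigma)^2 + Y_t(t,\sigma)^2 \right)^{3/2}}
```

Corners then correspond to local maxima of |κ(t,σ)| along the contour (inflection points to its zero crossings), examined across scales σ.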
Curvature Scale Space (cont’d) G. Bebis, G. Papadourakis and S. Orphanoudakis, "Curvature Scale Space Driven Object Recognition with an Indexing Scheme based on Artificial Neural Networks", Pattern Recognition, Vol. 32, No. 7, pp. 1175-1201, 1999.
A parametric model approach: Zuniga-Haralick Detector • Approximate the image function in the neighborhood of pixel (i,j) by a cubic polynomial (use SVD to find the coefficients). • A measure of “cornerness” is then computed from the fitted coefficients (see the sketch below).
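The cornerness formula itself did not survive extraction, but the fitting step the slide describes can be sketched as follows. This is an illustration only: the function name is hypothetical, and NumPy's SVD-based least-squares solver stands in for the "use SVD" step; the Zuniga-Haralick cornerness is a curvature-type measure computed from the resulting coefficients.

```python
import numpy as np

def fit_cubic_patch(patch):
    """Least-squares fit of a cubic polynomial
    f(x, y) = sum_k c_k * b_k(x, y) over a small odd-sized neighborhood
    centered on the pixel of interest, solved via an SVD-based solver."""
    r = patch.shape[0] // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    x = xs.ravel().astype(float)
    y = ys.ravel().astype(float)
    f = patch.ravel().astype(float)
    # Basis for a cubic polynomial in x and y (10 terms).
    B = np.stack([np.ones_like(x), x, y,
                  x**2, x * y, y**2,
                  x**3, x**2 * y, x * y**2, y**3], axis=1)
    coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)   # SVD-based least squares
    return coeffs   # c1..c10; the cornerness measure is derived from these
```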
Corner Detection Using Edge Detection? • Edge detectors are not stable at corners. • Gradient is ambiguous at corner tip. • Discontinuity of gradient direction near corner.
Corner Detection Using Intensity: Basic Idea The image gradient has two or more dominant directions near a corner. Shifting a window in any direction should give a large change in intensity. “Flat” region: no change in all directions. “Edge”: no change along the edge direction. “Corner”: significant change in all directions.
Moravec Detector (1977) • Measure the intensity variation at (x,y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and along the four diagonals).
Moravec Detector (1977) • Calculate the intensity variation as the sum of squared differences between corresponding pixels of the original window and the shifted window: SW(Δx,Δy) = Σ_{(x,y)∈W} [I(x+Δx, y+Δy) − I(x,y)]², for the 8 shifts (Δx,Δy) ∈ {−1,0,1}², (Δx,Δy) ≠ (0,0): SW(−1,−1), SW(−1,0), …, SW(1,1).
Moravec Detector (cont’d) • The “cornerness” of a pixel is the minimum intensity variation found over the eight shift directions: Cornerness(x,y) = min{SW(-1,-1), SW(-1,0), ...SW(1,1)} Cornerness Map (normalized) Note response to isolated points!
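A minimal Python/NumPy sketch of the Moravec cornerness map, following the SW(Δx,Δy) and min-over-shifts definitions above; the default window size and the border handling are my own assumptions.

```python
import numpy as np

def moravec_cornerness(I, window=3):
    """Moravec cornerness map: for each pixel, the minimum over the eight
    one-pixel shifts of the sum of squared differences between the window
    around the pixel and the shifted window."""
    I = I.astype(float)
    r = window // 2
    h, w = I.shape
    shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    C = np.zeros_like(I)
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            win = I[y - r:y + r + 1, x - r:x + r + 1]
            sw = [np.sum((I[y + dy - r:y + dy + r + 1,
                            x + dx - r:x + dx + r + 1] - win) ** 2)
                  for dx, dy in shifts]
            C[y, x] = min(sw)   # cornerness = minimum intensity variation
    return C
```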
Moravec Detector (cont’d) • Non-maximal suppression will yield the final corners.
Moravec Detector (cont’d) • Does a reasonable job of finding the majority of true corners. • Edge points not aligned with one of the eight principal directions are assigned a relatively large cornerness value.
Moravec Detector (cont’d) • The response is anisotropic as the intensity variation is only calculated at a discrete set of shifts (i.e., not rotationally invariant)
Harris Detector • Improves the Moravec operator by avoiding the use of discrete directions and discrete shifts. • Uses a Gaussian window instead of a square window. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.
Harris Detector (cont’d) • Using a first-order Taylor expansion (reminder: Taylor expansion of I about (x,y)): I(x+Δx, y+Δy) ≈ I(x,y) + I_x(x,y)Δx + I_y(x,y)Δy
Harris Detector (cont’d) • Since I(x+Δx, y+Δy) − I(x,y) ≈ I_xΔx + I_yΔy, the intensity variation becomes: SW(Δx,Δy) ≈ Σ_{(x,y)∈W} (I_xΔx + I_yΔy)² = [Δx Δy] A_W(x,y) [Δx Δy]ᵀ
Harris Detector (cont’d) A_W(x,y) = [ ΣI_x²  ΣI_xI_y ; ΣI_xI_y  ΣI_y² ] (sums over the window W) – a 2x2 matrix, known as the auto-correlation or second moment matrix.
Auto-correlation matrix A_W describes the gradient distribution (i.e., the local structure) inside the window. It does not depend on the shift (Δx, Δy).
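To illustrate how A_W summarizes the local gradient distribution, here is a small Python/NumPy sketch; the function name, the simple finite-difference gradients, and the uniform window are illustrative assumptions. For a flat patch both eigenvalues are near zero, for a step edge one eigenvalue dominates, and for a corner both are sizeable.

```python
import numpy as np

def second_moment_matrix(patch):
    """Auto-correlation (second moment) matrix of an image patch, using
    central-difference gradients and a uniform window."""
    I = patch.astype(float)
    Iy, Ix = np.gradient(I)          # derivatives along rows (y) and columns (x)
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

# Eigenvalues for three synthetic 9x9 patches.
flat   = np.ones((9, 9))                                       # flat: both ~ 0
edge   = np.tile((np.arange(9) > 4) * 1.0, (9, 1))             # vertical edge: one >> other
corner = np.outer(np.arange(9) > 4, np.arange(9) > 4) * 1.0    # corner: both sizeable
for name, p in [("flat", flat), ("edge", edge), ("corner", corner)]:
    print(name, np.round(np.linalg.eigvalsh(second_moment_matrix(p)), 2))
```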
Harris Detector (cont’d) • General case – weight the intensity differences with a window function w(x,y), so the sums in SW and A_W are weighted by w(x,y). Default window function w(x,y): 1 inside the window, 0 outside.
Harris Detector (cont’d) • Harris uses a Gaussian window: w(x,y) = G(x,y,σI), where σI is called the “integration” scale.
Harris Detector (cont’d) Since A_W is symmetric, it can be diagonalized: A_W = R⁻¹ [λ₁ 0; 0 λ₂] R. We can visualize A_W as an ellipse with axis lengths determined by the eigenvalues, (λ_max)^(-1/2) along the direction of the fastest intensity change and (λ_min)^(-1/2) along the direction of the slowest change, and orientation determined by R. Ellipse equation: [Δx Δy] A_W [Δx Δy]ᵀ = const.
Harris Detector (cont’d) • Eigenvectors encode the edge directions (the directions of fastest and slowest intensity change). • Eigenvalues encode the edge strength along those directions.
Harris Detector (cont’d) Classification of image points using the eigenvalues of A_W: • “Flat” region: λ₁ and λ₂ are small; SW is almost constant in all directions. • “Edge”: λ₁ >> λ₂ (or λ₂ >> λ₁). • “Corner”: λ₁ and λ₂ are both large and λ₁ ~ λ₂; SW increases in all directions.
Harris Detector (cont’d) • A simple cornerness measure is the smaller eigenvalue, λ₂ (assuming that λ₁ > λ₂).
Harris Detector (cont’d) • To avoid explicit eigenvalue computation, the following response function is used instead: R(A) = det(A) − k·trace²(A) • Since det(A) = λ₁λ₂ and trace(A) = λ₁ + λ₂ for the symmetric 2x2 matrix A, this is equivalent to: R(A) = λ₁λ₂ − k(λ₁ + λ₂)²
Harris Detector (cont’d) R(A) = det(A) − k·trace²(A), where k is a constant, usually between 0.04 and 0.06. • “Corner”: R > 0 (both eigenvalues large). • “Edge”: R < 0 (one eigenvalue much larger than the other). • “Flat” region: |R| is small.
Harris Detector (cont’d) • The image derivatives I_x and I_y are computed at scale σD, i.e., on a Gaussian-smoothed version of the image; σD is called the “differentiation” scale (while the window G(σI) provides the “integration” scale).
Harris Detector - Example Compute corner response R
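A compact end-to-end sketch of the Harris response computation in Python (using SciPy's Gaussian filtering), with the derivatives taken at the differentiation scale σD and the squared-gradient products averaged with a Gaussian window at the integration scale σI. The parameter values and function name are illustrative, not taken from the slides; thresholding and non-maximal suppression (as in the earlier pipeline sketch) would then yield the final corners.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(I, sigma_d=1.0, sigma_i=2.0, k=0.05):
    """Harris corner response R = det(A_W) - k * trace(A_W)^2 per pixel."""
    I = I.astype(float)
    # Gaussian derivatives at the differentiation scale sigma_d.
    Ix = gaussian_filter(I, sigma_d, order=(0, 1))   # derivative along x (columns)
    Iy = gaussian_filter(I, sigma_d, order=(1, 0))   # derivative along y (rows)
    # Entries of A_W, smoothed by the Gaussian integration window sigma_i.
    Ixx = gaussian_filter(Ix * Ix, sigma_i)
    Iyy = gaussian_filter(Iy * Iy, sigma_i)
    Ixy = gaussian_filter(Ix * Iy, sigma_i)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - k * trace ** 2
```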