
Introduction to Local Feature Detection in Computer Vision

This lecture provides an introduction to local feature detection techniques such as corner detection and blob detection. It discusses the properties of local features and their applications in image search, recognition, 3D reconstruction, and tracking. The lecture also covers the process of keypoint matching for search and panorama stitching.



Presentation Transcript


  1. CS 1674: Intro to Computer Vision: Local Feature Detection. Prof. Adriana Kovashka, University of Pittsburgh. September 19, 2016

  2. Announcements • HW1P, HW2W graded

  3. Notes on HW2W • Graded with some comments on the submission itself; please read my feedback • If I didn’t respond to you on the last 2 questions, my silence does not mean you’re correct • Try to be concise, without failing to give me the answer • If you have 12 or fewer, you should be concerned • The book is often a useful resource • Laplacian vs. Sobel (hw2w.m)

  4. Note on assignments • Lots of questions about “is it ok if I do this” • You don’t have to worry so much about doing exactly what I said, as long as you do the core task of the assignment! • After HW1P, it doesn’t matter how you print, or whether you put multiple figures in one, etc.

  5. HW2P post-mortem • How long did it take? • What took the most time? • What was most fun? • What was most annoying? • Enter Socrative

  6. Separability example • 2D filtering (center location only): filter * image • The filter factors into an outer product of 1D filters • Perform filtering along rows, followed by filtering along the remaining column Kristen Grauman
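The separability idea above can be sketched numerically (a minimal sketch assuming a NumPy environment; the image and 1D smoothing filter are illustrative): filtering along rows and then along the remaining column reproduces filtering with the full 2D outer-product kernel.

```python
import numpy as np

def corr2d_valid(img, kernel):
    """Naive 2D 'valid' cross-correlation, used here only for comparison."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))

k1 = np.array([1.0, 2.0, 1.0]) / 4.0  # symmetric 1D smoothing filter
k2d = np.outer(k1, k1)                # the 2D kernel factors as an outer product

direct = corr2d_valid(img, k2d)

# Separable route: filter along rows, then along the remaining column direction.
# (k1 is symmetric, so convolution and correlation coincide.)
rows = np.apply_along_axis(lambda r: np.convolve(r, k1, mode="valid"), 1, img)
sep = np.apply_along_axis(lambda c: np.convolve(c, k1, mode="valid"), 0, rows)

print(np.allclose(direct, sep))  # the two routes agree
```

For an N×N kernel this replaces N² multiplies per pixel with 2N, which is why separability matters in practice.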

  7. Plan for today • Feature detection / keypoint extraction • Corner detection • Blob detection • Start: Feature description (of detected features)

  8. Problems with pixel representation • Not invariant to small changes • Translation • Illumination • etc. • Some parts of an image are more important than others • What do we want to represent?

  9. Local features • Local means that they only cover a small part of the image • There will be many local features detected in an image • Later we’ll talk about how to use those to compute a representation of the whole image • Local features usually exploit image gradients, rarely color (we’ll revisit this statement with CNNs)

  10. Local features: desired properties • Locality • A feature occupies a relatively small area of the image; robust to clutter and occlusion • Repeatability and flexibility • The same feature can be found in several images despite geometric, photometric transformations • Robustness to expected variations • Maximize correct matches • Distinctiveness • Each feature has a distinctive description • Minimize wrong matches • Compactness and efficiency • Many fewer features than image pixels Adapted from K. Grauman and D. Hoiem

  11. Interest(ing) points • Note: “interest points” = “keypoints”, also sometimes called “features” • Many applications • Image search: which points would allow us to match images between query and database? • Recognition: which patches are likely to tell us something about object category? • 3D reconstruction: how to find correspondences across different views? • Tracking: which points are good to track? Adapted from D. Hoiem

  12. Interest points • Suppose you have to click on some point, go away, and come back after I deform the image, then click on the same points again • Which points would you choose? (Figure: original vs. deformed image.) D. Hoiem

  13. Choosing interest points Where would you tell your friend to meet you?  Corner detection D. Hoiem

  14. Choosing interest points Where would you tell your friend to meet you?  Blob detection D. Hoiem

  15. Application 1: Keypoint Matching for Search • 1. Find a set of distinctive keypoints • 2. Define a region around each keypoint (window) • 3. Compute a local descriptor from the region • 4. Match descriptors (Figure: patches A1–A3 in the query matched against B1–B3 in the database.) Adapted from K. Grauman, B. Leibe
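Step 4 of the pipeline above can be sketched as nearest-neighbor matching on descriptor vectors (a minimal sketch assuming NumPy; the toy 2D descriptors stand in for the output of steps 1–3, and real systems use richer descriptors):

```python
import numpy as np

def match_descriptors(desc_query, desc_db):
    """Match each query descriptor (row) to its nearest database
    descriptor by Euclidean distance."""
    # Pairwise squared distances, shape (n_query, n_db).
    d2 = ((desc_query[:, None, :] - desc_db[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy descriptors standing in for detected, windowed, described keypoints.
query = np.array([[1.0, 0.0], [0.0, 1.0]])
db = np.array([[0.0, 0.9], [0.0, -1.0], [1.1, 0.1]])

print(match_descriptors(query, db))  # [2 0]
```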

  16. Application 1: Keypoint Matching for Search • Goal: we want to detect points that are repeatable and distinctive • Repeatable: so that if images are slightly different, we can still retrieve them • Distinctive: so we don’t retrieve irrelevant content Adapted from D. Hoiem

  17. Application 2: Panorama stitching • We have two images – how do we combine them? L. Lazebnik

  18. Application 2: Panorama stitching • We have two images – how do we combine them? Step 1: extract features Step 2: match features L. Lazebnik

  19. Application 2: Panorama stitching • We have two images – how do we combine them? Step 1: extract features Step 2: match features Step 3: align images L. Lazebnik

  20. Application 2: Panorama stitching • No chance to find true matches, yet we have to be able to run the detection procedure independently per image. • We want to detect (at least some of) the same points in both images  want repeatability of the interest operator Adapted from K. Grauman

  21. Application 2: Panorama stitching • We want to be able to reliably determine which point goes with which  want operator distinctiveness • Must provide some invariance to geometric and photometric differences between the two views, without finding many false matches ? Adapted from K. Grauman

  22. Corners as distinctive interest points • We should easily recognize the keypoint by looking through a small window • Shifting a window in any direction should give a large change in intensity • “flat” region: no change in all directions • “edge”: no change along the edge direction • “corner”: significant change in all directions A. Efros, D. Frolova, D. Simakov


  27. What points would you choose? (Candidate points 1–5 marked in the figure.) K. Grauman

  28. Harris Detector: Mathematics • Window-averaged squared change of intensity induced by shifting the image data by [u,v]: E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]² • w(x,y) is the window function: either 1 in the window and 0 outside, or a Gaussian • I(x+u, y+v) is the shifted intensity, I(x,y) the original intensity D. Frolova, D. Simakov

  29. Harris Detector: Mathematics • Window-averaged squared change of intensity induced by shifting the image data by [u,v]: E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]² (Figure: three E(u,v) surfaces plotted over (u,v), for a flat, an edge, and a corner window.) D. Frolova, D. Simakov
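The error surface E(u,v) can be computed by brute force exactly as defined (a sketch assuming NumPy and the uniform window w = 1; the function name and the toy corner image are illustrative):

```python
import numpy as np

def error_surface(img, x, y, half=1, max_shift=2):
    """E(u, v): window-averaged squared intensity change when the
    image data is shifted by [u, v], with a uniform window w = 1."""
    img = img.astype(float)
    patch = img[y - half:y + half + 1, x - half:x + half + 1]
    shifts = range(-max_shift, max_shift + 1)
    E = np.zeros((len(shifts), len(shifts)))
    for iu, u in enumerate(shifts):
        for iv, v in enumerate(shifts):
            moved = img[y + v - half:y + v + half + 1,
                        x + u - half:x + u + half + 1]
            E[iv, iu] = ((moved - patch) ** 2).sum()
    return E

# A corner: E grows for shifts in every direction away from (u, v) = (0, 0).
img = np.zeros((11, 11))
img[5:, 5:] = 1.0
E = error_surface(img, x=5, y=5)
print(E[2, 2])  # 0.0: no change at the zero shift
```

At a flat window every entry of E would be (near) zero; along an edge, E stays small only along the edge direction.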

  30. Harris Detector: Mathematics • Expanding I(x,y) in a Taylor series, we have, for small shifts [u,v], a quadratic approximation to the error surface between a patch and itself, shifted by [u,v]: E(u,v) ≈ [u v] M [u v]ᵀ • where M is a 2×2 matrix computed from image derivatives: M = Σ_{x,y} w(x,y) [Ix² IxIy; IxIy Iy²] D. Frolova, D. Simakov

  31. Harris Detector: Mathematics • Notation: Ix = ∂I/∂x, Iy = ∂I/∂y, IxIy = (∂I/∂x)(∂I/∂y) K. Grauman

  32. What does the matrix M reveal? • Since M is symmetric, we can diagonalize it: M = R⁻¹ diag(λ1, λ2) R • The eigenvalues λ1, λ2 of M reveal the amount of intensity change in the two principal orthogonal gradient directions in the window. K. Grauman

  33. Corner response function • “flat” region: λ1 and λ2 are small • “edge”: λ1 >> λ2 (or λ2 >> λ1) • “corner”: λ1 and λ2 are large, λ1 ~ λ2 Adapted from A. Efros, D. Frolova, D. Simakov, K. Grauman
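This eigenvalue reasoning can be sketched directly (assuming NumPy; the toy gradient windows and the threshold t are illustrative choices, not values from the slides):

```python
import numpy as np

def second_moment_matrix(Ix, Iy):
    """M from image derivatives over a window, with uniform weights."""
    return np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                     [(Ix * Iy).sum(), (Iy * Iy).sum()]])

def classify_window(Ix, Iy, t=1.0):
    """Label a window as flat / edge / corner from the eigenvalues of M."""
    l2, l1 = np.linalg.eigvalsh(second_moment_matrix(Ix, Iy))  # l1 >= l2
    if l1 < t:
        return "flat"    # lambda1 and lambda2 are small
    if l2 < t:
        return "edge"    # lambda1 >> lambda2
    return "corner"      # both large, lambda1 ~ lambda2

# Toy gradient windows:
flat = np.zeros((5, 5))
print(classify_window(flat, flat))                       # flat
print(classify_window(np.ones((5, 5)), flat))            # edge: only Ix varies
print(classify_window(np.eye(5), np.fliplr(np.eye(5))))  # corner
```

`eigvalsh` is used because M is symmetric, so its eigenvalues are guaranteed real.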

  34. Harris Detector: Mathematics • Measure of corner response: R = det(M) − k (trace M)² = λ1 λ2 − k (λ1 + λ2)² • (k: empirical constant, k = 0.04–0.06) D. Frolova, D. Simakov

  35. Harris Detector: Algorithm • Compute image gradients Ix and Iy for all pixels • For each pixel, compute M by looping over its window of neighbors x, y and summing the products of gradients (Ix², Iy², IxIy) • Compute the corner response R = det(M) − k (trace M)² • Find points with large corner response function R (R > threshold) • Take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than for all the 4 or 8 neighbors) • (k: empirical constant, k = 0.04–0.06) D. Frolova, D. Simakov
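The algorithm above can be sketched end to end (a minimal NumPy implementation; the central-difference gradient, 3×3 window, and threshold are illustrative choices, with k = 0.04 taken from the empirical range on the slide):

```python
import numpy as np

def harris(img, k=0.04, half=1, thresh=0.1):
    """Harris corners: gradients -> M per pixel -> response R
    -> threshold -> local maxima over the 8-neighborhood."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # simple central differences
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = img.shape
    R = np.zeros((H, W))
    for y in range(half, H - half):
        for x in range(half, W - half):
            # Entries of M, summed over the (2*half+1)^2 window.
            a = Ixx[y - half:y + half + 1, x - half:x + half + 1].sum()
            b = Ixy[y - half:y + half + 1, x - half:x + half + 1].sum()
            c = Iyy[y - half:y + half + 1, x - half:x + half + 1].sum()
            R[y, x] = (a * c - b * b) - k * (a + c) ** 2  # det(M) - k*trace(M)^2
    corners = [(x, y)
               for y in range(1, H - 1) for x in range(1, W - 1)
               if R[y, x] > thresh and R[y, x] == R[y - 1:y + 2, x - 1:x + 2].max()]
    return R, corners

# A bright square on a dark background: positive R at its corners,
# negative R along its edges, zero R in flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R, corners = harris(img)
print(R[5, 5] > 0, R[5, 10] < 0, R[10, 10] == 0)  # True True True
```

Real implementations typically smooth with a Gaussian window and use Sobel gradients, but the structure (gradients, M, R, threshold, non-maximum suppression) is the same.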

  36. Example of Harris application • (Candidate points 1–5 marked in the figure.) K. Grauman

  37. Example of Harris application • Corner response at every pixel K. Grauman

  38. More Harris responses Effect: A very precise corner detector. D. Hoiem

  39. More Harris responses D. Hoiem
