CS 636 Computer Vision Line Features Nathan Jacobs
review • What properties should feature points have? • discriminative • easy to localize • robust to viewpoint/photometric changes • What local properties can you use to find them? • all essentially the same thing: the second moment matrix (structure tensor)
overview • traditional edge detection • where do they come from? • depth and texture • classifying edges • edge algorithms • extended topics • straight line edges • learned edge detection • live coding
Edge Detection • Convert a 2D image into a set of curves • Extracts salient features of the scene • More compact than pixels S. Narasimhan
Origin of Edges • Edges are caused by a variety of factors: • surface normal discontinuity • depth discontinuity • surface color discontinuity • illumination discontinuity
Edge Types • Step edges • Line edges • Roof edges
Real Edges • Real edges are noisy and discrete! • We want an edge operator that produces: • edge magnitude • edge orientation • high detection rate and good localization
Gradient • Gradient: ∇I = (∂I/∂x, ∂I/∂y) • Represents the direction of most rapid change in intensity • Gradient direction: θ = atan2(∂I/∂y, ∂I/∂x) • The edge strength is given by the gradient magnitude: ‖∇I‖ = sqrt((∂I/∂x)² + (∂I/∂y)²)
Theory of Edge Detection • Ideal edge: modeled as a unit step function • Image intensity (brightness): a step across the edge location
Theory of Edge Detection • Image intensity (brightness): • Partial derivatives (gradients): • Squared gradient: • Edge magnitude: • Edge orientation (normal of the edge): • A rotationally symmetric, non-linear operator
Theory of Edge Detection • Image intensity (brightness): • Partial derivatives (gradients): • Laplacian: a rotationally symmetric, linear operator • Edges correspond to zero-crossings of the Laplacian
Discrete Edge Operators • How can we differentiate a discrete image? • Finite difference approximations • Convolution masks
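A minimal NumPy/SciPy sketch of the finite-difference masks (the image here is a random placeholder; in practice load a grayscale image as a float array):

```python
import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(64, 64)       # placeholder grayscale image

# Forward-difference convolution masks for the two partial derivatives
kx = np.array([[-1.0, 1.0]])       # approximates dI/dx
ky = np.array([[-1.0], [1.0]])     # approximates dI/dy

Ix = convolve2d(img, kx, mode='same', boundary='symm')
Iy = convolve2d(img, ky, mode='same', boundary='symm')
# (convolution flips the mask, so the sign convention differs from correlation;
#  the gradient magnitude is unaffected)
```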
Discrete Edge Operators • Second-order partial derivatives: • Laplacian: • Convolution masks: a standard mask, or a more accurate alternative
The Sobel Operators • Better approximations of the gradients exist • The Sobel operators below are commonly used
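One way to apply the 3×3 Sobel masks with SciPy and recover edge magnitude and orientation; the function name and boundary handling here are choices of this sketch, not a fixed convention:

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(img):
    """Gradient magnitude and orientation from the Sobel masks."""
    gx = convolve2d(img, SOBEL_X, mode='same', boundary='symm')
    gy = convolve2d(img, SOBEL_Y, mode='same', boundary='symm')
    magnitude = np.hypot(gx, gy)        # edge strength
    orientation = np.arctan2(gy, gx)    # direction of most rapid change (radians)
    return magnitude, orientation
```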
Comparing Edge Operators • Gradient masks: Roberts (2 × 2), Sobel (3 × 3), Sobel (5 × 5) • Small masks (Roberts 2 × 2): good localization, noise sensitive, poor detection • Large masks (Sobel 5 × 5): poor localization, less noise sensitive, good detection
Effects of Noise • Consider a single row or column of the image • Plotting intensity as a function of position gives a signal • Where is the edge?
Solution: Smooth First • Smooth the signal with a Gaussian, then look for peaks in the derivative of the smoothed signal • Where is the edge?
Derivative Theorem of Convolution • d/dx (f * g) = f * (d/dx g) • Convolving directly with the derivative of the Gaussian saves us one operation.
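A small 1D illustration of the theorem, assuming SciPy's gaussian_filter1d (the signal and σ are made up for this sketch):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 1, 500)
f = (x > 0.5).astype(float) + 0.05 * np.random.randn(x.size)  # noisy step edge
sigma = 5.0   # smoothing scale, in samples

# Two passes: smooth with a Gaussian, then differentiate
deriv_two_step = np.gradient(gaussian_filter1d(f, sigma))

# One pass: convolve directly with the derivative of the Gaussian
deriv_one_step = gaussian_filter1d(f, sigma, order=1)

# Both responses have an extremum at the edge (x ≈ 0.5);
# the one-pass version saves an operation, as the theorem promises.
```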
Laplacian of Gaussian (LoG) • Convolve the signal with the Laplacian of Gaussian operator • Where is the edge? Zero-crossings of the LoG response!
2D Gaussian Edge Operators • Gaussian • Derivative of Gaussian (DoG) • Laplacian of Gaussian, also called the Mexican Hat (Sombrero) • ∇² is the Laplacian operator
Canny Edge Operator • Smooth image I with a 2D Gaussian: G * I • Find local edge normal directions for each pixel • Compute edge magnitudes • Locate edges by finding zero-crossings along the edge normal directions (non-maximum suppression)
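OpenCV bundles these steps; a minimal usage sketch (the file name, kernel size, σ, and thresholds are placeholders to tune):

```python
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)   # placeholder path

# 1. smooth with a 2D Gaussian
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# 2-4. cv2.Canny computes gradients, applies non-maximum suppression,
#      and links edges with hysteresis thresholding
edges = cv2.Canny(blurred, 50, 150)
```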
Non-maximum Suppression • Check if pixel is local maximum along gradient direction • requires checking interpolated pixels p and r
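A simplified sketch of non-maximum suppression; instead of interpolating the neighbors p and r exactly, this version quantizes the gradient direction to the nearest 45°, a common approximation:

```python
import numpy as np

def non_max_suppression(magnitude, orientation):
    """Keep a pixel only if it is a local maximum along its gradient direction
    (direction quantized to 0, 45, 90, or 135 degrees)."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(orientation) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # ~horizontal gradient: compare left/right
                p, r = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                   # ~45 degrees
                p, r = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                  # ~vertical gradient: compare up/down
                p, r = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                            # ~135 degrees
                p, r = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= p and magnitude[i, j] >= r:
                out[i, j] = magnitude[i, j]
    return out
```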
Edge Thresholding • Standard thresholding: • can only select “strong” edges • does not guarantee “continuity” • Hysteresis-based thresholding (use two thresholds) • Example: for “maybe” edges, declare an edge if a neighboring pixel is a strong edge.
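One possible implementation of hysteresis thresholding, via connected components: keep every “maybe” pixel that is 8-connected to at least one strong pixel. The two thresholds are parameters you pick for your data:

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(magnitude, low, high):
    """Two-threshold edge linking (assumes low < high)."""
    strong = magnitude > high
    weak = magnitude > low                      # includes the strong pixels
    # Label 8-connected components of the weak mask ...
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    # ... and keep every component that touches at least one strong pixel
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                             # background label
    return keep[labels]
```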
The Canny Edge Detector original image (Lena)
The Canny Edge Detector magnitude of the gradient
The Canny Edge Detector After non-maximum suppression
Canny Edge Operator • (figure: original image, and Canny output at two values of σ) • The choice of σ depends on desired behavior • large σ detects large-scale edges • small σ detects fine features
The effect of scale on edge detection • (figure: edge maps at increasingly larger σ) • Scale space (Witkin 83)
Difference of Gaussians (DoG) • The Laplacian of Gaussian can be approximated by the difference between two Gaussians of different widths
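A quick numerical check of the approximation with SciPy; σ and the ratio k (1.6 is the commonly quoted choice) are assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

img = np.random.rand(128, 128)   # placeholder grayscale image

sigma, k = 2.0, 1.6
log = gaussian_laplace(img, sigma)
dog = gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)

# Up to a scale factor, 'dog' approximates 'log'; edges appear as
# zero-crossings in both responses.
```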
DoG Edge Detection • (figure: panels (a) and (b), and their difference (b) − (a))
recap so far… • simple, bottom-up edge detection • What else could we do? • find lines • other low-level cues • How could we incorporate top-down information?
finding lines • Hough Transform or RANSAC
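A hedged sketch of line finding with OpenCV's probabilistic Hough transform on a Canny edge map (the file name and all thresholds are placeholders to tune):

```python
import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)   # placeholder path
edges = cv2.Canny(img, 50, 150)

# Accumulate votes in (rho, theta) space and return supported line segments
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), 255, 1)      # draw detected segments
```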
Supervised Learning of Edges and Object Boundaries, CVPR 2006 • Piotr Dollár, Zhuowen Tu, Serge Belongie
Outline • I. Motivation • II. Problem formulation • III. Learning architecture (BEL) • IV. Results
Outline • I. Motivation • Why edges? • Why not edges? • Why learning? • II. Problem formulation • III. Learning architecture (BEL) • IV. Results
Why edges? • Reduce dimensionality of data • Preserve content information • Useful in applications such as: • object detection • structure from motion • tracking
Why not edges? • In practice, edge maps are often not that useful. Why? • Difficulties: • Modeling assumptions • Parameters • Multiple sources of information (brightness, color, texture, …) • Real-world conditions • Is edge detection even well defined?
Canny edge detection • 1. smooth • 2. gradient • 3. thresh, suppress, link • Canny is optimal w.r.t. some model.
Canny edge detection 1. smooth 2. gradient 3. thresh, suppress, link And yet…
Canny difficulties • Modeling assumptions • Step edges, junctions, etc. • Parameters • Scales, threshold, etc. • Multiple sources of information • Only handles brightness • Real world conditions • Gaussian iid noise? Texture…
Modern methods • Modeling assumptions • Complex models, computationally prohibitive • Parameters • Many, may use learning to help tune • Multiple sources of information • Typically brightness, color, and texture cues • Real world conditions • Aimed at real images
Modern methods (Pb) • Pb (probability of boundary) – Martin et al., PAMI 2004
Why learning? • Modeling assumptions • minimal • Parameters • none • Multiple sources of information • Automatically incorporated • Real world conditions • training data
Outline • I. Motivation • II. Problem formulation • III. Learning architecture (BEL) • IV. Results
Problem formulation (general) • Let: • I — an image • W — a scene interpretation that can include the spatial location and extent of objects, regions, object boundaries, curves, etc. • S_W — a 0/1 function that encodes the spatial extent of one component of W • Obtaining the optimal or most likely W or S_W can be difficult. • Instead, we seek to learn the distribution p(S_W(c) = 1 | I) directly from image data. • To further reduce complexity, we can discard the absolute coordinates of S: p(S_W(c) = 1 | I) ≈ p(S_W(c) = 1 | N(c)), where N(c) is the neighborhood of I centered at c.
Problem formulation (edges) • W: an image segmentation • S_W: 1 on boundaries of segments, 0 elsewhere
Discriminative framework • Goal: learn from human-labeled images • Given an image I and n interpretations W obtained by manual annotation, we can compute the empirical distribution of boundary labels • Sample positive and negative patches according to that distribution • Finally, train a classifier! (see the sketch below)
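A much-simplified stand-in for this pipeline (BEL itself uses boosting over a large pool of patch features; here raw pixels and a generic scikit-learn classifier are used purely for illustration, and the function names, patch size, and toy data are all assumptions of this sketch):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def sample_patches(image, boundary, patch=15, n_per_class=1000, seed=0):
    """Sample patches centered on boundary pixels (label 1) and
    non-boundary pixels (label 0); 'boundary' plays the role of S_W."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    X, y = [], []
    for label in (1, 0):
        rows, cols = np.nonzero(boundary == label)
        ok = ((rows >= half) & (rows < image.shape[0] - half) &
              (cols >= half) & (cols < image.shape[1] - half))
        rows, cols = rows[ok], cols[ok]
        idx = rng.choice(rows.size, size=min(n_per_class, rows.size), replace=False)
        for r, c in zip(rows[idx], cols[idx]):
            X.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
            y.append(label)
    return np.array(X), np.array(y)

# Toy training pair: one image and a hypothetical human-labeled boundary map
image = np.random.rand(200, 200)
boundary = np.zeros_like(image)
boundary[100, :] = 1                     # toy horizontal boundary

X, y = sample_patches(image, boundary)
clf = GradientBoostingClassifier().fit(X, y)   # stand-in for the boosted learner
# clf.predict_proba(patch_features) then estimates p(boundary | neighborhood)
```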