Generalized Hough Transform
From Standard to Generalized HT • Standard Hough Transform requires parametric representation for desired curve • This idea is generalized in the Generalized Hough Transform
Example: Human Face Recognition • Is there some attribute of the structure of the head that we can exploit to help estimate pose? • Is this attribute invariant under change in pose? • Or • “Can we model how this attribute varies with pose?”
Hough Transform in General • Technique to isolate curves of a given shape in an image • Standard Hough Transform (HT) uses a parametric formulation of the curves • Generalized Hough Transform (GHT) extends the technique to arbitrary curves
Key Idea: improve correlation by voting • When we compute the correlation by voting, we spend most of the time casting bad votes. • The idea is to use extra shape information (e.g. gradients) to cast fewer votes: • O(n) complexity: for each of O(n) points on the boundary, cast O(1) votes.
General Hough Algorithm Idea • 1. Explicitly list points on the shape • 2. Make a table over all edge pixels of the target • 3. For each pixel, store its position relative to some reference point on the shape • ‘if I’m pixel i on the boundary, the reference point is at ref[i]’
The Generalized Hough Transform • Technique to find arbitrary curves in a given image • Parametric equation no longer required • Look-up table used as transform mechanism • Two phases: • R-Table Generation phase • Object Detection phase
The Generalized Hough Transform • Standard techniques allow for invariance to scale and rotation in the plane • In general, objects in the real world are 3-dimensional • Hence a single silhouette provides no invariance to pose (i.e. rotation out of the plane). • No pose estimation. • This is generalized by the Surface Normal Hough Transform
GHT: Building the R-Table • 1. We are given the shape we want to localize • 2. We build a lookup table for this shape, called the R-Table • It will replace the need for a parametric equation in the transform stage
Conclusions on GHT • Standard techniques allow for invariance to scale and rotation in the plane • In general, objects in the real world are 3-dimensional • Hence a single silhouette provides no invariance to pose (i.e. rotation out of the plane). • No pose estimation. • Now show more details
Hough Transform for Curves • The H.T. can be generalized to detect any curve that can be expressed in parametric form: • y = f(x, a1, a2, …, ap) • a1, a2, …, ap are the parameters • The parameter space is p-dimensional • The accumulating array is LARGE!
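As a concrete illustration of the parametric case, here is a minimal sketch (Python/NumPy; `hough_circles` and its parameters are illustrative names, not from the slides) of a brute-force Hough transform for circles, whose three parameters (a, b, r) give a 3-dimensional accumulator:

```python
import numpy as np

def hough_circles(edge_points, img_shape, radii, n_theta=90):
    """Brute-force parametric Hough transform for circles
    (x - a)^2 + (y - b)^2 = r^2: one accumulator axis per parameter (a, b, r)."""
    h, w = img_shape
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        for k, r in enumerate(radii):
            # Candidate centres (a, b) lie on a circle of radius r around (x, y).
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(acc, (b[ok], a[ok], k), 1)  # accumulates repeated cells correctly
    return acc
```

Even with only three parameters, the accumulator already has h × w × len(radii) cells, which is why the slide flags the array as LARGE.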
Generalized Hough Transform algorithm • Find all desired feature points in the image • For each feature point: • for each pixel i on the target boundary: • get the relative position of the reference point from i • add this offset to the position of i • increment that position in the accumulator • Find local maxima in the accumulator • Map the maxima back to the image to view
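A minimal sketch of this voting loop, assuming the offsets from each boundary pixel to the reference point have already been collected from the target shape; the function and argument names are illustrative:

```python
import numpy as np

def detect_fixed_pose(feature_points, offsets, img_shape):
    """Translation-only GHT voting.

    feature_points : list of (x, y) edge pixels found in the image
    offsets        : list of (dx, dy) = reference point minus boundary pixel,
                     one entry per pixel on the target boundary
    """
    h, w = img_shape
    acc = np.zeros((h, w), dtype=np.int32)
    for x, y in feature_points:
        for dx, dy in offsets:
            # Candidate reference point implied by this edge pixel and offset.
            xc, yc = int(round(x + dx)), int(round(y + dy))
            if 0 <= xc < w and 0 <= yc < h:
                acc[yc, xc] += 1                       # cast one vote
    peak = np.unravel_index(np.argmax(acc), acc.shape)  # (row, col) of best candidate
    return acc, peak
```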
Generalizing the H.T. The H.T. can be used even if the curve does not have a simple analytic form! • Pick a reference point (xc, yc) • For i = 1, …, n: • Draw a segment to Pi on the boundary. • Measure its length ri and its orientation αi. • Write the coordinates of (xc, yc) as a function of ri and αi: • xc = xi + ri·cos(αi) • yc = yi + ri·sin(αi) • Record the gradient orientation φi at Pi. • Build a table with the data, indexed by φi.
Generalizing the H.T. Suppose there were m different gradient orientations (m ≤ n). The H.T. table (R-table), indexed by the gradient orientation φ, lists the (r, α) pairs observed for each orientation:
φ1 → (r11, α11), (r12, α12), …, (r1n1, α1n1)
φ2 → (r21, α21), (r22, α22), …, (r2n2, α2n2)
…
φm → (rm1, αm1), (rm2, αm2), …, (rmnm, αmnm)
where each entry satisfies xc = xi + ri·cos(αi), yc = yi + ri·sin(αi).
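A minimal sketch of building such a table, assuming the boundary points Pi and their gradient orientations φi are already available; the quantization of φ into n_bins is an illustrative choice:

```python
import numpy as np
from collections import defaultdict

def build_r_table(boundary_points, gradient_angles, ref_point, n_bins=64):
    """For each boundary point P_i with gradient orientation phi_i, store
    (r_i, alpha_i): the polar coordinates of the reference point seen from P_i."""
    xc, yc = ref_point
    r_table = defaultdict(list)
    for (x, y), phi in zip(boundary_points, gradient_angles):
        dx, dy = xc - x, yc - y
        r = np.hypot(dx, dy)          # length of the segment P_i -> (xc, yc)
        alpha = np.arctan2(dy, dx)    # orientation of that segment
        key = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        r_table[key].append((r, alpha))
    return r_table
```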
Generalized H.T. Algorithm: Finds a rotated, scaled, and translated version of the curve: • Form an accumulator array A of possible reference points (xc, yc), scaling factors S, and rotation angles θ. • For each edge point (x, y) in the image: • Compute the gradient orientation φ(x, y) • For each (r, α) in the R-table entry for φ(x, y): • For each S and θ: • xc = x + r(φ)·S·cos[α(φ) + θ] • yc = y + r(φ)·S·sin[α(φ) + θ] • A(xc, yc, S, θ) = A(xc, yc, S, θ) + 1 • Find maxima of A.
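A hedged transcription of this algorithm, reusing the `r_table` sketch above; the candidate scales and rotation angles are illustrative assumptions, and A(xc, yc, S, θ) is stored as a 4-D array:

```python
import numpy as np

def ght_detect(edge_points, edge_angles, r_table, img_shape,
               scales=(0.8, 1.0, 1.2),
               thetas=np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False),
               n_bins=64):
    """4-D GHT voting over reference point, scale, and rotation."""
    h, w = img_shape
    acc = np.zeros((h, w, len(scales), len(thetas)), dtype=np.int32)
    for (x, y), phi in zip(edge_points, edge_angles):
        key = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for r, alpha in r_table.get(key, ()):
            for si, S in enumerate(scales):
                for ti, theta in enumerate(thetas):
                    # Reference point predicted at this scale and rotation.
                    xc = int(round(x + S * r * np.cos(alpha + theta)))
                    yc = int(round(y + S * r * np.sin(alpha + theta)))
                    if 0 <= xc < w and 0 <= yc < h:
                        acc[yc, xc, si, ti] += 1
    best = np.unravel_index(np.argmax(acc), acc.shape)  # (row, col, scale idx, rotation idx)
    return acc, best
```

The four nested loops make the cost per edge point O(|entries| × |scales| × |thetas|); the gradient-indexed lookup is exactly the trick, noted earlier, that keeps each edge point from casting votes for the entire table.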
Another variant of the Generalized Hough Transform • Find the object center (xc, yc) given edge points (xi, yi) with gradient orientations φi • Create an accumulator array A(xc, yc) • Initialize: A(xc, yc) = 0 for all (xc, yc) • For each edge point (xi, yi, φi): • For each entry (r, α) in the R-table row for φi, compute: xc = xi + r·cos(α), yc = yi + r·sin(α) • Increment the accumulator: A(xc, yc) = A(xc, yc) + 1 • Find local maxima in A(xc, yc)
Properties of Generalized Hough Transform • What can we do when the curve we want to detect is not easily described parametrically? ~ By this, we mean it cannot be captured in a relatively small number of parameters. ~ Recall, the dimensionality of the Hough space equals the number of parameters! • The GHT constructs a parametric description of an arbitrary shape based on a learning process. • This parametric description is not, in general, compact. • We will begin by assuming the size, shape, and rotation (orientation) of the region are known a priori. (Or that we want only to detect instances of a given size and orientation.) ~ The voting space is (equivalent to) image space, 2D, in the case of known size and rotation. ~ We will see how to deal with unknown orientation and size shortly -- with a 4D Hough space.
(xc, yc): An arbitrary reference point inside the shape. rj: The length of the j-th line from the reference point to the shape perimeter, intersecting it at a point of tangent angle φ. φ: The angle of the (current) tangent(s) to the perimeter. αj: The orientation of the j-th line segment. The list of (rj, αj) pairs, for a given tangent angle φ and reference point, constitutes a partial characterization of the shape.
• By sweeping the tangent angle φ over the range (0, 2π) in some reasonable quantization (!), we build what is called the R-table (reference table) description of the shape. • Each pixel x (say, a detected edge point) with local orientation φ provides evidence for (votes for) reference points at the set of locations indicated by the list in the R-table for that tangent direction... • A vote is cast for each (r, α) pair in the list for that φ value. The voting space is isomorphic to image space. • Again, this assumes known size and orientation for all appearances of the shape. • After all the edge points have voted for all of their possible reference points, we interrogate the voting space for significant local maxima. These suggest possible detections of the shape of interest.
• If we have not prenormalized for size (S) and rotation (θ), then our voting space is four dimensional, and the reference location receiving the vote(s) for a given edge point and R-table entry is: xc = x + S·r·cos(α + θ), yc = y + S·r·sin(α + θ). • Now, we interrogate the 4D accumulator array to recover likely locations, scale, and orientation for appearances of the shape. • This is really a fancy form of a template match -- but one that is far more robust than a straightforward template matching algorithm. • Selecting among multiple possible shapes requires multiple R-tables, multiple voting spaces. • But, so does looking for lines and circles in the same image....