Lucas-Kanade Image Alignment Slides from Iain Matthews
Applications of Image Alignment • Ubiquitous computer vision technique • Tracking • Registration of MRI/CT/PET
Generative Model for an Image • Parameterized model: shape and appearance parameters generate an image
Fitting a Model to an Image • What are the best model parameters to match an image? • Nonlinear optimization problem
Active Appearance Model • Shape: landmarks defining a region of interest • Appearance: image warped to the reference frame • Cootes, Edwards, Taylor, 1998
Image Alignment • Template, T(x) • Image, I(x) • Warps, W(x;p) and W(x;p+Δp) • Image coordinates x = (x, y)^T • Warp parameters p = (p1, p2, …, pn)^T
Want to: Minimize the Error • Warp the image to compute I(W(x;p)), then compare with the template: error image T(x) - I(W(x;p))
How to: Minimize the Error • Minimise the SSD with respect to p: min_p Σ_x [I(W(x;p)) - T(x)]² • Solution: solve for increments Δp to the current estimate, min_Δp Σ_x [I(W(x;p+Δp)) - T(x)]² • Generally a nonlinear optimisation problem … how can we solve this?
Linearize • Taylor series expansion, linearize function f about x0: f(x0 + Δx) ≈ f(x0) + f'(x0) Δx • For image alignment: I(W(x;p+Δp)) ≈ I(W(x;p)) + ∇I (∂W/∂p) Δp
Gradient Descent Solution • Least squares problem, solve for Δp: Δp = H⁻¹ Σ_x [∇I ∂W/∂p]^T [T(x) - I(W(x;p))] • where ∇I is the image gradient, ∂W/∂p the Jacobian, T(x) - I(W(x;p)) the error image, and H the (Gauss-Newton) Hessian: H = Σ_x [∇I ∂W/∂p]^T [∇I ∂W/∂p]
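The least-squares step above can be sanity-checked numerically: solving the normal equations H Δp = Σ_x SD^T e gives the same answer as a generic least-squares solver. A minimal NumPy sketch, with a synthetic steepest-descent matrix SD (one row per pixel) standing in for ∇I ∂W/∂p; the shapes and random data are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
SD = rng.standard_normal((1000, 6))  # steepest-descent images, one row per pixel
e = rng.standard_normal(1000)        # flattened error image T(x) - I(W(x;p))

H = SD.T @ SD                        # Gauss-Newton Hessian
dp = np.linalg.solve(H, SD.T @ e)    # Δp = H⁻¹ Σ SD^T e

# Same step via a generic least-squares solve of SD @ dp ≈ e
dp_lstsq, *_ = np.linalg.lstsq(SD, e, rcond=None)
```

The explicit Hessian form is what the algorithm pre-computes or re-uses; the lstsq call is only a cross-check.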
Gradient Images • Compute the image gradients Ix, Iy of I • Warp them with W(x;p) so the gradient ∇I is evaluated at I(W(x;p))
Jacobian • Compute the Jacobian ∂W/∂p • Mesh parameterization: the warp W(x;p) maps the template mesh (vertices 1, 2, 3, 4) into the image; the warp parameters are the vertex offsets, p = (dx1, dy1, …, dxn, dyn)^T • Template, T(x); Image, I(x); image coordinates x = (x, y)^T
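The slides use a mesh parameterization; as a simpler concrete case (an illustrative assumption, not the slides' mesh warp), here is the 2 x 6 Jacobian of a standard affine warp W(x;p) = ((1+p1)x + p3·y + p5, p2·x + (1+p4)·y + p6):

```python
import numpy as np

def affine_jacobian(x, y):
    """Jacobian dW/dp of the affine warp
       W(x;p) = [(1+p1)*x + p3*y + p5,
                 p2*x + (1+p4)*y + p6]
    evaluated at pixel (x, y): a 2 x 6 matrix."""
    return np.array([[x, 0, y, 0, 1, 0],
                     [0, x, 0, y, 0, 1]], dtype=float)
```

To first order, W(x; 0+Δp) ≈ x + (∂W/∂p) Δp; for an affine warp this relation is in fact exact, since W is linear in p.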
Lucas-Kanade Algorithm • Warp I with W(x;p) to compute I(W(x;p)) • Compute error image T(x) - I(W(x;p)) • Warp gradient of I to compute ∇I • Evaluate Jacobian ∂W/∂p • Compute Hessian H • Compute Δp • Update parameters p ← p + Δp
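The steps above can be sketched end-to-end for the simplest warp, a pure translation W(x;p) = x + p, so that ∂W/∂p is the identity and the steepest-descent images are just (Ix, Iy). A minimal NumPy sketch under those assumptions, using nearest-pixel (integer) shifts to avoid interpolation code:

```python
import numpy as np

def lucas_kanade_translation(I, T, p, n_iters=50, eps=1e-4):
    """Additive Lucas-Kanade loop for a pure-translation warp
    W(x;p) = x + p, p = (tx, ty). I and T are 2-D grayscale arrays
    of equal shape; warping is done with integer (rounded) shifts."""
    p = np.asarray(p, dtype=float)
    for _ in range(n_iters):
        tx, ty = int(round(p[0])), int(round(p[1]))
        # Warp I with W(x;p): Iw[y, x] = I[y + ty, x + tx]
        Iw = np.roll(np.roll(I, -ty, axis=0), -tx, axis=1)
        # Error image T(x) - I(W(x;p))
        err = T - Iw
        # Gradients of the warped image; dW/dp = identity here,
        # so these are directly the steepest-descent images.
        Iy, Ix = np.gradient(Iw)
        sd = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2
        H = sd.T @ sd                                    # 2 x 2 Hessian
        dp = np.linalg.solve(H, sd.T @ err.ravel())      # Gauss-Newton step
        p = p + dp                                       # additive update
        if np.linalg.norm(dp) < eps:
            break
    return p
```

With a smooth template and a shift well inside the gradient's basin of attraction, a few iterations recover the translation to sub-pixel accuracy of the rounded warp.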
Fast Gradient Descent? • To reduce Hessian computation: • Make Jacobian simple (or constant) • Avoid computing gradients on I
Shum-Szeliski Image Alignment • Additive Image Alignment – Lucas, Kanade: update the warp as W(x; p+Δp), where W(x; 0+Δp) = W(x; Δp) is the increment from the identity • Compositional Alignment – Shum, Szeliski: update the warp as W(x;p) ∘ W(x;Δp), composing the current warp with an incremental warp applied to I(x)
Compositional Image Alignment • Minimise with respect to Δp: Σ_x [I(W(W(x;Δp); p)) - T(x)]² • Jacobian ∂W/∂p is constant: evaluated at (x, 0), so it is "simple".
Compositional Algorithm • Warp I with W(x;p) to compute I(W(x;p)) • Compute error image T(x) - I(W(x;p)) • Warp gradient of I to compute ∇I • Evaluate Jacobian ∂W/∂p at (x, 0) (constant) • Compute Hessian H • Compute Δp • Update W(x;p) ← W(x;p) ∘ W(x;Δp)
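The compositional update composes warps rather than adding parameter vectors. For an affine warp with the parameterization W(x;p) = ((1+p1)x + p3·y + p5, p2·x + (1+p4)·y + p6) (an illustrative assumption), the composition W(x;p) ∘ W(x;Δp) is just a product of 3 x 3 homogeneous matrices:

```python
import numpy as np

def compose_affine(p, dp):
    """Compositional update W(x;p) <- W(x;p) o W(x;dp) for an affine
    warp, computed as a product of homogeneous 3x3 matrices
    (apply W(.;dp) first, then W(.;p))."""
    def to_mat(q):
        return np.array([[1 + q[0], q[2], q[4]],
                         [q[1], 1 + q[3], q[5]],
                         [0.0, 0.0, 1.0]])
    M = to_mat(p) @ to_mat(dp)
    # Read the six parameters back out of the composed matrix.
    return np.array([M[0, 0] - 1, M[1, 0], M[0, 1],
                     M[1, 1] - 1, M[0, 2], M[1, 2]])
```

Note the order: W(x;p) ∘ W(x;Δp) means the incremental warp is applied to the image coordinates first, matching the criterion I(W(W(x;Δp); p)).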
Inverse Compositional • Why compute updates on I? • Can we reverse the roles of the images? • Yes! [Baker, Matthews CMU-RI-TR-01-03] Proof that algorithms take the same steps (to first order)
Inverse Compositional • Forwards compositional: incremental warp computed on I(W(x;p)), update W(x;p) ← W(x;p) ∘ W(x;Δp) • Inverse compositional: roles of I and T reversed, incremental warp computed on the template T(x), update W(x;p) ← W(x;p) ∘ W(x;Δp)⁻¹
Inverse Compositional • Minimise: Σ_x [T(W(x;Δp)) - I(W(x;p))]² • Solution: Δp = H⁻¹ Σ_x [∇T ∂W/∂p]^T [I(W(x;p)) - T(x)] • Update: W(x;p) ← W(x;p) ∘ W(x;Δp)⁻¹
Inverse Compositional • Jacobian is constant: evaluated at (x, 0) • Gradient of template is constant • Hessian is constant • Can pre-compute everything but the error image!
Inverse Compositional Algorithm • Warp I with W(x;p) to compute I(W(x;p)) • Compute error image T(x) - I(W(x;p)) • Pre-computed: gradient ∇T of the template • Pre-computed: Jacobian ∂W/∂p at (x, 0) • Pre-computed: Hessian H • Compute Δp • Update W(x;p) ← W(x;p) ∘ W(x;Δp)⁻¹
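The pre-computation is the whole point of the inverse compositional algorithm: the template gradient, steepest-descent images, and Hessian never change, so each iteration only forms the error image and applies a cached H⁻¹. A minimal sketch for the same pure-translation warp as before (an illustrative assumption; for a translation, composing with W(x;Δp)⁻¹ reduces to p ← p - Δp):

```python
import numpy as np

def precompute_ic(T):
    """Pre-compute the constant quantities: template gradient,
    steepest-descent images (dW/dp = identity for translation),
    and inverse Hessian."""
    Ty, Tx = np.gradient(T)
    sd = np.stack([Tx.ravel(), Ty.ravel()], axis=1)  # N x 2
    H_inv = np.linalg.inv(sd.T @ sd)
    return sd, H_inv

def ic_translation(I, T, p, sd, H_inv, n_iters=50, eps=1e-4):
    """Inverse compositional loop: per iteration, only the error
    image I(W(x;p)) - T(x) is recomputed."""
    p = np.asarray(p, dtype=float)
    for _ in range(n_iters):
        tx, ty = int(round(p[0])), int(round(p[1]))
        Iw = np.roll(np.roll(I, -ty, axis=0), -tx, axis=1)  # I(W(x;p))
        err = (Iw - T).ravel()              # I(W(x;p)) - T(x)
        dp = H_inv @ (sd.T @ err)
        p = p - dp   # W(x;p) o W(x;dp)^{-1}: subtraction for translation
        if np.linalg.norm(dp) < eps:
            break
    return p
```

Compared with the additive loop, note that no gradients or Hessian appear inside the iteration at all.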
Framework • Baker and Matthews 2003: formulated the unifying framework, proved equivalence of the algorithms
Lucas-Kanade Algorithm • Criterion: min_p Σ_x [I(W(x;p)) - T(x)]²