
Hyper Least Squares and its applications

Prasanna Rangarajan (prangara@smu.edu), Dr. Kenichi Kanatani (kanatani@suri.cs.okayama-u.ac.jp), Dr. Yasuyuki Sugaya (sugaya@ics.tut.ac.jp), Dr. Hirotaka Niitsuma (niitsuma@suri.cs.okayama-u.ac.jp)


Presentation Transcript


  1. Prasanna Rangarajan (prangara@smu.edu), Dr. Kenichi Kanatani (kanatani@suri.cs.okayama-u.ac.jp), Dr. Yasuyuki Sugaya (sugaya@ics.tut.ac.jp), Dr. Hirotaka Niitsuma (niitsuma@suri.cs.okayama-u.ac.jp). Hyper Least Squares and its applications • Technical Summary • NEW Least Squares estimator that maximizes the accuracy of the estimate by removing statistical bias up to second-order noise terms • perfect candidate for initializing a Maximum Likelihood estimator • provides estimates in large-noise situations, where the ML computation fails

  2. Least Squares Parameter Estimation: “Big Picture” • a standard task in science and engineering • the vast majority of such tasks can be formulated as the solution of a set of implicit functions (ξ(x), θ) = 0 that are linear in the unknown parameter θ, where x is the measurement and ξ(x) is the carrier (see the sketch below) • Example-1: conic fitting • Example-2: estimating a homography (or perspective warp), useful for creating panoramas
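To make the carrier idea concrete, here is a minimal Python sketch for the conic case; the function name conic_carrier and the monomial ordering are my own choices, not taken from the slides.

```python
import numpy as np

def conic_carrier(x, y):
    """Lift a 2-D measurement (x, y) to the 6-D carrier of a conic.

    The conic A x^2 + B xy + C y^2 + D x + E y + F = 0 becomes the
    inner product (xi(x, y), theta) = 0, which is linear in
    theta = (A, B, C, D, E, F)."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])
```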

  3. Least Squares Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the algebraic distance J = Σ_α (ξ(x_α), θ)² (how well does the parametric surface fit the data?) subject to ‖θ‖² = 1 (avoids the trivial solution θ = 0) • Example: Ellipse fitting (Standard Least Squares); a sketch follows below
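As a hedged illustration of this slide's objective, the sketch below reuses conic_carrier from above; the minimizer of Σ_α (ξ_α, θ)² over unit vectors is the eigenvector of the moment matrix M for its smallest eigenvalue.

```python
import numpy as np

def standard_ls_fit(points):
    """Standard LS conic fit: minimize sum_a (xi_a, theta)^2 subject to
    ||theta|| = 1. Reuses conic_carrier from the earlier sketch."""
    Xi = np.array([conic_carrier(x, y) for x, y in points])
    M = Xi.T @ Xi / len(points)   # 6x6 moment matrix
    w, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    return V[:, 0]                # eigenvector of the smallest eigenvalue
```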

  4. Least Squares Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the algebraic distance J = Σ_α (ξ(x_α), θ)² (how well does the parametric surface fit the data?) subject to a normalization (θ, N θ) = 1 (avoids the trivial solution) • Example: Ellipse fitting (Bookstein, CGIP 1979), with the rotation-invariant normalization A² + B²/2 + C² = 1

  5. Least Squares Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the algebraic distance J = Σ_α (ξ(x_α), θ)² (how well does the parametric surface fit the data?) subject to a normalization (θ, N θ) = 1 (avoids the trivial solution) • Example: Ellipse fitting (Fitzgibbon et al., TPAMI 1999), with the normalization 4AC − B² = 1, which forces the fitted conic to be an ellipse

  6. Least Squares Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the algebraic distance J = Σ_α (ξ(x_α), θ)² (how well does the parametric surface fit the data?) subject to a normalization (θ, N θ) = 1 (avoids the trivial solution) • Example: Ellipse fitting (Taubin, TPAMI 1991), with a normalization matrix N_T built from the carrier gradients; a sketch follows below
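A sketch of Taubin's normalization under the assumption of identical isotropic noise in x and y: N_T is built from the carrier gradients, and since N_T is singular for the conic carrier (the constant component has zero gradient), the eigenproblem is solved with the roles of M and N_T flipped, the same trick slide 14 relies on. This is my reconstruction, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh   # generalized symmetric eigensolver

def taubin_fit(points):
    """Taubin fit: minimize (theta, M theta) / (theta, N_T theta).
    Reuses conic_carrier from the earlier sketch."""
    n = len(points)
    M = np.zeros((6, 6))
    N_T = np.zeros((6, 6))
    for x, y in points:
        xi = conic_carrier(x, y)
        dxi_dx = np.array([2 * x, y, 0.0, 1.0, 0.0, 0.0])
        dxi_dy = np.array([0.0, x, 2 * y, 0.0, 1.0, 0.0])
        M += np.outer(xi, xi) / n
        N_T += (np.outer(dxi_dx, dxi_dx) + np.outer(dxi_dy, dxi_dy)) / n
    # N_T is singular, but M is positive definite for noisy data, so solve
    # N_T theta = mu M theta and take the largest mu (mu = 1 / lambda).
    mu, V = eigh(N_T, M)
    theta = V[:, -1]
    return theta / np.linalg.norm(theta)
```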

  7. Least Squares Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the algebraic distance (θ, M θ) (how well does the parametric surface fit the data?) subject to the parameter normalization (θ, N θ) = 1 (avoids the trivial solution) • Advantages • θ is obtained as a solution to the Generalized Eigenvalue Problem M θ = λ N θ • Disadvantages • produces biased estimates that heavily depend on the choice of N • the algebraic distance is neither geometrically nor statistically meaningful • The alternative to LS Parameter Estimation is “Maximum Likelihood Parameter Estimation”

  8. Maximum Likelihood Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the Mahalanobis distance J = Σ_α (x_α − x̄_α, V[x_α]⁻¹ (x_α − x̄_α)) to the noise-free measurements x̄_α, subject to (ξ(x̄_α), θ) = 0 • Example: Ellipse fitting (TODO: insert picture of orthogonal ellipse fitting)
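In practice this Mahalanobis-distance objective is often evaluated through its first-order (Sampson) approximation; the sketch below shows that surrogate for conics under isotropic noise. It is my illustration of a standard approximation, not the slides' exact formulation.

```python
import numpy as np

def sampson_cost(points, theta):
    """First-order (Sampson) approximation of the ML objective for a conic,
    assuming zero-mean isotropic Gaussian noise in x and y.
    Reuses conic_carrier from the earlier sketch."""
    J = 0.0
    for x, y in points:
        xi = conic_carrier(x, y)
        dxi_dx = np.array([2 * x, y, 0.0, 1.0, 0.0, 0.0])
        dxi_dy = np.array([0.0, x, 2 * y, 0.0, 1.0, 0.0])
        V0 = np.outer(dxi_dx, dxi_dx) + np.outer(dxi_dy, dxi_dy)
        J += (xi @ theta) ** 2 / (theta @ V0 @ theta)
    return J
```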

  9. Maximum Likelihood Parameter Estimation: Mathematical formulation • Given a set of noisy measurements x_α, find the parameter θ that minimizes the Mahalanobis distance • Advantages • geometrically / statistically meaningful distance measure • highly accurate estimates that nearly achieve the lower bound on MSE • Disadvantages • iterative & converges to a local minimum… requires a good initial estimate • FNS (Chojnacki et al., TPAMI 2000; Chojnacki et al., 2008) • HEIV (Leedan & Meer, IJCV 2000; Matei & Meer, TPAMI 2006) • Projective Newton Iterations (Kanatani & Sugaya, CSDA 2007) • An accurate LS estimate can only aid the ML estimator. For large noise levels, LS estimators may be the only available choice.

  10. Proposed Work: A different take on LS parameter estimation • The LS estimate minimizes the ratio (θ, M θ) / (θ, N θ): the matrix M (how well does the parametric surface fit the data?) is common to all LS estimators, while the normalization matrix N (how good is the estimate?) is unique to each LS estimator • Contributions of proposed work • Statistical basis for LS parameter estimation • 1. How does the choice of the matrix N affect the accuracy of the LS estimate? • 2. What choice of matrix N yields an LS estimate with the best accuracy?

  11. Contributions of Proposed Work: Statistically motivated LS Parameter Estimation • 1. How does the choice of the matrix N affect the accuracy of the LS estimate? Perturbation Analysis of the GEVP • noisy measurements are perturbations of the true measurements • propagate perturbations in the measurements to the carrier ξ • propagate perturbations in the carrier to the estimate θ (a sketch of the expansion follows below)
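A minimal sketch of the expansion this slide describes, in notation I am reconstructing (bars denote noise-free quantities; Δ₁ and Δ₂ collect first- and second-order noise terms):

```latex
x_\alpha = \bar{x}_\alpha + \Delta x_\alpha, \qquad
\xi(x_\alpha) = \bar{\xi}_\alpha + \Delta_1 \xi_\alpha + \Delta_2 \xi_\alpha, \qquad
M = \bar{M} + \Delta_1 M + \Delta_2 M
```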

  12. Contributions of Proposed Work: Statistically motivated LS Parameter Estimation • 1. How does the choice of the matrix N affect the accuracy of the LS estimate? Perturbation Analysis of the GEVP • expression for the mean squared error of the estimator: the leading covariance term does not depend on the matrix N, while the second-order bias term does depend on N (see the sketch below) • Any attempt to improve the accuracy of an LS estimator must therefore reduce the bias of the LS estimator
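In the same reconstructed notation, the resulting error expansion of the estimate is sketched below; the slides' point is that the covariance of the first-order term is the same for every normalization N, so only the second-order bias can be improved.

```latex
% reconstructed sketch, not copied from the slides
\hat{\theta} = \bar{\theta} + \Delta_1\theta + \Delta_2\theta + \cdots,
\qquad E[\Delta_1\theta] = 0
% covariance  E[\Delta_1\theta \, \Delta_1\theta^\top] : independent of N
% 2nd-order bias  E[\Delta_2\theta] : depends on N
```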

  13. Contributions of Proposed Work: Statistically motivated LS Parameter Estimation • 2. What choice of matrix N yields an LS estimate with the best accuracy? • choose N to be the symmetric matrix that cancels the second-order bias (by contrast, the normalization matrix of the Taubin estimator removes only part of the second-order bias)

  14. hyper Least Squares estimator • Optimization problem solved by the hyper Least Squares estimator: minimize (θ, M θ) subject to (θ, N θ) = 1, with N chosen to cancel the second-order bias • Numerical computation of the hyper LS estimate • the matrix N is symmetric but not necessarily positive definite • the matrix M is symmetric & positive definite for noisy measurements • use standard linear algebra routines to solve the GEVP N θ = μ M θ for the eigenvector corresponding to the largest eigenvalue μ (a sketch follows below)
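A minimal sketch of the numerical recipe on this slide, assuming M and N have already been computed (the hyper normalization matrix itself is not reproduced here): because N may be indefinite while M is positive definite for noisy data, the GEVP is flipped so that a standard symmetric solver applies.

```python
import numpy as np
from scipy.linalg import eigh

def solve_ls_gevp(M, N):
    """Solve M theta = lambda N theta for the smallest lambda by solving
    the flipped problem N theta = mu M theta (mu = 1 / lambda) and taking
    the eigenvector of the largest mu. M must be positive definite."""
    mu, V = eigh(N, M)            # generalized symmetric eigenproblem
    theta = V[:, -1]              # eigenvector of the largest eigenvalue
    return theta / np.linalg.norm(theta)
```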

  15. How well does the hyper LS estimator work? Ellipse Fitting (single implicit equation) • Zero-mean Gaussian noise with standard deviation σ is added to 31 points on the ellipse • 10000 independent trials • Standard LS estimator • Taubin estimator • hyper LS estimator • ML estimator • (a usage sketch follows below)
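A usage sketch of the experimental setup just described; the ellipse axes, arc, and noise level are placeholders I chose, since the slides do not specify them, and the fitting functions come from the earlier sketches.

```python
import numpy as np

rng = np.random.default_rng(0)

# 31 points on an ellipse, as on the slide; axes and arc are placeholders.
t = np.linspace(0.0, np.pi, 31)
pts_true = np.column_stack([3.0 * np.cos(t), 2.0 * np.sin(t)])

sigma = 0.05                                  # placeholder noise level
pts = pts_true + rng.normal(0.0, sigma, pts_true.shape)

print("standard LS:", standard_ls_fit(pts))   # from the earlier sketch
print("Taubin     :", taubin_fit(pts))        # from the earlier sketch
```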

  16. How well does the hyper LS estimator work? Ellipse Fitting • ML has the highest accuracy but fails to converge for large noise • The bias of the hyper LS estimator is smaller than that of the ML estimator • [Plots: RMS error and bias vs. standard deviation of added noise, comparing Standard LS, Taubin, hyper LS, the ML estimator, and the KCR lower bound]

  17. How well does the hyper LS estimator work? Homography estimation • Zero-mean Gaussian noise with standard deviation σ is added to a grid of points on a plane • 10000 independent trials • Standard LS estimator • Taubin estimator • hyper LS estimator • ML estimator • [Plots: View-1 and View-2 of the simulated grid; RMS error vs. standard deviation of added noise] • (a carrier sketch for homographies follows below)
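For the homography experiment, each correspondence x ↔ x′ gives the DLT-style constraint x′ × (H x) = 0, three rows linear in vec(H) (only two of which are independent). The sketch below is the standard construction with hypothetical helper names, not code from the authors.

```python
import numpy as np

def homography_carriers(p, p_prime):
    """Three 9-D carrier rows for one correspondence p -> p_prime, from
    p' x (H p) = 0 with theta = vec(H) in row-major order."""
    u = np.array([p[0], p[1], 1.0])
    xp, yp, wp = p_prime[0], p_prime[1], 1.0
    z = np.zeros(3)
    return np.array([
        np.concatenate([z,       -wp * u,  yp * u]),
        np.concatenate([wp * u,   z,      -xp * u]),
        np.concatenate([-yp * u,  xp * u,  z     ]),
    ])
```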

  18. How well does the hyper LS estimator work? Fundamental matrix estimation • Zero-mean Gaussian noise with standard deviation σ is added to a curved grid of points • 10000 independent trials • Standard LS estimator • Taubin estimator • hyper LS estimator • ML estimator • [Plots: View-1 and View-2 of the simulated grid; RMS error vs. standard deviation of added noise] • (a carrier sketch for the fundamental matrix follows below)
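Similarly, for the fundamental matrix each correspondence gives the epipolar constraint u′ᵀ F u = 0, whose carrier is the flattened outer product of the homogeneous points; again a standard construction with hypothetical names.

```python
import numpy as np

def fundamental_carrier(p, p_prime):
    """9-D carrier for the epipolar constraint u'^T F u = 0, with
    theta = vec(F) in row-major order."""
    u = np.array([p[0], p[1], 1.0])
    u_prime = np.array([p_prime[0], p_prime[1], 1.0])
    return np.outer(u_prime, u).ravel()
```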

  19. Closing Thoughts • The proposed hyper Least Squares estimator is • statistically motivated • designed to have the highest accuracy among existing LS estimators • able to provide a parameter estimate at large noise levels, when the ML estimator fails • The proposed hyper Least Squares estimator is the best candidate for initializing ML iterations • Open Problems • Is there a matrix N that directly minimizes the MSE, instead of just the bias? (The covariance term of the MSE does NOT depend on the matrix N; only the bias term depends on N.)
