776 Computer Vision Jan-Michael Frahm, Enrique Dunn Spring 2013
Dolly Zoom www.cs.unc.edu/~jmf/teaching/spring2014/2Assignment776-Spring2014.pdf
Radial Distortion • Brown’s distortion model • accounts for radial distortion • accounts for tangential distortion (distortion caused by lens placement errors) • typically only K1 is used, or K1, K2, K3, P1, P2 • Notation: (xu, yu) undistorted image point, as in an ideal pinhole camera; (xd, yd) distorted image point of a camera with radial distortion; (xc, yc) distortion center; Kn: n-th radial distortion coefficient; Pn: n-th tangential distortion coefficient
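The model equation itself was an image on the original slide. In the standard form found in the literature (using the notation above, with r the distance of the distorted point from the distortion center), Brown's model is:

\begin{aligned}
x_u &= x_d + (x_d - x_c)\,(K_1 r^2 + K_2 r^4 + \cdots) + \big(P_1(r^2 + 2(x_d - x_c)^2) + 2P_2(x_d - x_c)(y_d - y_c)\big)\,(1 + P_3 r^2 + \cdots) \\
y_u &= y_d + (y_d - y_c)\,(K_1 r^2 + K_2 r^4 + \cdots) + \big(2P_1(x_d - x_c)(y_d - y_c) + P_2(r^2 + 2(y_d - y_c)^2)\big)\,(1 + P_3 r^2 + \cdots)
\end{aligned}
\quad\text{where } r^2 = (x_d - x_c)^2 + (y_d - y_c)^2.

Dropping the tangential terms and keeping only K1 gives the one-parameter radial model mentioned in the last bullet.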
Last Class • There are two principled ways of finding correspondences: • Matching • Independent detection of features in each frame • Correspondence search over the detected features • Extends to large transformations between images • High error rate among the correspondences • Tracking • Detect features in one frame • Retrieve the same features in the next frame by searching for the equivalent feature (tracking) • Very precise correspondences • High percentage of correct correspondences • Only works for small changes between frames
RANSAC • Robust fitting can deal with a few outliers – what if we have very many? • Random sample consensus (RANSAC): Very general framework for model fitting in the presence of outliers • Outline • Choose a small subset of points uniformly at random • Fit a model to that subset • Find all remaining points that are “close” to the model and reject the rest as outliers • Do this many times and choose the best model M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol 24, pp 381-395, 1981.
RANSAC for line fitting example • A least-squares fit is pulled away from the true line by the outliers
RANSAC for line fitting example • Randomly select a minimal subset of points • Hypothesize a model • Compute the error function • Select the points consistent with the model • Repeat the hypothesize-and-verify loop • The best model is produced by an uncontaminated (all-inlier) sample Source: R. Raguram
RANSAC for line fitting • Repeat N times: • Draw s points uniformly at random • Fit line to these s points • Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t) • If there are d or more inliers, accept the line and refit using all inliers
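A minimal sketch of this loop in Python/NumPy, assuming the points arrive as an (n, 2) array; the parameter names s, t, d, N follow the slide, and the line is fit by total least squares (an illustration, not the course's assignment code):

import numpy as np

def fit_line(pts):
    # Total least-squares line: unit normal n and offset c, with n . p = c
    # for points p on the line.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                                # direction of least variance = normal
    return n, float(n @ centroid)

def ransac_line(pts, s=2, t=1.0, d=10, N=100, seed=0):
    rng = np.random.default_rng(seed)
    best_fit, best_inliers = None, np.zeros(len(pts), dtype=bool)
    for _ in range(N):
        sample = pts[rng.choice(len(pts), size=s, replace=False)]
        n, c = fit_line(sample)               # hypothesize from a minimal sample
        inliers = np.abs(pts @ n - c) < t     # point-to-line distance test
        if inliers.sum() >= d and inliers.sum() > best_inliers.sum():
            best_fit = fit_line(pts[inliers]) # accept the line, refit on all inliers
            best_inliers = inliers
    return best_fit, best_inliers

With s = 2 the minimal sample is two points, matching the "minimal subset" in the slides above; keeping the largest consensus set implements "choose the best model".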
Choosing the parameters • Initial number of points s • Typically the minimum number needed to fit the model • Distance threshold t • Choose t so that the probability of an inlier falling within t is p (e.g. p = 0.95) • For zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ² • Number of samples N • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99; outlier ratio: e), as in the formula below • Consensus set size d • Should match the expected inlier ratio
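The formula for N was shown as an image on the slide; it follows from requiring that, with probability p, at least one of the N minimal samples is outlier-free, a single s-point sample being outlier-free with probability (1 - e)^s:

1 - \big(1 - (1 - e)^s\big)^N \;\ge\; p
\quad\Longrightarrow\quad
N \;\ge\; \frac{\log(1 - p)}{\log\big(1 - (1 - e)^s\big)}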
Adaptively determining the number of samples • The outlier ratio e is often unknown a priori, so pick the worst case, e.g. 10% inliers (e = 0.9), and adapt as more inliers are found, e.g. 80% inliers would yield e = 0.2 • Adaptive procedure (sketched below): • N = ∞, sample_count = 0 • While N > sample_count • Choose a sample and count the number of inliers • Set e = 1 - (number of inliers)/(total number of points) • Recompute N from e using the formula above • Increment sample_count by 1
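A sketch of this adaptive loop, where count_inliers is a hypothetical callback that runs one hypothesize-and-verify round and returns the inlier count:

import numpy as np

def adaptive_sample_count(data, count_inliers, s=2, p=0.99, max_iter=10000):
    N, sample_count, best = float('inf'), 0, 0
    while sample_count < min(N, max_iter):
        best = max(best, count_inliers(data, s))   # one hypothesize-and-verify round
        e = 1.0 - best / len(data)                 # current outlier-ratio estimate
        w = (1.0 - e) ** s                         # P(an s-sample is outlier-free)
        if w >= 1.0:
            break                                  # every point is an inlier: done
        if w > 0.0:
            N = np.log(1.0 - p) / np.log(1.0 - w)  # recompute N from e
        sample_count += 1
    return sample_count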
RANSAC pros and cons • Pros • Simple and general • Applicable to many different problems • Often works well in practice • Cons • Lots of parameters to tune • Doesn’t work well for low inlier ratios (too many iterations, or can fail completely) • Can’t always get a good initialization of the model based on the minimum number of samples
USAC: A Universal Framework for Random Sample Consensus. Raguram et al., PAMI 2013
Threshold-Free Robust Estimation • Problem: robust estimation, i.e. estimation of model parameters in the presence of noise and outliers • Common approach (e.g. homography): • Match points • Run RANSAC, threshold ≈ 1-2 pixels
Motivation • 3D similarity: • Match points • Run RANSAC, threshold = ?
Motivation (example fits for t = 0.001, 0.01, 0.5, 5.0) • Robust estimation algorithms often require data- and/or model-specific threshold settings • Performance degrades when parameter settings deviate from the "true" values (Torr and Zisserman 2000, Choi and Medioni 2009)
Contributions • Threshold-free robust estimation • No user-supplied inlier threshold • No assumption of inlier ratio • Simple, efficient algorithm
RECON • Instead of per-model or point-based residual analysis, we inspect pairs of models • Sort the points by their distance to each model and compare how "inlier" models and "outlier" models behave
RECON • Over random permutations of the data points, inlier models show "consistent" behaviour; outlier models do not • "Happy families are all alike; every unhappy family is unhappy in its own way" - Leo Tolstoy, Anna Karenina
RECON • Assuming that the measurement error is Gaussian, the inlier residuals follow a known distribution • If σ is known, we can compute a threshold t that captures some fraction α of all inliers, typically α = 0.95 or 0.99 • At the "true" threshold, inlier models capture a stable set of points, so a pair of inlier models has large overlap • How can we characterize this behaviour?
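Concretely (a standard result rather than something shown on the slide): if an inlier's point-to-model distance is Gaussian with standard deviation σ in each of d constrained dimensions, its squared distance follows a σ²χ²_d distribution, so the threshold capturing a fraction α of the inliers is

t^2 = \sigma^2 \, F^{-1}_{\chi^2_d}(\alpha),

and for d = 1, α = 0.95 this gives t² = 3.84σ², the value quoted on the parameter-selection slide above.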
RECON • What if σ is unknown? • Hypothesize values of the noise scale σ, giving a series of candidate thresholds t1, t2, ..., tT • For each candidate threshold t, compute θt, the fraction of points common to a pair of models at that threshold, and track how θt evolves as t grows
RECON • At the "true" value of the threshold: 1. A fraction ≈ α² of the inliers will be common to a pair of inlier models, since each inlier model independently captures a fraction α of the inliers 2. The inlier residuals correspond to the same underlying distribution
RECON • Outlier models behave differently: • Their overlap θt is low at the true threshold • Overlap can be high only for a large overestimate of the noise scale • Their residuals are unlikely to correspond to the same underlying distribution
RECON: α-consistency • A pair of models is α-consistent if, at some threshold t: 1. A fraction ≈ α² of their inliers is common to both models 2. Their inlier residuals correspond to the same underlying distribution
Residual Consensus: Algorithm (similar in structure to RANSAC; the goal is to find consistent pairs of models) 1. Hypothesis generation: generate model Mi from a random minimal sample 2. Model evaluation: compute and store the residuals of Mi to all data points 3. Test α-consistency: for a pair of models Mi and Mj • Check if the overlap is ≈ α² for some t (efficient implementation: use the residual ordering, a single pass over the sorted residuals) • Check if the residuals come from the same distribution (two-sample Kolmogorov-Smirnov test; reject the null hypothesis at level β using the condition below)
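The rejection condition appeared as an image; in the standard two-sample Kolmogorov-Smirnov form, with F_{i,n} and F_{j,m} the empirical CDFs of the two residual samples of sizes n and m, the null hypothesis (same distribution) is rejected at level β if

D_{n,m} = \sup_x \big| F_{i,n}(x) - F_{j,m}(x) \big| \;>\; c(\beta)\sqrt{\frac{n + m}{n\,m}},
\qquad c(\beta) = \sqrt{-\tfrac{1}{2}\ln\tfrac{\beta}{2}}.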
Residual Consensus: Algorithm (continued) • Repeat steps 1-3 until K pairwise-consistent models are found 4. Refine solution: recompute the final model and inliers from the consistent models (a sketch of the consistency test follows)
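A sketch of the pairwise α-consistency test in Python/SciPy, assuming each model's residuals to all points are given as a 1-D array; thresholds are hypothesized at each rank of Mi's sorted residuals, and alpha, beta follow the slides (an illustration of the test, not the authors' implementation):

import numpy as np
from scipy.stats import ks_2samp

def alpha_consistent(res_i, res_j, alpha=0.95, beta=0.05):
    # Single pass over Mi's sorted residuals: each value is a candidate threshold t.
    for t in np.sort(res_i)[1:]:
        in_i, in_j = res_i <= t, res_j <= t
        common = np.sum(in_i & in_j)
        larger = max(in_i.sum(), in_j.sum())
        if common < alpha**2 * larger:
            continue                              # overlap too small at this t
        # Check that the two inlier residual sets come from the same distribution.
        if ks_2samp(res_i[in_i], res_j[in_j]).pvalue > beta:
            return True, float(t)                 # failed to reject H0: consistent
    return False, None

Pairs that pass the test count as consistent; RECON keeps generating hypotheses until K mutually consistent models are found, then refits on their common inliers.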