Learning Conditional Random Fields for Stereo
Paper by Scharstein and Pal, CVPR 2007
The Problem
• Stereo matching: map each point in one view to its corresponding point in the other view
• This correspondence is what lets us perceive depth
• The amount of horizontal shift of a pixel from one view to the other is called its disparity
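The disparity idea above can be sketched with a minimal brute-force matcher. This is a toy illustration only, not the paper's method; the function name, cost choice, and toy data are all assumptions:

```python
import numpy as np

def disparity_map(left, right, max_disp=4):
    """For each pixel in the left image, pick the horizontal shift d
    (its disparity) that best matches a pixel in the right image,
    using plain absolute intensity difference as the matching cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = float("inf"), 0
            for d in range(min(max_disp, x) + 1):  # shift stays in bounds
                cost = abs(float(left[y, x]) - float(right[y, x - d]))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# toy scene: the right view sees everything shifted 2 pixels left
left = np.tile(np.arange(8), (3, 1)) * 10
right = np.roll(left, -2, axis=1)
print(disparity_map(left, right)[:, 2:])  # interior disparities are all 2
```

Real systems aggregate the cost over windows or scan lines and regularize the result, which is exactly where the MRF/CRF machinery of this paper comes in.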
Datasets Introduced
• 30 new datasets
• each with 7 views, sized 1300 x 1100 pixels
• disparity range of 150 pixels
• each view captured as 9 images under different lighting and exposure settings
• 2 of the views have ground-truth disparity maps

Data Used
• 6 of the datasets
• 2 views each (those with ground-truth disparity maps), with the same exposure and lighting
• images down-sampled to 460 x 370 pixels
• disparity range reduced to 80 pixels for tractability
Introduction
• Earlier methods used MRFs, local methods, and Gaussian Mixture Models (GMMs)
• Progress in stereo is due to:
• global optimization becoming practical
• intensity changes serving as a cue for disparity discontinuities
• This paper uses CRFs for stereo
• The approach applies gradient-based learning on top of graph-cut minimization
• It learns from ground-truth data
• It uses a pixel-based global smoothness cost instead of segmentation
CRFs for Stereo
The factor graph
• d_p
• disparity of pixel p in the reference (left) image
• a discrete RV with N states
• c_p
• matching cost at each disparity level for pixel p
• N continuous RVs
• the matching cost is the minimum distance between the left and right scan lines
• g_pq
• discretized color gradient between neighboring pixels p and q
• an M-state RV
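Under this factor graph, the energy of a disparity labeling combines the unary matching costs c_p(d_p) with pairwise smoothness terms whose strength depends on the gradient bin g_pq. A minimal sketch using a Potts pairwise term; the paper's exact parameterization differs, and `theta`, the array shapes, and the toy numbers here are all assumptions:

```python
import numpy as np

def crf_energy(disp, match_cost, grad_bins_h, grad_bins_v, theta):
    """Energy of a disparity labeling under a simplified pixel-wise CRF:
    sum of per-pixel matching costs c_p(d_p), plus a Potts penalty
    theta[g] for every neighboring pair with different labels, where
    g is that pair's discretized gradient bin."""
    h, w = disp.shape
    # unary term: matching cost of the chosen disparity at each pixel
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    unary = match_cost[rows, cols, disp].sum()
    # pairwise Potts terms over horizontal and vertical neighbor pairs
    diff_h = disp[:, :-1] != disp[:, 1:]
    diff_v = disp[:-1, :] != disp[1:, :]
    pairwise = theta[grad_bins_h][diff_h].sum() + theta[grad_bins_v][diff_v].sum()
    return unary + pairwise

# tiny 2x2 example with N=2 disparity levels and M=2 gradient bins
match_cost = np.array([[[1., 5.], [4., 2.]],
                       [[0., 3.], [1., 6.]]])
disp = np.array([[0, 1], [0, 0]])
theta = np.array([2.0, 0.5])        # lower penalty across strong gradients (bin 1),
                                    # matching the discontinuity cue
grad_bins_h = np.array([[1], [0]])  # one horizontal pair per row
grad_bins_v = np.array([[0, 0]])    # one vertical pair per column
print(crf_energy(disp, match_cost, grad_bins_h, grad_bins_v, theta))  # prints 6.5
```

Learning then amounts to fitting the smoothness parameters (here, `theta`) so that low-energy labelings agree with the ground-truth disparities.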
Learning
• Uses the fast alpha-expansion graph-cut method for energy minimization
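Alpha expansion cycles over labels; each "expansion move" lets every pixel either keep its current label or switch to the label alpha, and the real algorithm solves each move globally with a min-cut. The sketch below keeps that outer loop but replaces the min-cut inner step with a greedy per-pixel update, so it only illustrates the control flow; all names and the toy energy are assumptions:

```python
import numpy as np

def local_energy(labels, unary, y, x, lab, lam):
    """Unary cost of assigning `lab` at (y, x) plus Potts penalties
    against the current labels of the 4-connected neighbors."""
    h, w = labels.shape
    e = unary[y, x, lab]
    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
        if 0 <= ny < h and 0 <= nx < w:
            e += lam * (lab != labels[ny, nx])
    return e

def alpha_expansion_sketch(unary, n_labels, lam=1.0, sweeps=3):
    """Toy stand-in for alpha expansion: cycle over labels alpha and
    let each pixel switch to alpha whenever that lowers its local
    Potts energy. (The real move is solved jointly by a graph cut.)"""
    h, w, _ = unary.shape
    labels = unary.argmin(axis=2)  # start from the best unary label
    for _ in range(sweeps):
        for alpha in range(n_labels):
            for y in range(h):
                for x in range(w):
                    cur = labels[y, x]
                    if (local_energy(labels, unary, y, x, alpha, lam)
                            < local_energy(labels, unary, y, x, cur, lam)):
                        labels[y, x] = alpha
    return labels

# a noisy middle pixel gets smoothed toward its neighbors
unary = np.array([[[0.0, 1.0], [0.6, 0.4], [0.0, 1.0]]])
print(alpha_expansion_sketch(unary, n_labels=2))  # prints [[0 0 0]]
```

The min-cut version finds the best joint expansion move rather than this pixel-at-a-time approximation, which is what makes it fast and strong enough to sit inside a learning loop.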
Results
Experiment 1: Disparity Maps, K = 3
• Occluded areas are masked.
(Figure: six disparity-map panels; plots shown against iterations, 0-20.)
Conclusion
• Models with more parameters achieve lower disparity errors
• The model becomes more sensitive to brightness and contrast changes as the number of gradient bins increases
• More complex models generalized less well to other datasets than simpler ones