A Robust Super Resolution Method for Images of 3D Scenes Pablo L. Sala Department of Computer Science University of Toronto
Super Resolution • “The process of obtaining, from a set of overlapping images, a new image that, in the regions of overlap, has a higher resolution than each individual image.” • This work: • Introduces a method for robust super-resolution • Applies it to obtain higher-resolution images of a 3D scene from a set of calibrated low-resolution images, under the assumption that the scene can be approximated by planes lying in 3D space.
The Registration Problem The algorithm presented here starts from a discrete set of planes in 3D space; for each plane, it back-projects all the input images onto that plane and performs super-resolution (see the back-projection sketch below). Only the image pixels of 3D scene points that actually lie on the plane will be back-projected to the same points on the plane.
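To make the back-projection step concrete, here is a minimal sketch (not from the slides) of warping one calibrated input image onto a sampling grid of a 3D plane via the plane-induced homography. The function name, the plane parameterization (p0, e1, e2), and the use of SciPy's map_coordinates are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backproject_to_plane(image, P, p0, e1, e2, grid_u, grid_v):
    """Resample `image` (camera with 3x4 projection matrix P) onto a grid of
    (u, v) coordinates on the plane X = p0 + u*e1 + v*e2.

    p0 is a 3-vector on the plane; e1, e2 span the plane.  grid_u, grid_v are
    2-D arrays of plane coordinates (the target super-resolution grid).
    """
    # Homography from plane coordinates (u, v, 1) to homogeneous pixels:
    # x ~ P @ [p0 + u*e1 + v*e2; 1] = (P @ M) @ [u, v, 1]^T
    M = np.column_stack([np.append(e1, 0.0),
                         np.append(e2, 0.0),
                         np.append(p0, 1.0)])      # 4x3
    H = P @ M                                      # 3x3 plane-to-image homography

    uv1 = np.stack([grid_u.ravel(), grid_v.ravel(),
                    np.ones(grid_u.size)])         # 3 x N plane points
    x = H @ uv1
    px, py = x[0] / x[2], x[1] / x[2]              # projected pixel coordinates

    # Inverse warping: sample the input image at the projected positions.
    warped = map_coordinates(image, [py, px], order=1, cval=0.0)
    return warped.reshape(grid_u.shape)
```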
Problem Formulation • Assumptions: • The scene can be thought of as mostly lying on planes in 3D space. • All viewpoints are located in a region separable by a plane from the scene, so that a notion of scene depth holds. • The idea: • Take a set of N calibrated images of the scene. • Build a list of planes ordered by depth. • Partition the image set into disjoint subsets of roughly equal size. • Attempt super-resolution on each plane using the back-projections of the images of each subset onto that plane. • Compare the resulting higher-resolution images; the regions where they coincide are assumed to correspond to parts of the scene lying on the plane (see the similarity-check sketch below). • The portions of each image that correspond to these regions are not taken into account when applying super-resolution on subsequent planes.
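A hedged sketch of the similarity check implied above, assuming the per-plane reconstructions from the different subsets are compared pixel-wise and declared coincident where their spread falls below a threshold; the threshold value and the small median filter used to keep the region spatially coherent are my own illustrative choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def coincidence_region(reconstructions, threshold=10.0):
    """Return a boolean mask of plane pixels where all per-subset
    super-resolved reconstructions agree (illustrative criterion).

    reconstructions: list of 2-D arrays Y_1, ..., Y_R of equal shape,
    one per image subset, all defined on the same plane grid.
    """
    stack = np.stack(reconstructions)               # R x H x W
    spread = stack.max(axis=0) - stack.min(axis=0)  # pixel-wise disagreement
    mask = spread < threshold
    # Suppress isolated (dis)agreements so the region is spatially coherent.
    return median_filter(mask.astype(np.uint8), size=3).astype(bool)
```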
Standard Super-Resolution Super-resolution can be formulated as an optimization problem: E(X) = Σj ||Fj X − xj||², where X is the unknown higher-resolution image, the xj are the low-resolution images, and Fj = Dj Hj Wj are the image formation matrices, with Dj a decimation matrix, Hj a blurring matrix, and Wj a geometric warp matrix. Differentiating E with respect to X and setting the result to 0 gives the normal equations: (Σj Fjᵀ Fj) X = Σj Fjᵀ xj.
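A minimal sketch of this least-squares step, assuming the formation matrices Fj are available as sparse matrices or linear operators; it solves the normal equations iteratively with conjugate gradients instead of forming Σj FjᵀFj explicitly. The function name and solver choice are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def standard_super_resolution(F_list, x_list, n_hi, x0=None):
    """Solve min_X sum_j ||F_j X - x_j||^2 for the flattened high-resolution
    image X (length n_hi), given formation matrices F_j = D_j H_j W_j and
    flattened low-resolution images x_j."""
    def normal_matvec(X):
        # Applies sum_j F_j^T F_j X without forming the product explicitly.
        return sum(F.T @ (F @ X) for F in F_list)

    A = LinearOperator((n_hi, n_hi), matvec=normal_matvec)
    b = sum(F.T @ x for F, x in zip(F_list, x_list))   # sum_j F_j^T x_j
    X, info = cg(A, b, x0=x0, maxiter=200)
    return X
```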
Robust Super-Resolution Objective function: E(X) = Σj Σi ρ(ej,i). Error: ej = Fj X − xj. Cauchy's robust estimator: ρ(e) = (c²/2) log(1 + (e/c)²).
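A small sketch of Cauchy's estimator and the corresponding re-weighting function w(e) = ρ′(e)/e, assuming the standard form of the estimator; the scale parameter c is illustrative.

```python
import numpy as np

def cauchy_rho(e, c=1.0):
    """Cauchy (Lorentzian) robust estimator: rho(e) = (c^2/2) * log(1 + (e/c)^2)."""
    return 0.5 * c**2 * np.log1p((e / c)**2)

def cauchy_weight(e, c=1.0):
    """IRLS weight w(e) = rho'(e) / e = 1 / (1 + (e/c)^2).
    Large residuals (e.g. pixels of off-plane scene points) get small weights."""
    return 1.0 / (1.0 + (e / c)**2)
```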
Robust Super-Resolution Differentiating E with respect to X and setting it to 0: Σj Fjᵀ Wj (Fj X − xj) = 0, where Wj is a diagonal weight matrix such that Wj(i,i) = ρ′(ej,i) / ej,i, evaluated at the current residuals. Iteratively re-weighted least squares: X⁽ⁿ⁺¹⁾ = (Σj Fjᵀ Wj⁽ⁿ⁾ Fj)⁻¹ Σj Fjᵀ Wj⁽ⁿ⁾ xj. Initial guess X0: the higher-resolution image computed using the standard version of super-resolution.
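A hedged sketch of the iteratively re-weighted least-squares loop described above, reusing the hypothetical helpers from the previous sketches (standard_super_resolution for the initial guess, cauchy_weight for the weights); the solver details and iteration counts are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def robust_super_resolution(F_list, x_list, n_hi, c=1.0, n_iters=10):
    """Robust super-resolution by IRLS with a Cauchy estimator (sketch)."""
    # Initial guess X0: the standard least-squares reconstruction.
    X = standard_super_resolution(F_list, x_list, n_hi)
    for _ in range(n_iters):
        # Diagonal weights from the current residuals e_j = F_j X - x_j.
        W_list = [cauchy_weight(F @ X - x, c) for F, x in zip(F_list, x_list)]

        def matvec(v):
            # Applies sum_j F_j^T W_j F_j v with the current weights.
            return sum(F.T @ (w * (F @ v)) for F, w in zip(F_list, W_list))

        A = LinearOperator((n_hi, n_hi), matvec=matvec)
        b = sum(F.T @ (w * x) for F, w, x in zip(F_list, W_list, x_list))
        X, _ = cg(A, b, x0=X, maxiter=100)
    return X
```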
The Algorithm • Input: L, a list of planes in 3D space; Ij, the input images; Pj, the camera matrices • Set O = ∅ (the set of occluding 3D points detected so far) • Randomly partition the image set into two or more disjoint subsets S1, ..., SR. • For each plane π in L do: • For each subset Si do: • Use π, Pj and O to compute the image formation matrices Fj • X0 = standard super-resolution of the images in Si • Yi = robust super-resolution of the images in Si, using X0 as the initial guess • R = the region of π where all the Yi coincide • Assume R is the portion of the scene that lies on π • O = O ∪ R (the whole loop is sketched in code below)
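A compact sketch of this loop, under the assumption that robust_super_resolution and coincidence_region behave as in the earlier sketches, and that build_formation_matrices(plane, cameras, explained) is a hypothetical helper assembling the Fj for a plane while masking image pixels of already-explained points; plane.grid_shape and plane.points_in_region are likewise hypothetical conveniences.

```python
import numpy as np

def plane_sweep_super_resolution(planes, images, P_list, n_hi, n_subsets=2):
    """Plane-by-plane robust super-resolution (illustrative sketch).

    planes: plane parameterizations, ordered by depth;
    images / P_list: calibrated low-resolution images and camera matrices.
    Returns, for each plane, the coincidence region and one reconstruction.
    """
    rng = np.random.default_rng(0)
    subsets = np.array_split(rng.permutation(len(images)), n_subsets)

    explained = set()          # O: 3D points already assigned to a plane
    results = []
    for plane in planes:
        recons = []
        for subset in subsets:
            # Hypothetical helper: builds F_j for this plane and these views,
            # ignoring pixels that image already-explained 3D points.
            F_list = build_formation_matrices(
                plane, [P_list[j] for j in subset], explained)
            x_list = [images[j].ravel() for j in subset]
            # Computes X0 internally via the standard step, then runs IRLS.
            Y = robust_super_resolution(F_list, x_list, n_hi)
            recons.append(Y.reshape(plane.grid_shape))
        region = coincidence_region(recons)           # where all Y_i agree
        explained |= plane.points_in_region(region)   # O = O union R
        results.append((plane, region, recons[0]))
    return results
```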
Observations • For the algorithm to be effective: • The 3D planes should be uniformly distributed, close to one another, and should cover the full depth range of the scene, so that every portion of the scene is detected and occlusions are handled correctly. • The scene should be highly textured for the similarity check to work well.
Experimental Results Synthetic scene:
Experimental Results • 12 images were used as input • 2 disjoint subsets
Experimental Results Standard vs Robust Super-Resolution:
Experimental Results Output images:
Observations and Future Work • Results get worse as the algorithm advances through the list of 3D planes, because errors accumulate in determining precisely which portion of the scene lies on each plane. • Using more images, and partitioning the image set into more than just two subsets, should improve the accuracy of the similarity check. • Using color images instead of just B&W: the super-resolution steps of the algorithm would be applied to each color channel separately, and similarity would be assumed where the reconstruction difference is close to zero in all three channels (see the sketch below).
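A brief sketch of the per-channel extension described above, assuming the coincidence test from the earlier sketch and that robust super-resolution has already been run on each channel separately; requiring agreement in all three channels is implemented here as a logical AND of the per-channel masks.

```python
import numpy as np

def color_coincidence(recons_per_subset, threshold=10.0):
    """recons_per_subset: list (one per image subset) of H x W x 3 arrays
    obtained by super-resolving each color channel separately.
    A plane pixel is accepted only if all subsets agree in all three channels."""
    masks = [coincidence_region([Y[..., ch] for Y in recons_per_subset], threshold)
             for ch in range(3)]
    return np.logical_and.reduce(masks)
```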