Image co-registration for remote sensing applications
Image Co-registration for Remote Sensing • Aerial images need to be co-registered for comparison with images that are already registered to the ground. • Example: co-registration with maps, or with other images obtained at different times or from different angles. • Co-registration means applying a geometric transformation so that the coordinates of one image map onto the coordinates of the other. • The type of geometric transformation to apply depends on the type of distortion.
Problem • Possible geometric distortions for aerial photos • Rotation • Scaling • Shift • Shear • Tilt
Type of transformation to use • 'linearconformal' when shapes in the input image are unchanged, but the image is distorted by some combination of translation, rotation, and scaling. Straight lines remain straight, and parallel lines are still parallel.
Type of transformation to use • 'affine' when shapes in the input image exhibit shearing. Straight lines remain straight, and parallel lines remain parallel, but rectangles become parallelograms.
Type of transformation to use • 'projective' when the scene appears "tilted." Straight lines remain straight, but parallel lines converge toward "vanishing points" (which may or may not fall within the image).
Linear Conformal transform x′ = s·x·cos(α) − s·y·sin(α) + Tx, y′ = s·x·sin(α) + s·y·cos(α) + Ty, where • s is a scaling coefficient, • α is a rotation angle, • Tx and Ty are shift parameters in the X and Y directions, respectively, • (x,y) and (x′,y′) are the coordinates of a point in the input and the base images, respectively. • Needs coordinates of at least 2 points in both images to solve the system of equations.
Control points • The selected points are called control points: distinct landmarks such as road intersections. • They can be selected manually or automatically. • Accuracy of selection is crucial for accurate co-registration. • Using more points than the minimum can increase the accuracy of co-registration.
Matrix form of the conformal transform • [x′ y′ 1] = [x y 1]·T, where the matrix of transform parameters is T = [s·cos(α) s·sin(α) 0; −s·sin(α) s·cos(α) 0; Tx Ty 1].
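As a concrete illustration (not from the original slides), the sketch below builds this matrix with the classic Image Processing Toolbox functions maketform and tformfwd, which use the row-vector convention [x′ y′ 1] = [x y 1]·T. The parameter values are made up purely for the example.

% Hypothetical parameter values, chosen only to illustrate the matrix form
s     = 1.2;            % scale
alpha = deg2rad(15);    % rotation angle
Tx = 40;  Ty = -25;     % shifts in X and Y

% Row-vector convention used by maketform/tformfwd: [x' y' 1] = [x y 1]*T
T = [ s*cos(alpha)   s*sin(alpha)  0
     -s*sin(alpha)   s*cos(alpha)  0
      Tx             Ty            1 ];

tconf = maketform('affine', T);    % a conformal transform is a special affine
xy    = [100 200; 150 250];        % two example input points
xy_t  = tformfwd(tconf, xy)        % their coordinates after the transform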
Affine Transform 'affine' needs 3 pairs of control points
Projective • Needs 4 pairs of control points
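A minimal sketch (assuming control-point arrays input_points and base_points, N-by-2, as produced in the cpselect step below) of how each transform type is estimated with cp2tform, with the minimum number of point pairs noted:

t_conf = cp2tform(input_points, base_points, 'linear conformal'); % needs >= 2 pairs
                        % ('nonreflective similarity' in newer toolbox releases)
t_aff  = cp2tform(input_points, base_points, 'affine');           % needs >= 3 pairs
t_proj = cp2tform(input_points, base_points, 'projective');       % needs >= 4 pairs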
Read two images
unregistered = imread('westconcordaerial.png');
figure, imshow(unregistered);
base = imread('westconcordorthophoto.png');
figure, imshow(base);
[Figures: the unregistered image, taken from an airplane, and the base orthophoto]
Control points selection: cpselect • The unregistered image is an RGB image, but cpselect only takes grayscale images, so pass it one plane of the RGB image. • Control points are selected manually using cpselect, an interactive tool. • cpselect(unregistered(:,:,1),'westconcordorthophoto.png') • Select points and save them (e.g. export them to the workspace).
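An alternative sketch (not in the original slides) for getting the selected points straight back into the workspace: the 'Wait' option of cpselect blocks until the tool is closed and returns the point lists directly.

% base was read above; the orthophoto is already grayscale
[input_points, base_points] = cpselect(unregistered(:,:,1), base, 'Wait', true);
% input_points and base_points are N-by-2 [x y] arrays, ready for cpcorr/cp2tform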
Selection, cont'd
cpstruct =
        inputPoints:      [4x2 double]
        basePoints:       [4x2 double]
        inputBasePairs:   [4x2 double]
        ids:              [4x1 double]
        inputIdPairs:     [4x2 double]
        baseIdPairs:      [4x2 double]
        isInputPredicted: [4x1 double]
        isBasePredicted:  [4x1 double]
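If the points were exported from the tool as a cpstruct like the one above, they can be converted to plain coordinate arrays with cpstruct2pairs:

[input_points, base_points] = cpstruct2pairs(cpstruct);   % keeps only valid, paired points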
CP tuning by cpcorr • To fine-tune the control points selected by eye, use cross-correlation: apply the cpcorr function. input_pts_adj = cpcorr(input_points, base_points, unregistered(:,:,1), base); • The cpcorr function defines 11-by-11 regions around each control point in the input image and around the matching control point in the base image, and then calculates the correlation between the values at each pixel in the region. • Next, cpcorr looks for the position with the highest correlation value and uses it as the adjusted position of the control point. • cpcorr cannot adjust points if the two images differ significantly in scale or rotation.
Correlation
input_points =
   126   293
   172   203
   184   169
   151   210
input_pts_adj =
  126.0000  293.0000
  170.5000  200.5000
  187.0000  168.0000
  151.9000  211.2000
Type of the transform • Because we know that the unregistered image was taken from an airplane, and the topography is relatively flat, it is likely that most of the distortion is conformal. • cp2tform will find the parameters of the distortion that best fit input_points and base_points. t_concord = cp2tform(input_pts_adj, base_points, 'linear conformal');
Registration • Even though the points were picked on one plane of the unregistered image, you can transform the entire RGB image. imtransform will apply the same transformation to each plane. registered = imtransform(unregistered,t_concord); figure, imshow(registered);
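One common refinement (an assumption here, not part of the original slides): pass 'XData'/'YData' to imtransform so the registered image is produced on the base image's pixel grid, which makes overlay and direct comparison straightforward.

registered = imtransform(unregistered, t_concord, ...
                         'XData', [1 size(base,2)], ...
                         'YData', [1 size(base,1)]);
figure, imshowpair(base, registered, 'blend')   % imshowpair needs a newer toolbox;
                                                % otherwise show the two images side by side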
Affine: t_concord = cp2tform(input_pts_adj, base_points, 'affine');
Projective: t_concord = cp2tform(input_pts_adj, base_points, 'projective');
Conclude on the results • Select more control points for the higher-order transformations • Compare the registered results obtained with the different transform types
Register Las Vegas images • Read LV_92.jpg and LV_92_1.jpg from SPACE/DATA • Define the type of transform • Register images
MATLAB code
unregistered = imread('LV_92_1.jpg');
figure, imshow(unregistered);
base = imread('LV_92.jpg');
figure, imshow(base);
% cpselect with 'Wait' returns the selected points directly
[input_points, base_points] = cpselect(unregistered(:,:,1), base(:,:,1), 'Wait', true);
input_pts_adj = cpcorr(input_points, base_points, unregistered(:,:,1), base(:,:,1));
t_concord = cp2tform(input_pts_adj, base_points, 'projective');
registered = imtransform(unregistered, t_concord);
figure, imshow(registered);
Compare Las Vegas to Las Vegas [Figures: Las Vegas in 1972, in 1986, and in 1992]