Introduction to Computer Vision, Lecture 09
Roger S. Gaborski
Extract and Analyze Brandy
>> im = imread('IMGP1715.JPG');
>> imSm = imresize(im, .25);
>> figure, imshow(imSm)
Approaches
1. Gray scale thresholding
1. Gray scale thresholding
• Approach: first convert to gray scale (losing color information), then threshold
>> imSmGray = rgb2gray(imSm);
>> imSmGray = im2double(imSmGray);
>> figure, imshow(imSmGray)
>> figure, imshow(im2bw(imSmGray, graythresh(imSmGray)))
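The `graythresh` call above computes an automatic threshold using Otsu's method, which picks the gray level that maximizes the between-class variance of the resulting foreground/background split. The lecture works in MATLAB; the following is a minimal pure-Python sketch of the same idea, run on a toy bimodal pixel list (the values are illustrative, not taken from the Brandy image):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold (0..levels-1) that maximizes between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0      # cumulative intensity sum of the background class
    w_b = 0          # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: a dark cluster near 40-45 and a bright cluster near 200-210
pixels = [40] * 50 + [45] * 30 + [200] * 60 + [210] * 40
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

On a clearly bimodal histogram like this toy one, the threshold lands between the two clusters; the next slide shows why the same automatic threshold fails on the Brandy image.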
Clearly unsuccessful. Why did it fail? The intensity values of Brandy's pixels are very close to those of the background pixels, which makes segmentation based on the intensity distribution difficult.
Grayscale Histogram
>> max(imSmGray(:))
ans = 0.9804
>> min(imSmGray(:))
ans = 0.0510
>> figure, imhist(imSmGray)
No clear separating line between dog and background.
Approaches
1. Gray scale thresholding
2. Detect edges and then segment
2. Edge Detection: Sobel
2. Edge Detection: Laplacian of Gaussian
2. Edge Detection: Canny
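The Sobel detector shown above estimates the intensity gradient with two 3x3 kernels (one per direction) and takes the magnitude. As a hedged illustration of why all three edge detectors fail here, the sketch below implements a bare-bones Sobel pass in Python on a tiny synthetic image with one strong vertical step edge; on the real Brandy image, the dog/grass boundary produces almost no step in intensity, so these responses stay near zero there:

```python
def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (interior pixels only)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(kx[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# 6x6 toy image with a vertical step edge between columns 2 and 3
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
mag = sobel_magnitude(img)
```

The magnitude is large only in the two columns straddling the step and zero in the flat regions, which is exactly the behavior that breaks down when "flat" and "edge" intensities are nearly equal.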
Reason for the failures
• Gray scale thresholding and edge detection: both algorithms work in gray scale space, taking into account only the intensity values of the pixels. However, the intensities of the dog and the grass are very similar, which makes the noise very hard to eliminate. The edge detection algorithms also fail in this case.
• They ignore the most informative component: the distinct colors of the brown dog and the green grass.
Approaches
1. Gray scale thresholding
2. Detect edges and then segment
3. Color segmentation
Color space: RGB
Euclidean distance
Mahalanobis distance
3. Color Segmentation: Euclidean Distance
Manually select pixels
3. Color Segmentation: Mahalanobis Distance
Brandy; noise from the brown earth
Manually select pixels
Original Brandy picture: the brown earth regions have a color very similar to Brandy's.
Discussion
None of the three planes alone will work for segmenting Brandy from the grass. However, by combining the R, G, and B planes we can roughly segment Brandy from the grass with Euclidean distance, and achieve a desirable segmentation with Mahalanobis distance (which takes into account the correlations between the color planes).
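The two distance measures above can be sketched concretely. The lecture works in MATLAB; below is a self-contained Python version with hypothetical, manually selected "Brandy brown" RGB samples (not taken from the actual image). Euclidean distance treats the three color planes independently, while Mahalanobis distance weights them by the inverse sample covariance, so correlated variation among R, G, and B within the sampled region counts as "close":

```python
def mean_vec(samples):
    """Mean RGB vector of the manually selected sample pixels."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(3)]

def covariance(samples, mu):
    """Sample covariance matrix of the RGB samples."""
    n = len(samples)
    cov = [[0.0] * 3 for _ in range(3)]
    for s in samples:
        d = [s[i] - mu[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                cov[i][j] += d[i] * d[j] / (n - 1)
    return cov

def inverse_3x3(m):
    """Invert a 3x3 matrix via the adjugate (assumes it is non-singular)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][k] / det for k in range(3)] for r in range(3)]

def euclidean(p, mu):
    return sum((p[i] - mu[i]) ** 2 for i in range(3)) ** 0.5

def mahalanobis(p, mu, cov_inv):
    d = [p[i] - mu[i] for i in range(3)]
    return sum(d[i] * cov_inv[i][j] * d[j]
               for i in range(3) for j in range(3)) ** 0.5

# Hypothetical manually selected "Brandy brown" RGB samples
samples = [(120, 80, 40), (130, 85, 45), (110, 90, 35), (125, 70, 50), (118, 82, 60)]
mu = mean_vec(samples)
cov = covariance(samples, mu)
ci = inverse_3x3(cov)
# Sanity check: ci * cov should be (numerically) the identity
check = [[sum(ci[r][k] * cov[k][c] for k in range(3)) for c in range(3)]
         for r in range(3)]
grass, brandy_like = (60, 160, 60), (122, 83, 45)
```

Thresholding either distance produces a binary mask; the slides show that the Mahalanobis version cleans up most of the noise, at the cost of picking up brown earth pixels with similar color statistics.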
Individual Color Planes
>> figure, subplot(2,3,1), imshow(imSm(:,:,1)), title('Red')
>> subplot(2,3,2), imshow(imSm(:,:,2)), title('Green')
>> subplot(2,3,3), imshow(imSm(:,:,3)), title('Blue')
>> subplot(2,3,4), imshow(im2bw(imSm(:,:,1), graythresh(imSm(:,:,1)))), title('Red Threshold')
>> subplot(2,3,5), imshow(im2bw(imSm(:,:,2), graythresh(imSm(:,:,2)))), title('Green Threshold')
>> subplot(2,3,6), imshow(im2bw(imSm(:,:,3), graythresh(imSm(:,:,3)))), title('Blue Threshold')
HSV Color Space
Displaying the three HSV planes combined as one image does not seem to work:
>> imH = rgb2hsv(imSm);
>> figure, imshow(imH)
The distinct hues of the brown color (Brandy) and the green color (grass) make the Hue plane perfect for separating the dog from the background; the Saturation and Value planes are hard to distinguish.
>> figure, subplot(1,3,1), imshow(imH(:,:,1)), title('Hue')
>> subplot(1,3,2), imshow(imH(:,:,2)), title('Saturation')
>> subplot(1,3,3), imshow(imH(:,:,3)), title('Value')
Histogram distribution in Hue space
>> imhist(imH(:,:,1))
The grass distribution and the dog distribution are divided by a clear separating line.
>> level = graythresh(imH(:,:,1))
level = 0.1725 (automatic threshold)
>> figure, imshow(imH(:,:,1) > level)
Dog pixels have gray level value 0. Still, the brown earth has a hue very similar to Brandy's.
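The hue thresholding above can be mirrored in Python using the standard-library `colorsys` module (the lecture uses MATLAB's `rgb2hsv`; `colorsys.rgb_to_hsv` returns hue on the same 0..1 scale). The pixel values below are hypothetical stand-ins for grass-green and Brandy-brown, and the threshold 0.1725 is the `graythresh` value from the slide:

```python
import colorsys

def hue_mask(rgb_pixels, level):
    """Binary mask: True where hue exceeds the threshold (cf. imH(:,:,1) > level)."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(h > level)
    return mask

# Hypothetical pixels: two grass-green, then two Brandy-brown
pixels = [(60, 160, 60), (50, 150, 40), (150, 100, 60), (140, 90, 50)]
mask = hue_mask(pixels, 0.1725)   # threshold from graythresh in the slide
```

Green hues (~0.33) land above the threshold and brown hues (~0.07) below it, matching the slide's observation that dog pixels map to 0 in the thresholded image.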
Summary
Color is the ideal descriptor for segmenting Brandy from the grass (distinct colors).
Edge detection algorithms fail when the intensity values of adjacent pixels are very similar.
We will continue with color segmentation and morphological processing in the next lecture.
Follow-up assignments on region growing and color segmentation will be posted on the course website shortly. You will be informed when they are posted.
Approaches
• We look at two approaches to color segmentation:
• Region segmentation using a distance measurement
• Region growing using seeds and a distance measurement
L*a*b Color Space
• 'L*' is the luminosity (brightness) layer
• 'a*' is a chromaticity layer indicating where the color falls along the red-green axis
• 'b*' is a chromaticity layer indicating where the color falls along the blue-yellow axis
Distance Measure in L*a*b Space
• Use only the a and b planes
• Manually sample the image
• Estimate the mean a and b values
• Calculate distance as before
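The four steps above can be sketched compactly. This Python fragment (the lecture itself uses MATLAB) assumes a few hypothetical, manually sampled dog pixels already converted to L*a*b; it drops L*, averages a* and b*, and measures Euclidean distance in the (a*, b*) plane:

```python
def ab_distance(pixel_lab, mean_ab):
    """Euclidean distance in the (a*, b*) plane, ignoring L*."""
    _, a, b = pixel_lab
    return ((a - mean_ab[0]) ** 2 + (b - mean_ab[1]) ** 2) ** 0.5

# Hypothetical manually sampled dog pixels in L*a*b (illustrative values)
samples = [(55.0, 18.0, 30.0), (60.0, 20.0, 28.0), (50.0, 22.0, 32.0)]
mean_ab = (sum(s[1] for s in samples) / 3, sum(s[2] for s in samples) / 3)

d_dog = ab_distance((52.0, 19.0, 31.0), mean_ab)      # pixel near the samples
d_grass = ab_distance((70.0, -35.0, 40.0), mean_ab)   # green: strongly negative a*
```

Ignoring L* makes the measure insensitive to brightness differences (shadow vs. sunlit fur), which is the point of working in a chromaticity plane.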
K-means Clustering
• MATLAB: IDX = kmeans(X,k) partitions the points in the n-by-p data matrix X into k clusters
• How many clusters? k
• Distance measure: Euclidean
Very Simple Example
• Consider 1-dimensional data (the algorithm works with n-dimensional data)
• Assume k = 2
• Assume initial cluster centers: randomly pick 2 values
Very Simple Example
• Measure the distance between the centers and the remaining points
• Assign points to the closer center
• Recalculate centers based on membership:
z1 = 1
z2 = (2+3+5+6+7+8+9)/7 = 5.7143
Very Simple Example
Centers: 1.0000 and 5.7143
• Assign points to the closer new center
• Recalculate centers based on membership:
z1 = (1+2+3)/3 = 2.0
z2 = (5+6+7+8+9)/5 = 7.0
Very Simple Example
No points are reassigned, so we are done:
z1 = 2.0
z2 = 7.0
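The worked example above can be reproduced in a few lines of Python (the lecture's tools are MATLAB, so this is just an illustrative translation). The data is {1,2,3,5,6,7,8,9} with initial centers 1 and 2, as in the slides; the loop alternates assignment and center recalculation until no center changes:

```python
def kmeans_1d(points, centers):
    """Iterate assign/update until the centers stop changing (Euclidean distance).
    Assumes no cluster ever goes empty (true for this example)."""
    while True:
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        new_centers = [sum(c) / len(c) for c in clusters]
        if new_centers == centers:
            return centers, clusters
        centers = new_centers

points = [1, 2, 3, 5, 6, 7, 8, 9]
centers, clusters = kmeans_1d(points, [1.0, 2.0])   # initial centers from the slides
```

The first update gives centers 1.0 and 5.7143, the second gives 2.0 and 7.0, and the third pass reassigns nothing, matching the slides step for step.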
K-means Clustering
• The K-means algorithm has self-organizing properties
• n-dimensional vectors may be considered points in an n-dimensional Euclidean space
By a Euclidean space we mean an R^n space with the distance between vectors x and y defined as:
d(x,y) = sqrt( (x1-y1)^2 + (x2-y2)^2 + … + (xn-yn)^2 )
Euclidean norm, or length, of x:
||x|| = sqrt( x1^2 + x2^2 + … + xn^2 )
• K-means is one of many techniques that use the notion of clustering by minimum distance
Why does using minimum distance make sense?
K-means Clustering
• Two vectors that represent points in n-space and are geometrically close may in some sense belong together
• Notation:
Norm or length of vector x: ||x|| = ( Σi xi^2 )^(1/2)
Distance between two vectors: ||x-z|| = ( Σi (xi-zi)^2 )^(1/2)
K-Means Algorithm
• We measure how close vectors are to each other
• We establish cluster centers and partition the vectors among them, such that the distance between a vector and its assigned cluster center is minimal with respect to the other centers
• With k-means you need to know the number of cluster centers in advance
K-Means Algorithm
• Set of input vectors: {x(1), x(2), …, x(p)}
• z represents a cluster center; it gives the position in Euclidean space at which the cluster center is located. Since there are k centers, we have z1, z2, …, zk
• Sj represents the set of samples that belong to the jth cluster
Procedure
1. Initialize
Choose the number of clusters, k
For each of the k clusters, choose an initial center { z1(l), z2(l), …, zk(l) }, where zj(l) represents the value of the jth cluster center at the lth iteration
2. Distribute all the sample vectors
Assign each sample vector x(p) to one of the clusters:
x(p) ∈ Sj(l) if || x(p) - zj(l) || < || x(p) - zi(l) ||
for all i = 1, 2, 3, …, k, i ≠ j
Procedure (continued)
3. Calculate new cluster centers
Using the new cluster membership, recalculate each center such that the sum of the distances from each member to the new center is minimized:
zj(l+1) = (1/Nj) Σ x(p), summed over x(p) ∈ Sj(l)
where Nj is the number of samples assigned to Sj
If zj(l+1) = zj(l) for all j, no cluster center has changed and the algorithm terminates. Otherwise, go to step 2 and reassign the vectors.
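The three-step procedure generalizes directly to n-dimensional vectors. As a hedged sketch (in Python rather than the lecture's MATLAB), the loop below distributes vectors by squared Euclidean distance (step 2), recomputes each center as the mean of its members (step 3), and stops when no center changes; the two-blob data is made up for illustration:

```python
def kmeans(vectors, centers, max_iter=100):
    """K-means per the procedure: distribute vectors, recompute centers,
    stop when no center changes. Assumes no cluster ever goes empty."""
    clusters = []
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for x in vectors:                       # step 2: distribute
            dists = [sum((xi - zi) ** 2 for xi, zi in zip(x, z)) for z in centers]
            clusters[dists.index(min(dists))].append(x)
        new_centers = [tuple(sum(coord) / len(c) for coord in zip(*c))
                       for c in clusters]       # step 3: recompute centers
        if new_centers == centers:              # convergence test
            break
        centers = new_centers
    return centers, clusters

# Two well-separated 2-D blobs (illustrative data)
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(data, [(0.0, 0.0), (10.0, 10.0)])
```

With well-separated blobs and sensible initial centers, convergence takes one update; poor initial centers can converge to a worse local minimum, which is why the choice of k and the initialization matter.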
• Convert the RGB color image to L*a*b color space
• Classify pixels in L*a*b space using k-means clustering
• Ignore the L information; use a and b
• Label the pixels and display
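The pipeline above can be sketched end to end, minus the color-space conversion itself. This Python fragment (again, the lecture uses MATLAB) assumes a tiny hypothetical image whose pixels are already in L*a*b; it drops L*, forms (a*, b*) feature vectors, assigns each pixel to the nearer of two seed centers, and reshapes the labels back to the image grid for display:

```python
# Hypothetical L*a*b pixels for a 2x3 image (values are illustrative)
lab_image = [
    [(55.0, 20.0, 30.0), (52.0, 18.0, 32.0), (70.0, -35.0, 40.0)],
    [(50.0, 22.0, 28.0), (68.0, -30.0, 38.0), (72.0, -38.0, 42.0)],
]

# Ignore L*; keep (a*, b*) as the feature vector for each pixel
features = [(a, b) for row in lab_image for (L, a, b) in row]

def assign(features, centers):
    """Label each feature vector with the index of its nearest center."""
    labels = []
    for x in features:
        dists = [sum((f - c) ** 2 for f, c in zip(x, ctr)) for ctr in centers]
        labels.append(dists.index(min(dists)))
    return labels

# One assignment pass of 2-means on the (a*, b*) features,
# seeded (hypothetically) from manually sampled dog and grass pixels
centers = [(20.0, 30.0), (-35.0, 40.0)]
labels = assign(features, centers)

# Reshape the flat label list back to the image grid for display
h, w = len(lab_image), len(lab_image[0])
label_image = [labels[r * w:(r + 1) * w] for r in range(h)]
```

Each entry of `label_image` is a cluster index; displaying it as an indexed image gives the segmented result, with dog-like (positive a*) pixels in one cluster and grass-like (negative a*) pixels in the other.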