Introduction to Computer Vision Lecture 10 Dr. Roger S. Gaborski
Grayscale Morphological Processing • Dilation: local maximum operator • Maximum value in neighborhood • Erosion: local minimum operator • Minimum value in neighborhood
Original Color Image; Cropped, Grayscale, and Resized
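A minimal MATLAB sketch of this preprocessing, assuming a hypothetical file name, crop rectangle, and scale factor (none of these values are given in the slides):
img = imread('scene.jpg');               % hypothetical input image
imgCrop = imcrop(img, [50 50 400 300]);  % hypothetical crop rectangle [xmin ymin width height]
imgGray = rgb2gray(imgCrop);             % convert the color image to grayscale
imgSmall = imresize(imgGray, 0.5);       % resize by an assumed scale factor
imshow(imgSmall)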
se7 = strel('square', 7);
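With this structuring element, grayscale dilation and erosion follow directly; a short sketch (I denotes the grayscale image from the previous slide):
I = imgSmall;                % grayscale image from the preprocessing sketch above
se7 = strel('square', 7);    % 7x7 square structuring element
Idil = imdilate(I, se7);     % dilation: local maximum over each neighborhood
Iero = imerode(I, se7);      % erosion: local minimum over each neighborhood
figure, imshow([Idil, Iero])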
Grayscale Morphological Processing • Morphological Gradient: Dilated image – eroded image • Half-gradient by erosion: Image – eroded image • Half-gradient by dilation: Dilated image – image (see the sketch below)
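All three gradients can be computed from one dilation and one erosion; a minimal sketch using the flat 3x3 structuring element from the next slide:
se = ones(3);
Idil = imdilate(I, se);
Iero = imerode(I, se);
gradFull = Idil - Iero;   % morphological gradient
gradEro  = I - Iero;      % half-gradient by erosion (internal gradient)
gradDil  = Idil - I;      % half-gradient by dilation (external gradient)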
Morphological Gradient: Dilated image – Eroded image (se = ones(3))
Morphological Gradient: Dilated image – Eroded image (se = ones(7))
Directional Gradients • horizontal_gradient = imdilate(I, seh) - imerode(I, seh); • vertical_gradient = imdilate(I, sev) - imerode(I, sev); • where I is the image and • seh = strel([1 1 1]); • sev = strel([1; 1; 1]);
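The thresholded gradient images on the following slides require choosing a cutoff; a sketch with an assumed threshold value (the value used in the lecture is not given):
T = 40;                                        % assumed threshold on the gradient
horizontal_edges = horizontal_gradient > T;    % binary map from the horizontal gradient
vertical_edges   = vertical_gradient   > T;    % binary map from the vertical gradient
figure, imshow(horizontal_edges), figure, imshow(vertical_edges)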
Horizontal Gradient (Gradient Perpendicular to Edge)
Threshold: Horizontal Gradient (Gradient Perpendicular to Edge)
Vertical Gradient (Gradient Perpendicular to Edge)
Threshold: Vertical Gradient (Gradient Perpendicular to Edge)
Gaussian Model for Skin Detection • Skin detection model based on color • Model must represent: • Different skin colors • Shading issues • Lighting conditions
Pattern Recognition • After we make a number of measurements (color, texture, etc.) on our image, we would like to classify the image into relevant regions. • In an outdoor scene we may be interested in finding people, cars, roads, signs, etc. • We may want to classify an image as either an outdoor scene or an indoor scene.
General Pattern Classifier: Color Image → Feature Extractor Algorithm → Feature Vector → Classifier Algorithm
Training and Testing Datasets • Training data is used to train the classifier, i.e., to find the parameters of the classifier. For a Gaussian classifier we need to estimate the mean and variance of the data for each class (see the sketch below). • Testing data is a separate set of data that is not used during training, but is used to test the classifier.
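A minimal sketch of the training step for a one-feature Gaussian classifier, assuming the class samples are stored in hypothetical vectors class1_train and class2_train:
mu1 = mean(class1_train);   sigma1 = std(class1_train);   % class 1 parameters
mu2 = mean(class2_train);   sigma2 = std(class2_train);   % class 2 parameters
% the separate testing set is held out and used only to evaluate the trained classifier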
One Dimensional Gaussian Distribution
f(x) = (1/(σ√(2π))) exp(-(x-μ)² / (2σ²))
σ is the standard deviation
σ² is the variance
μ is the mean (expected value)
Assume our data can be modeled by a Gaussian distribution
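This density can be evaluated directly, or with normpdf from the Statistics Toolbox; a quick check with illustrative values:
mu = 50; sigma = 10; x = 52;                                  % illustrative parameters
p  = (1/(sigma*sqrt(2*pi))) * exp(-(x - mu)^2 / (2*sigma^2));
p2 = normpdf(x, mu, sigma);                                   % same value via the toolbox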
One Feature - Two Classes (Class 1 and Class 2 distributions)
The distributions on the previous slide were determined from the training data • During the testing phase, a feature measurement of 52 was obtained • Which class is more likely? • Prob_class1(52) is 0.0415 • Prob_class2(52) is 5.0686e-104 • It is much more likely that the feature was obtained from a class 1 object
Decision Boundary • The decision boundary is the feature measurement that is equally likely to be from either class • Based on the data from the previous slide, the boundary is between 120 and 130 • At g = 128 the probability of being in • class 1 is 1.8053e-010 • class 2 is 2.7734e-010 • Values greater than 128 will be classified as class 2 • NOTE: We are making the assumption that either class is equally likely
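A sketch of the likelihood comparison and of locating the boundary numerically; the class means and standard deviations below are assumptions for illustration, not the parameters behind the numbers quoted above:
mu1 = 60;  sigma1 = 10;   % assumed class 1 parameters
mu2 = 190; sigma2 = 10;   % assumed class 2 parameters
g  = 52;
p1 = normpdf(g, mu1, sigma1);   % likelihood under class 1
p2 = normpdf(g, mu2, sigma2);   % likelihood under class 2
% decision boundary: the feature value where the two likelihoods are equal
% (assumes equal prior probability for the two classes)
boundary = fzero(@(x) normpdf(x, mu1, sigma1) - normpdf(x, mu2, sigma2), (mu1 + mu2)/2);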
Overlapping Distributions: Class 1 and Class 2
Consider a feature value of 100 • It is more likely that the measurement is from a class 1 object (p = 0.0469) • BUT it is possible that the measurement is from a class 2 object (p = 0.0194) • Consider a feature value of 105 • Class 1, p = 0.025 • Class 2, p = 0.0437
Class 1 and Class 2 distributions
GAUSSA Data Values (feature 1, feature 2):
-5.1835  -0.2436
-3.7354   2.2846
-4.2500   0.9237
-4.7727   0.2746
-2.6909   1.7032
-1.8355  -1.9171
-3.9272   1.4428
-5.7341   1.0020
-2.7673  -1.1168
-3.5304   0.2908
-4.2395  -0.4503
-4.4181   0.7386
-7.1552   0.3653
-4.1241  -2.3290
-3.5944   0.4242
-2.4388   1.0503
-4.2229  -0.8453
-3.0669  -0.0464
-2.3523  -1.4821
-2.8434  -1.9902
Each sample measurement has two values: the first is the value for feature 1 and the second is the value for feature 2. The GAUSSx data sets have 2000 entries.
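Such a data set can be scatter-plotted and its Gaussian parameters estimated; a sketch assuming the 2000 samples are in a 2000x2 matrix named GAUSSA (the mvnrnd line is only an illustrative stand-in, since the true generating parameters are not given):
% GAUSSA = mvnrnd([-4 0], eye(2), 2000);   % illustrative synthetic stand-in for the data set
mu_hat    = mean(GAUSSA);                  % estimated 1x2 mean vector
Sigma_hat = cov(GAUSSA);                   % estimated 2x2 covariance matrix
plot(GAUSSA(:,1), GAUSSA(:,2), '.')
xlabel('Feature 1'), ylabel('Feature 2')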
GAUSSA Data
GAUSSB Data
GAUSSC Data
GAUSSD Data
GAUSSE Data
GAUSSF Data
GAUSSG Data
GAUSSH Data
GAUSSA and GAUSSB
Model Response: GAUSSA and GAUSSB (contour3)
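A sketch of how such a model-response plot can be generated, assuming Gaussian models are fit to the GAUSSA and GAUSSB matrices (grid limits are illustrative):
muA = mean(GAUSSA);  SigmaA = cov(GAUSSA);   % model fit to GAUSSA
muB = mean(GAUSSB);  SigmaB = cov(GAUSSB);   % model fit to GAUSSB
[X, Y] = meshgrid(-10:0.1:10, -10:0.1:10);
gridPts = [X(:), Y(:)];
pA = reshape(mvnpdf(gridPts, muA, SigmaA), size(X));
pB = reshape(mvnpdf(gridPts, muB, SigmaB), size(X));
figure, contour3(X, Y, pA, 20), hold on, contour3(X, Y, pB, 20)
xlabel('Feature 1'), ylabel('Feature 2')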
GAUSSF and GAUSSH
GAUSSD and GAUSSE
Gaussian Probability Distribution • Need to include all covariance terms unless the covariance matrices of the distributions are identical
Multivariate Gaussian Distribution Model
A point in a feature space is represented by a vector x of M dimensions. Assume each of the M features has a Gaussian distribution. Represent the probability of a particular feature-set measurement as a multivariate Gaussian distribution:
p(x) = (1 / ((2π)^(M/2) |Σ|^(1/2))) exp(-(1/2) (x-μ)ᵀ Σ⁻¹ (x-μ))
|Σ| : determinant of the covariance matrix
Σ⁻¹ : inverse of the covariance matrix
μ : M-component mean vector (M = number of features)
ᵀ : transpose
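In MATLAB this density can be evaluated with mvnpdf or written out directly; a small check with an illustrative mean vector and covariance matrix:
mu    = [1 2];                 % illustrative mean vector (M = 2)
Sigma = [1 -0.7; -0.7 1];      % illustrative covariance matrix
x     = [1.5 1.0];
M     = numel(mu);
d     = x - mu;
p  = mvnpdf(x, mu, Sigma);                                              % toolbox evaluation
p2 = exp(-0.5 * (d / Sigma) * d') / ((2*pi)^(M/2) * sqrt(det(Sigma)));  % same value, written out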
We again would like to find the distribution with the largest probability. We proceed as follows: take the log of both sides of the equation (this gets rid of the exponent) and lump the constant terms into a single constant. We then have:
const1 = (x-μ)ᵀ Σ⁻¹ (x-μ)
This equation defines an ellipsoid in M-dimensional space.
Two-class problem: ellipsoids of constant probability for data from class 1 and for data from class 2, plotted with Feature 1 on the horizontal axis and Feature 2 on the vertical axis. The decision boundary passes through the equal-probability point; feature vectors to the right of the black dashed line belong to class 2.
Bivariate Gaussian Distribution, M = 2
μ = [m1; m2]     Σ = [s11 s12; s21 s22]
Σ⁻¹ = (1 / (s11·s22 - s12·s21)) · [s22 -s12; -s21 s11]
Example:
μ = [1; 2]     Σ = [1 -0.7; -0.7 1]     |Σ| = 0.51
Σ⁻¹ = [1.96 1.37; 1.37 1.96]
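The example values can be verified directly in MATLAB:
Sigma = [1 -0.7; -0.7 1];
det(Sigma)    % 0.51
inv(Sigma)    % approximately [1.96 1.37; 1.37 1.96]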
Bivariate (2 Features)
const1 = (x-μ)ᵀ Σ⁻¹ (x-μ)
       = [x1-μ1, x2-μ2] · Σ⁻¹ · [x1-μ1; x2-μ2]      (1x2 times 2x2 times 2x1)
Plug in the numbers from the example above:
1.96(x1-1)² + 2.745(x1-1)(x2-2) + 1.96(x2-2)² = const1
which has the form a·x1² + b·x1·x2 + c·x2² = const, where x1 here stands for (x1-μ1), x2 for (x2-μ2), etc.
a·x1² + b·x1·x2 + c·x2² = const
This is the equation of an ellipse, so the contours of constant probability are ellipses.
EXAMPLE 2: What if the off-diagonal terms of the covariance matrix equal 0?
μ = 0     Σ = [1 0; 0 1]     Σ⁻¹ = [1 0; 0 1]
[x1, x2] · [1 0; 0 1] · [x1; x2] = C
[x1·1 + x2·0, x1·0 + x2·1] · [x1; x2] = [x1, x2] · [x1; x2] = x1² + x2² = C
Contours of constant probability are circles.
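A short sketch that draws contours for both cases, using the earlier example covariance and the identity covariance of EXAMPLE 2; it produces elliptical contours in the first case and circular contours in the second:
[X, Y] = meshgrid(-4:0.05:6, -4:0.05:7);
gridPts = [X(:), Y(:)];
pEllipse = reshape(mvnpdf(gridPts, [1 2], [1 -0.7; -0.7 1]), size(X));  % correlated features
pCircle  = reshape(mvnpdf(gridPts, [0 0], eye(2)),           size(X));  % EXAMPLE 2: mu = 0, identity covariance
figure, contour(X, Y, pEllipse), axis equal, title('Elliptical contours')
figure, contour(X, Y, pCircle),  axis equal, title('Circular contours')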