
Introduction to Computer Vision


Presentation Transcript


  1. Introduction to Computer Vision Lecture 10 Dr. Roger S. Gaborski

  2. Morphological Processing

  3. Grayscale Morphological Processing • Dilation: local maximum operator • Maximum value in neighborhood • Erosion: local minimum operator • Minimum value in neighborhood
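  A minimal MATLAB sketch of these two operators (assumptions: the Image Processing Toolbox, and its bundled cameraman.tif as a stand-in image; neither is named in the lecture):

    I  = imread('cameraman.tif');   % grayscale test image (assumed stand-in)
    se = strel('square', 3);        % flat 3x3 structuring element
    Id = imdilate(I, se);           % each pixel becomes the local maximum
    Ie = imerode(I, se);            % each pixel becomes the local minimum
    subplot(1,3,1), imshow(I),  title('Original')
    subplot(1,3,2), imshow(Id), title('Dilated (local max)')
    subplot(1,3,3), imshow(Ie), title('Eroded (local min)')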

  4. Original Color Image; Cropped, Grayscale and Resized

  5. [Image-only slide]

  6. se7 = strel('square', 7);

  7. Grayscale Morphological Processing • Morphological Gradient: Dilated image – eroded image • Half-gradient by erosion: Image – eroded image • Half-gradient by dilation: Dilated image – image
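  A sketch of all three gradients (same toolbox and test-image assumptions as above; the 3x3 flat element matches slide 8):

    I  = imread('cameraman.tif');
    se = ones(3);                                      % 3x3 flat structuring element
    morph_gradient = imdilate(I,se) - imerode(I,se);   % dilated - eroded
    half_grad_ero  = I - imerode(I,se);                % image - eroded
    half_grad_dil  = imdilate(I,se) - I;               % dilated - image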

  8. Morphological Gradient: Dilated image – Eroded image, se = ones(3)

  9. Morphological Gradient: Dilated image – Eroded image, se = ones(7)

  10. [Image-only slide]

  11. Directional Gradients • horizontal_gradient = imdilate(I,seh) - imerode(I,seh); • vertical_gradient = imdilate(I,sev) - imerode(I,sev); • Where I is the image and • seh = strel([1 1 1]); • sev = strel([1;1;1]);
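  A runnable version of this slide, with the threshold step from slides 13 and 15 added; the 0.2 threshold is an assumed value, not from the lecture:

    I   = im2double(imread('cameraman.tif'));   % work in [0,1]
    seh = strel([1 1 1]);                       % horizontal line element
    sev = strel([1;1;1]);                       % vertical line element
    horizontal_gradient = imdilate(I,seh) - imerode(I,seh);
    vertical_gradient   = imdilate(I,sev) - imerode(I,sev);
    h_edges = horizontal_gradient > 0.2;        % assumed threshold value
    v_edges = vertical_gradient   > 0.2;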

  12. Horizontal Gradient (Gradient Perpendicular to Edge)

  13. Threshold: Horizontal Gradient (Gradient Perpendicular to Edge)

  14. Vertical Gradient (Gradient Perpendicular to Edge)

  15. Threshold: Vertical Gradient (Gradient Perpendicular to Edge)

  16. Gaussian Model for Skin Detection • Skin detection based on a color model • Model must represent: • Different skin colors • Shading issues • Lighting conditions

  17. Pattern Recognition • After we make a number of measurements (color, texture, etc.) on our image we would like to classify the image into relevant regions. • In an outdoor scene we may be interested in finding people, cars, roads, signs, etc. • We may want to classify an image as either an outdoor scene or an indoor scene.

  18. General Pattern Classifier: Color Image → Feature Extractor Algorithm → Feature Vector → Classifier Algorithm

  19. Classifiers for Gaussian Data Sets

  20. Training and Testing Datasets • Training data is used to train the classifier, that is, to find the parameters of the classifier. For a Gaussian classifier we need to estimate the mean and variance of the data for each class. • Testing data is a separate set of data that is not used during training, but is used to evaluate the classifier.
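  A sketch of this protocol for a one-feature Gaussian classifier; the synthetic data and the parameter values are assumptions for illustration only:

    % Training: estimate each class's parameters from labeled samples
    train1 = 50  + 10*randn(1000,1);    % synthetic class 1 training data
    train2 = 150 + 15*randn(1000,1);    % synthetic class 2 training data
    mu1 = mean(train1);  sigma1 = std(train1);
    mu2 = mean(train2);  sigma2 = std(train2);
    % Testing: held-out samples are classified using only the
    % parameters estimated above; they are never re-fit on test data
    test = [50 + 10*randn(500,1); 150 + 15*randn(500,1)];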

  21. One Dimensional Gaussian Data

  22. One Dimensional Gaussian Distribution: f(x) = (1/(σ√(2π))) exp(−(x−μ)² / (2σ²)), where σ is the standard deviation, σ² is the variance, and μ is the mean (expected value). Assume our data can be modeled by a Gaussian distribution.
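  This density translates directly to MATLAB; normpdf from the Statistics Toolbox computes the same quantity. The mean of 50 and standard deviation of 10 are illustrative values, not from the lecture:

    gauss = @(x,mu,sigma) (1./(sigma*sqrt(2*pi))) .* exp(-(x-mu).^2 ./ (2*sigma.^2));
    x = 0:0.1:100;
    plot(x, gauss(x, 50, 10))    % bell curve centered at mu = 50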

  23. One Feature – Two Classes [plot showing the Class 1 and Class 2 distributions]

  24. The distributions on the previous slide were determined from the training data • During the testing phase a feature measurement of 52 was obtained • Which class is more likely? • Prob_class1(52) is 0.0415 • Prob_class2(52) is 5.0686e-104 • It is much more likely that the feature was obtained from a class 1 object

  25. [Plot of the two class distributions referenced on the next slide]

  26. Decision Boundary • The decision boundary is the feature measurement that is equally likely to be from either class • Based on the data from the previous slide the boundary is between 120 and 130 • At g = 128 the probability of being in class 1 is 1.8053e-010 and in class 2 is 2.7734e-010 • Values greater than 128 will be classified as class 2 • NOTE: We are making the assumption that either class is equally likely
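  A sketch of locating that equal-likelihood boundary numerically. The class means and standard deviations below are assumptions chosen only to land the boundary near the lecture's g = 128; they are not the actual training results:

    gauss = @(x,mu,sigma) (1./(sigma*sqrt(2*pi))) .* exp(-(x-mu).^2 ./ (2*sigma.^2));
    g  = 0:0.01:255;
    p1 = gauss(g,  52, 12);              % assumed class 1 parameters
    p2 = gauss(g, 205, 13);              % assumed class 2 parameters
    idx = find(p2 > p1 & g > 52, 1);     % first point past mu1 where class 2 wins
    boundary = g(idx)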

  27. Overlapping Distributions [plot with the Class 1 and Class 2 distributions overlapping]

  28. Consider a feature value of 100 • It is more likely that the measurement is from a class 1 object (p = 0.0469) • BUT it is possible that the measurement is from a class 2 object (p = 0.0194) • Consider a feature value of 105 • Class 1: p = 0.025 • Class 2: p = 0.0437

  29. [Plot of the overlapping Class 1 and Class 2 distributions]

  30. Two Dimensional Data Sets Representing Two Features

  31. GAUSSA Data Values (Feature 1, Feature 2):
  (-5.1835, -0.2436)  (-3.7354,  2.2846)  (-4.2500,  0.9237)  (-4.7727,  0.2746)
  (-2.6909,  1.7032)  (-1.8355, -1.9171)  (-3.9272,  1.4428)  (-5.7341,  1.0020)
  (-2.7673, -1.1168)  (-3.5304,  0.2908)  (-4.2395, -0.4503)  (-4.4181,  0.7386)
  (-7.1552,  0.3653)  (-4.1241, -2.3290)  (-3.5944,  0.4242)  (-2.4388,  1.0503)
  (-4.2229, -0.8453)  (-3.0669, -0.0464)  (-2.3523, -1.4821)  (-2.8434, -1.9902)
  Each sample measurement has two values: the first is the value for feature 1 and the second is the value for feature 2. The GAUSSx data sets have 2000 entries.
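  A sketch of how such a data set can be generated and plotted. The mean and covariance are rough assumptions read off the GAUSSA samples above, and mvnrnd requires the Statistics Toolbox:

    gaussA = mvnrnd([-4 0], [1.5 0; 0 1.5], 2000);   % synthetic stand-in, 2000x2
    plot(gaussA(:,1), gaussA(:,2), '.')
    xlabel('Feature 1'), ylabel('Feature 2'), title('GAUSSA')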

  32. GAUSSA Data

  33. GAUSSB Data

  34. GAUSSC Data

  35. GAUSSD Data

  36. GAUSSE Data

  37. GAUSSF Data

  38. GAUSSG Data

  39. GAUSSH Data

  40. GAUSSA and GAUSSB

  41. Model Response: GAUSSA and GAUSSB (contour3)

  42. GAUSSF and GAUSSH

  43. GAUSSD and GAUSSE

  44. Gaussian Probability Distribution • Need to include all covariance terms unless the covariances of the distributions are identical

  45. Multivariate Gaussian Distribution Model. A point in a feature space is represented by a vector x of M dimensions. Assume each of the M features has a Gaussian distribution. Represent the probability of a particular feature-set measurement as a multivariate Gaussian distribution: p(x) = 1 / ((2π)^(M/2) |Σ|^(1/2)) · exp(−½ (x−μ)ᵀ Σ⁻¹ (x−μ)), where |Σ| is the determinant of the covariance matrix, Σ⁻¹ is the inverse of the covariance matrix, μ is the M-component mean vector (one component per feature), and ᵀ denotes transpose.
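  The Statistics Toolbox's mvnpdf evaluates this density, and the direct computation below it is equivalent. The test point x is an arbitrary example; μ and Σ reuse the slide 48 numbers:

    mu    = [1 2];
    Sigma = [1 -0.7; -0.7 1];
    x     = [1.5 2.5];                 % arbitrary test point
    p = mvnpdf(x, mu, Sigma)
    % direct computation of the formula above, M = 2
    M = numel(mu);  d = x - mu;
    p_direct = exp(-0.5 * (d / Sigma) * d') / ((2*pi)^(M/2) * sqrt(det(Sigma)))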

  46. We again would like to find the distribution with the largest probability. We proceed as follows: take the log of both sides of the equation (this gets rid of the exponent) and lump the terms that do not depend on x into a constant: ln p(x) = −(M/2) ln(2π) − ½ ln|Σ| − ½ (x−μ)ᵀ Σ⁻¹ (x−μ). We then have const₁ = (x−μ)ᵀ Σ⁻¹ (x−μ). This equation defines an ellipsoid in M-dimensional space.

  47. [Figure: two-class problem in the (Feature 1, Feature 2) plane. Ellipsoids of constant probability are drawn for class 1 data and class 2 data; the dashed decision boundary passes through the equal-probability points, and feature vectors to the right of it belong to class 2.]

  48. Bivariate Gaussian Distribution, M = 2: μ = [m1; m2], Σ = [s11 s12; s21 s22], Σ⁻¹ = (1 / (s11·s22 − s12·s21)) · [s22 −s12; −s21 s11]. Example: μ = [1; 2], Σ = [1 −0.7; −0.7 1], |Σ| = 0.51, Σ⁻¹ = [1.96 1.37; 1.37 1.96].
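  The example numbers check out in MATLAB:

    Sigma = [1 -0.7; -0.7 1];
    det(Sigma)    % ans = 0.5100
    inv(Sigma)    % ans = [1.9608 1.3725; 1.3725 1.9608]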

  49. Bivariate (2 Features): const₁ = (x−μ)ᵀ Σ⁻¹ (x−μ) = [x1−μ1, x2−μ2] (1×2) · Σ⁻¹ (2×2) · [x1−μ1; x2−μ2] (2×1). Plugging in the numbers: 1.96(x1−1)² + 2.745(x1−1)(x2−2) + 1.96(x2−2)² = const₁, which has the form a·x1′² + b·x1′·x2′ + c·x2′² = const, where x1′ = (x1−μ1), etc.

  50. a·x1′² + b·x1′·x2′ + c·x2′² = const is an equation for an ellipse. This means the contours of constant probability are ellipses. EXAMPLE 2: What if the off-diagonal terms of the covariance matrix are 0? μ = [0; 0], Σ = [1 0; 0 1], Σ⁻¹ = [1 0; 0 1]. Then [x1, x2] · [1 0; 0 1] · [x1; x2] = [x1·1 + x2·0, x1·0 + x2·1] · [x1; x2] = x1² + x2² = C. Contours of constant probability are circles.
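  A sketch that plots both sets of contours (assumes mvnpdf from the Statistics Toolbox); the correlated covariance reuses the slide 48 example:

    [X1, X2] = meshgrid(-4:0.05:6, -3:0.05:7);
    pts = [X1(:) X2(:)];
    Pe  = reshape(mvnpdf(pts, [1 2], [1 -0.7; -0.7 1]), size(X1));  % ellipses
    Pc  = reshape(mvnpdf(pts, [0 0], eye(2)), size(X1));            % circles
    subplot(1,2,1), contour(X1, X2, Pe), axis equal, title('Correlated: ellipses')
    subplot(1,2,2), contour(X1, X2, Pc), axis equal, title('Identity covariance: circles')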
