

  1. Chapter 3 cont’d. Adjacency, Histograms, & Thresholding

  2. RAGs (Region Adjacency Graphs)

  3. Define graph • G=(V,E) • where V is a set of vertices or nodes • and E is a set of edges

  4. Define graph • G=(V,E) • where V is a set of vertices or nodes • and E is a set of edges • represented by either unordered or ordered pairs of vertices

  5. RAGs (Region Adjacency Graphs) Steps: • label image • scan and enter adjacencies in graph (RAGs also represent containment.)
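
  A minimal sketch of the scan step in C++ (the function name and the choice of 4-adjacency are mine; the slides give no code, and containment is not handled here):

      #include <algorithm>
      #include <set>
      #include <utility>
      #include <vector>

      // Walk a labeled image once; record an edge for every pair of
      // 4-adjacent pixels that carry different region labels.
      std::set<std::pair<int,int>> buildRAG(const std::vector<std::vector<int>>& label) {
          std::set<std::pair<int,int>> E;                  // edge set of G = (V, E)
          const int h = (int)label.size(), w = (int)label[0].size();
          for (int r = 0; r < h; ++r)
              for (int c = 0; c < w; ++c) {
                  int a = label[r][c];
                  if (c + 1 < w && label[r][c+1] != a)     // right neighbor
                      E.insert({std::min(a, label[r][c+1]), std::max(a, label[r][c+1])});
                  if (r + 1 < h && label[r+1][c] != a)     // lower neighbor
                      E.insert({std::min(a, label[r+1][c]), std::max(a, label[r+1][c])});
              }
          return E;
      }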

  6. [Figure: an example RAG drawn from a labeled image, with node 0 for the background and region nodes 1, 2, 3 and -1, -2, -3.] (Personally, I’d draw it this way!)

  7. Define degree of a node. What is special about nodes with degree 1?

  8. But how do we obtain binary images (from gray or color images)?

  9. Histograms & Thresholding

  10. Gray to binary: thresholding • G → B

      const int t = 200;
      if (G[r][c] > t) B[r][c] = 1;
      else             B[r][c] = 0;

  How do we choose t? • interactively • automatically

  11. Gray to binary • Interactively. How? • Automatically. • Many, many, many, …, many methods. • Experimentally (using a priori information). • Supervised / training methods. • Unsupervised: Otsu’s method (among many, many, many, many, … other methods).

  12. Histogram • “Probability” of a given gray value in an image. • h(g) = count of pixels w/ gray value equal to g. • p(g) = h(g) / (w*h) • w*h = # of pixels in the entire image • What is the range of possible values for p(g)?

  13. Histogram • “Probability” of a given gray value in an image. • h(g) = count of pixels w/ gray value equal to g. What data type is used for counts? • p(g) = h(g) / (w*h) • w*h = # of pixels in the entire image • What is the range of possible values for p(g)? So what data type is p(g)? What happens when h(g) is divided by w*h? (See the sketch below.)
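
  A sketch that answers the data-type questions above (my own helper, not the course's code): the counts h(g) are integers (a wide type avoids overflow on large images), but p(g) must be floating point, because the integer division h(g)/(w*h) would truncate every value to 0.

      // Histogram h(g) and probabilities p(g) for an 8-bit image G of size w x h.
      void histogram(const unsigned char* G, int w, int h,
                     long hist[256], double p[256]) {
          for (int g = 0; g < 256; ++g) hist[g] = 0;
          for (int i = 0; i < w * h; ++i)
              ++hist[G[i]];                              // h(g): count of pixels with value g
          for (int g = 0; g < 256; ++g)
              p[g] = (double)hist[g] / (double)(w * h);  // cast before dividing!
      }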

  14. Histogram Note: Sometimes we need to group gray values together in our histogram into “bins” or “buckets.” E.g., we have 10 bins in our histogram and 100 possible different gray values. So we put 0..9 into bin 0, 10..19 into bin 1, …
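
  The binning rule as code (a hypothetical one-liner matching the example above: 100 gray values, 10 equal-width bins):

      // Map gray value g in 0..99 to one of 10 bins of width 100/10 = 10.
      int binOf(int g) { return g / 10; }   // 0..9 -> 0, 10..19 -> 1, ..., 90..99 -> 9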

  15. Histogram

  16. Something is missing here!

  17. Example of histogram

  18. Example of histogram We can even analyze the histogram just as we analyze images. One common measure is entropy:

  19. Entropy • Ice melting in a warm room is a common example of “entropy increasing”, described in 1862 by Rudolf Clausius as an increase in the disgregation of the molecules of the body of ice. • from http://en.wikipedia.org/wiki/Entropy

  20. Entropy “My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’” – Conversation between Claude Shannon and John von Neumann regarding what name to give to the “measure of uncertainty” or attenuation in phone-line signals (1949)

  21. Example of histogram We can even analyze the histogram just as we analyze images! One common measure is entropy:

  22. Calculating entropy. The slide’s formula (an image in the original) is H = − Σ_k p(k) · log2(p(k)). Notes: • p(k) is in [0,1] • If p(k)=0 then don’t calculate log(p(k)). Why? • My calculator only has log base 10. How do I calculate log base 2? • Why ‘−’ to the left of the summation?
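
  A sketch that addresses all three notes (hypothetical helper): zero bins are skipped because p·log2(p) → 0 as p → 0 (and log(0) is undefined), and log2(x) = log10(x) / log10(2).

      #include <cmath>

      // Entropy of a histogram, given probabilities p[0..n-1].
      double entropy(const double p[], int n) {
          double H = 0.0;
          for (int k = 0; k < n; ++k)
              if (p[k] > 0.0)                              // skip empty bins
                  H -= p[k] * (std::log10(p[k]) / std::log10(2.0));
          return H;   // the leading '-' makes H >= 0, since log2(p) <= 0 on [0,1]
      }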

  23. Let’s calculate some histogram entropy values. Say we have 3 bits per gray value. • So our histogram has 8 bins. • Calculate the entropy for the following histograms (image size is 10x10):
  • 99 0 0 0 0 0 0 1 → 0.08
  • 99 0 1 0 0 0 0 0 → 0.08
  • 20 20 10 10 10 10 10 10 → 2.92 (most disorder)
  • 50 0 0 50 0 0 0 0 → 1.00
  • 25 0 25 0 25 0 25 0 → 2.00

  24. Example histograms Same subject but different images and histograms (because of a difference in contrast).

  25. Example of different thresholds

  26. So how can we determine the threshold value automatically?

  27. Example automatic thresholding methods • Otsu’s method • K-means clustering

  28. Otsu’s method

  29. Otsu’s method • Automatic thresholding method • automatically picks the “best” threshold t given an image histogram • Assumes 2 groups of pixels are present in the image: • those with gray value <= t • those with gray value > t

  30. Otsu’s method Best choices for t.

  31. Otsu’s method For every possible t: • Calculate the within-group variances: • probability of being in group 1; probability of being in group 2 • determine the mean of group 1; determine the mean of group 2 • calculate the variance for group 1; calculate the variance for group 2 • calculate the weighted sum of the group variances • Remember which t gave rise to the minimum.

  32. Otsu’s method: probability of being in each group
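
  (The equation on this slide is an image that didn’t survive the transcript; in the standard notation, with P(i) the histogram probability of gray value i in 1..I and t the candidate threshold:)

      q1(t) = Σ_{i=1..t} P(i)        q2(t) = Σ_{i=t+1..I} P(i) = 1 − q1(t)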

  33. Otsu’s method: mean of individual groups
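
  (Reconstructing the missing equation in the same notation:)

      μ1(t) = Σ_{i=1..t} i·P(i) / q1(t)        μ2(t) = Σ_{i=t+1..I} i·P(i) / q2(t)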

  34. Otsu’s method: variance of individual groups
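
  (Likewise:)

      σ1²(t) = Σ_{i=1..t} (i − μ1(t))²·P(i) / q1(t)        σ2²(t) = Σ_{i=t+1..I} (i − μ2(t))²·P(i) / q2(t)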

  35. Otsu’s method: weighted sum of group variances • Calculate for all t’s and minimize. • Demo Otsu.
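
  (The missing weighted-sum equation, in the same notation:)

      σw²(t) = q1(t)·σ1²(t) + q2(t)·σ2²(t),  minimized over t.

  A C++ sketch of the whole procedure (my own function, not the demo’s code), assuming a normalized histogram p[0..L-1]:

      #include <limits>
      #include <vector>

      // Return the threshold t (values <= t form group 1) that minimizes the
      // weighted sum of within-group variances.
      int otsuThreshold(const std::vector<double>& p) {
          const int L = (int)p.size();
          int bestT = 0;
          double bestVar = std::numeric_limits<double>::max();
          for (int t = 0; t < L - 1; ++t) {
              double q1 = 0, m1 = 0;                       // group probability & mean sums
              for (int i = 0; i <= t; ++i) { q1 += p[i]; m1 += i * p[i]; }
              double q2 = 1.0 - q1;
              if (q1 == 0 || q2 == 0) continue;            // an empty group: skip this t
              double m2 = 0;
              for (int i = t + 1; i < L; ++i) m2 += i * p[i];
              m1 /= q1;  m2 /= q2;                         // group means
              double v1 = 0, v2 = 0;                       // group variances
              for (int i = 0; i <= t; ++i)    v1 += (i - m1) * (i - m1) * p[i];
              for (int i = t + 1; i < L; ++i) v2 += (i - m2) * (i - m2) * p[i];
              v1 /= q1;  v2 /= q2;
              double vw = q1 * v1 + q2 * v2;               // weighted sum of variances
              if (vw < bestVar) { bestVar = vw; bestT = t; }
          }
          return bestT;                                    // the t that gave the minimum
      }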

  36. Demo of Otsu’s method before

  37. Demo of Otsu’s method Otsu’s report

  38. Demo of Otsu’s method Otsu’s threshold

  39. Generalized thresholding

  40. Generalized thresholding • Single range of gray values

      const int t1 = 200;
      const int t2 = 500;
      if (G[r][c] > t1 && G[r][c] < t2) B[r][c] = 1;
      else                              B[r][c] = 0;

  41. Even more general thresholding • Union of ranges of gray values.

      const int t1 = 200,  t2 = 500;
      const int t3 = 1200, t4 = 1500;
      if      (G[r][c] > t1 && G[r][c] < t2) B[r][c] = 1;
      else if (G[r][c] > t3 && G[r][c] < t4) B[r][c] = 1;
      else                                   B[r][c] = 0;

  42. Something is missing here!

  43. K-means clustering

  44. K-Means Clustering • In statistics and machine learning, k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that they both attempt to find the centers of natural clusters in the data. • from wikipedia

  45. K-Means Clustering • Clustering = the process of partitioning a set of pattern vectors into subsets called clusters. • K = number of clusters (must be known in advance). • Not an exhaustive search so it may not find the globally optimal solution. • (see section 10.1.1)

  46. Iterative K-Means Clustering Algorithm. Form K-means clusters from a set of n-D feature vectors:
  1. Set ic = 1 (iteration count).
  2. Choose randomly a set of K means m1(1), m2(1), …, mK(1).
  3. For each vector xi, compute D(xi, mj(ic)) for each j = 1, …, K.
  4. Assign xi to the cluster Cj with the nearest mean.
  5. ic = ic + 1; update the means to get a new set m1(ic), m2(ic), …, mK(ic).
  6. Repeat steps 3..5 until Cj(ic+1) = Cj(ic) for all j.
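
  A 1-D sketch of the algorithm (gray values as the feature vectors, which is the thresholding case; the function name and the simple initialization are mine, where the slides say to choose the initial means randomly):

      #include <cmath>
      #include <vector>

      // Form K clusters of 1-D values x; return the K cluster means.
      std::vector<double> kMeans1D(const std::vector<double>& x, int K) {
          std::vector<double> m(K);
          for (int j = 0; j < K; ++j)                      // steps 1-2: initial means
              m[j] = x[j % x.size()];
          std::vector<int> cluster(x.size(), -1);
          bool changed = true;
          while (changed) {                                // repeat steps 3-5 until stable
              changed = false;
              for (std::size_t i = 0; i < x.size(); ++i) { // steps 3-4: nearest mean wins
                  int best = 0;
                  for (int j = 1; j < K; ++j)
                      if (std::fabs(x[i] - m[j]) < std::fabs(x[i] - m[best]))
                          best = j;
                  if (cluster[i] != best) { cluster[i] = best; changed = true; }
              }
              std::vector<double> sum(K, 0.0);
              std::vector<int> cnt(K, 0);
              for (std::size_t i = 0; i < x.size(); ++i) { // step 5: recompute the means
                  sum[cluster[i]] += x[i];
                  ++cnt[cluster[i]];
              }
              for (int j = 0; j < K; ++j)
                  if (cnt[j] > 0) m[j] = sum[j] / cnt[j];
          }
          return m;   // for K=2, a threshold could be the midpoint of the two means
      }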

  47. K-Means Clustering Example 0. Let K=3. 1. Randomly choose 3 means (i.e., cluster centers; they need not be actual data points). - figure from wikipedia

  48. K-Means Clustering Example 1. Randomly choose 3 means (i.e., cluster centers; they need not be actual data points). 2. Assign each point to the nearest cluster center (mean). - figure from wikipedia

  49. K-Means Clustering Example 2. Assign each point to the nearest cluster center (mean). 3. Calculate the centroid of each cluster; these become the new centers (means). - figure from wikipedia
