Digital Image Processing Lecture 25: Object Recognition Prof. Charlene Tsai
Review • Matching • Specified by the mean vector of each class • Optimum statistical classifiers • Probabilistic approach • Bayes classifier for Gaussian pattern classes • Specified by mean vector and covariance matrix of each class • Neural network
Foundation • The probability that a pattern $x$ comes from class $\omega_k$ is $p(\omega_k/x)$ • $L_{kj}$ is the loss incurred if $x$ actually came from $\omega_k$ but is assigned to $\omega_j$ • The average loss (risk) incurred in assigning $x$ to $\omega_j$ is $r_j(x) = \sum_{k=1}^{W} L_{kj}\, p(\omega_k/x)$ • Using basic probability theory, $p(A/B)p(B) = p(B/A)p(A)$, so $r_j(x) = \frac{1}{p(x)} \sum_{k=1}^{W} L_{kj}\, p(x/\omega_k)\, P(\omega_k)$
(con’d) • Because $1/p(x)$ is positive and common to all $r_j(x)$, it can be dropped without affecting the comparison among the $r_j(x)$: $r_j(x) = \sum_{k=1}^{W} L_{kj}\, p(x/\omega_k)\, P(\omega_k)$ (Eqn 1) • The classifier assigns $x$ to the class with the smallest average loss --- the Bayes classifier
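To make the risk computation concrete, here is a minimal Python sketch of Eqn 1 and the resulting decision rule. The function name, the density callables, and the loss-matrix layout are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def bayes_decision(x, densities, priors, loss):
    """Assign x to the class with the smallest average loss r_j(x).

    densities : list of callables, densities[k](x) = p(x|w_k)
    priors    : array of prior probabilities P(w_k)
    loss      : loss[k, j] = L_kj, loss of deciding w_j when the true class is w_k
    """
    likelihoods = np.array([p(x) for p in densities]) * np.asarray(priors)
    # r_j(x) = sum_k L_kj p(x|w_k) P(w_k)   (the common 1/p(x) factor is dropped)
    risks = loss.T @ likelihoods
    return int(np.argmin(risks))
```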
The Loss Function (Lij) • 0 loss for a correct decision, and the same nonzero value (say 1) for any incorrect decision: $L_{ij} = 1 - \delta_{ij}$ (Eqn 2), where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise
Bayes Classifier • Substituting Eqn 2 into Eqn 1 yields $r_j(x) = \sum_{k=1}^{W} (1 - \delta_{kj})\, p(x/\omega_k)\, P(\omega_k) = p(x) - p(x/\omega_j)\, P(\omega_j)$ • The classifier assigns $x$ to class $\omega_i$ if $p(x/\omega_i)\, P(\omega_i) > p(x/\omega_j)\, P(\omega_j)$ for all $j \neq i$ • $p(x)$ is common to all classes, so it is dropped
Decision Function • Using the Bayes classifier with a 0-1 loss function, the decision function for class $\omega_j$ is $d_j(x) = p(x/\omega_j)\, P(\omega_j)$, $j = 1, 2, \ldots, W$ • Now the questions are • How to get $p(x/\omega_j)$? • How to estimate $P(\omega_j)$?
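For the 0-1 loss, the general risk rule above reduces to picking the largest $d_j(x)$. A minimal sketch, using the same hypothetical density/prior inputs as the previous snippet:

```python
import numpy as np

def bayes_classify_01(x, densities, priors):
    """0-1 loss Bayes rule: assign x to the class with the largest
    decision function d_j(x) = p(x|w_j) P(w_j)."""
    d = np.array([p(x) for p in densities]) * np.asarray(priors)
    return int(np.argmax(d))
```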
Using Gaussian Distribution • The most prevalent form (assumed) for $p(x/\omega_j)$ is the Gaussian probability density function • Now consider a 1D problem with 2 pattern classes ($W = 2$): $d_j(x) = p(x/\omega_j)\, P(\omega_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j} e^{-\frac{(x - m_j)^2}{2\sigma_j^2}}\, P(\omega_j)$, $j = 1, 2$, where $m_j$ is the mean and $\sigma_j^2$ is the variance of class $\omega_j$
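A small numeric sketch of this 1D two-class case. The means, standard deviations, priors, and test point below are made-up values for illustration only (the lecture's own figure is not reproduced here).

```python
import numpy as np

def gaussian_1d(x, m, sigma):
    """p(x|w_j) for a 1D Gaussian with mean m and standard deviation sigma."""
    return np.exp(-(x - m) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Made-up parameters for two classes (W = 2) and a test pattern x:
m1, s1, P1 = 0.0, 1.0, 0.5
m2, s2, P2 = 3.0, 1.0, 0.5
x = 1.4

d1 = gaussian_1d(x, m1, s1) * P1
d2 = gaussian_1d(x, m2, s2) * P2
print("assign x to class", 1 if d1 > d2 else 2)
```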
Example • Where is the decision boundary in each of the three cases posed on the slide? (the figure illustrating the three cases is not reproduced here)
N-D Gaussian • For the $j$th pattern class, $p(x/\omega_j) = \frac{1}{(2\pi)^{N/2} |C_j|^{1/2}}\, e^{-\frac{1}{2}(x - m_j)^T C_j^{-1} (x - m_j)}$, where $m_j = E_j\{x\}$ is the mean vector and $C_j = E_j\{(x - m_j)(x - m_j)^T\}$ is the covariance matrix • Remember these from Principal Component Analysis?
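A sketch of the N-D Gaussian density written above, with the mean vector and covariance matrix estimated from training samples of a class. The function names are my own; this is not code from the lecture.

```python
import numpy as np

def estimate_class_stats(samples):
    """Mean vector m_j and covariance matrix C_j from an (n_samples, N) array."""
    m = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)
    return m, C

def gaussian_nd(x, m, C):
    """p(x|w_j) for an N-D Gaussian with mean vector m and covariance C."""
    x, m = np.asarray(x, float), np.asarray(m, float)
    N = m.size
    diff = x - m
    norm = (2.0 * np.pi) ** (N / 2.0) * np.sqrt(np.linalg.det(C))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(C, diff)) / norm)
```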
(con’t) • Working with the logarithm of the decision function: $d_j(x) = \ln[p(x/\omega_j)\, P(\omega_j)] = \ln P(\omega_j) - \frac{1}{2}\ln|C_j| - \frac{1}{2}(x - m_j)^T C_j^{-1}(x - m_j)$ (the constant $-\frac{N}{2}\ln 2\pi$ is dropped) • If all covariance matrices are equal ($C_j = C$, a common covariance), then $d_j(x) = \ln P(\omega_j) + x^T C^{-1} m_j - \frac{1}{2} m_j^T C^{-1} m_j$
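When all classes share one covariance matrix, the log decision function above is linear in $x$. A minimal sketch under that assumption (names are illustrative):

```python
import numpy as np

def linear_decision_classify(x, means, priors, C):
    """d_j(x) = ln P(w_j) + x^T C^{-1} m_j - 0.5 m_j^T C^{-1} m_j
    for a common covariance matrix C; returns the winning class index."""
    Cinv = np.linalg.inv(C)
    scores = [np.log(P) + x @ Cinv @ m - 0.5 * m @ Cinv @ m
              for m, P in zip(means, priors)]
    return int(np.argmax(scores))
```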
For C=I • If $C = I$ (identity matrix) and $P(\omega_j) = 1/W$ for all classes, we get $d_j(x) = x^T m_j - \frac{1}{2} m_j^T m_j$, which is the minimum distance classifier • Gaussian pattern classes satisfying these conditions are spherical clouds of identical shape in N-D.
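With $C = I$ and equal priors, the rule is equivalent to assigning $x$ to the class whose mean vector is nearest in Euclidean distance. A minimal sketch (function name assumed):

```python
import numpy as np

def min_distance_classify(x, means):
    """d_j(x) = x^T m_j - 0.5 m_j^T m_j; the largest d_j corresponds to the
    class whose mean vector is closest to x in Euclidean distance."""
    scores = [x @ m - 0.5 * m @ m for m in means]
    return int(np.argmax(scores))
```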
Example in Gonzalez (p. 709) • Pattern classes and the resulting decision boundary (figure not reproduced here)
(con’t) • Dropping $\ln P(\omega_j)$, which is common to all classes • Assuming equal priors and a common covariance matrix $C$, we get $d_j(x) = x^T C^{-1} m_j - \frac{1}{2} m_j^T C^{-1} m_j$ • The decision surface separating the two classes is $d_1(x) - d_2(x) = 0$
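For two classes with equal priors and a common covariance, the surface $d_1(x) - d_2(x) = 0$ is a hyperplane. The sketch below returns its coefficients in the general form; it does not use the specific numbers of the Gonzalez example.

```python
import numpy as np

def decision_surface(m1, m2, C):
    """Coefficients (w, b) of the hyperplane d_1(x) - d_2(x) = w^T x + b = 0
    for two classes with equal priors and common covariance C."""
    Cinv = np.linalg.inv(C)
    w = Cinv @ (m1 - m2)
    b = -0.5 * (m1 @ Cinv @ m1 - m2 @ Cinv @ m2)
    return w, b
```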
Neural Network • Simulates brain activity, with the elemental computing elements treated as neurons • This line of research dates back to the early 1940s • The perceptron learns a linear decision function $d(x) = \sum_{i=1}^{n} w_i x_i + w_{n+1}$ that separates two training sets
(con’t) • The coefficients $w_i$ are the weights, which are analogous to synapses in the human neural system • When $d(x) > 0$, the output is +1 and the pattern $x$ belongs to class $\omega_1$; the reverse is true when $d(x) < 0$ • This is as far as we go. • This concept has been adopted in many real systems where the underlying distributions are unknown.
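A minimal sketch of a fixed-increment perceptron rule for two linearly separable classes. The learning rate, epoch limit, label encoding, and function names are my own choices for illustration, not from the lecture.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Fixed-increment perceptron rule for two linearly separable classes.
    X : (n_samples, n_features) patterns, y : labels in {+1, -1}."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # augment with a bias term
    w = np.zeros(Xa.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            if yi * (w @ xi) <= 0:     # misclassified (or on the boundary)
                w += lr * yi * xi      # move the boundary toward xi
                errors += 1
        if errors == 0:                # all training patterns separated
            break
    return w

def perceptron_decision(x, w):
    """d(x) > 0 -> class w1 (+1); d(x) < 0 -> class w2 (-1)."""
    return int(np.sign(w @ np.append(x, 1.0)))
```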