Recognition with Expression Variations
Pattern Recognition Theory – Spring 2003
Prof. Vijayakumar Bhagavatula
Derek Hoiem, Tal Blum
Method Overview
• Training images and the test image start as vectors of N variables
• Dimensionality reduction maps each image to a reduced vector of m < N variables
• The reduced test image is classified by 1-NN with Euclidean distance against the reduced training images
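A minimal NumPy sketch of this pipeline's classification step, assuming images are already flattened into column vectors and a projection matrix W has been learned; the names classify_1nn, train_X, and test_x are illustrative, not from the original slides:

    import numpy as np

    def classify_1nn(W, train_X, train_labels, test_x):
        # Project the N-variable image vectors into the m-dimensional subspace.
        train_proj = W.T @ train_X               # shape (m, num_train)
        test_proj = W.T @ test_x                 # shape (m,)
        # 1-NN classification with Euclidean distance in the reduced space.
        dists = np.linalg.norm(train_proj - test_proj[:, None], axis=0)
        return train_labels[int(np.argmin(dists))]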
Principal Components Analysis
• Minimize representational error in a lower dimensional subspace of the input
• Choose the eigenvectors corresponding to the m largest eigenvalues of the total scatter matrix as the weight matrix:

$S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T$

$W_{opt} = \arg\max_W |W^T S_T W| = [w_1\ w_2\ \dots\ w_m]$
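A sketch of this PCA step under the definitions above. For clarity the total scatter matrix is formed directly; for high-resolution images one would normally work with the smaller Gram matrix instead, as in Turk and Pentland. The name pca_weights is illustrative:

    import numpy as np

    def pca_weights(X, m):
        # X: (n_pixels, N) matrix with one training image per column.
        mu = X.mean(axis=1, keepdims=True)
        Xc = X - mu                              # center on the global mean
        S_T = Xc @ Xc.T                          # total scatter matrix
        evals, evecs = np.linalg.eigh(S_T)       # eigendecomposition (symmetric)
        order = np.argsort(evals)[::-1]          # largest eigenvalues first
        return evecs[:, order[:m]], mu           # W = [w_1 ... w_m], plus the mean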
Linear Discriminant Analysis
• Maximize the ratio of the between-class scatter to the within-class scatter in a lower dimensional space than the input
• Choose the top m eigenvectors of the generalized eigenvalue solution:

$S_W = \sum_{i=1}^{c} \sum_{x_k \in \omega_i} (x_k - \mu_i)(x_k - \mu_i)^T \qquad S_B = \sum_{i=1}^{c} (\mu_i - \mu)(\mu_i - \mu)^T$

$W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} = [w_1\ w_2\ \dots\ w_m], \qquad S_B w_i = \lambda_i S_W w_i$
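A corresponding sketch of the LDA step, solving the generalized eigenvalue problem with SciPy. It assumes S_W is nonsingular, which for face images only holds after the PCA reduction discussed on the next slides; lda_weights is an illustrative name, and S_B follows the unweighted definition above:

    import numpy as np
    from scipy.linalg import eigh

    def lda_weights(X, labels, m):
        # X: (d, N) data matrix; labels: 1-D NumPy array of class indices per column.
        d = X.shape[0]
        mu = X.mean(axis=1, keepdims=True)
        S_W = np.zeros((d, d))
        S_B = np.zeros((d, d))
        for c in np.unique(labels):
            Xc = X[:, labels == c]
            mu_c = Xc.mean(axis=1, keepdims=True)
            S_W += (Xc - mu_c) @ (Xc - mu_c).T   # within-class scatter
            S_B += (mu_c - mu) @ (mu_c - mu).T   # between-class scatter
        # Generalized eigenvalue problem S_B w = lambda S_W w; keep the top m eigenvectors.
        evals, evecs = eigh(S_B, S_W)
        order = np.argsort(evals)[::-1]
        return evecs[:, order[:m]]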
LDA: Avoiding Singularity
• For N samples and c classes:
• Reduce dimensionality to N - c using PCA
• Apply LDA to the reduced space
• Combine the weight matrices: $W_{opt}^T = W_{LDA}^T W_{PCA}^T$
Discriminant Analysis of Principal Components
• For N samples and c classes:
• Reduce dimensionality to m < N - c using PCA
• Apply LDA to the reduced space
• Combine the weight matrices: $W_{opt}^T = W_{LDA}^T W_{PCA}^T$
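A sketch of how the two projections can be combined, reusing the pca_weights and lda_weights helpers sketched above; choosing m_pca = N - c corresponds to the Fisherface recipe of the previous slide, while m_pca < N - c corresponds to the variant described here (the function and argument names are illustrative):

    def pca_plus_lda(X, labels, m_pca, m_lda):
        # X: (n_pixels, N) training images as columns; labels: class index per column.
        W_pca, mu = pca_weights(X, m_pca)          # first reduce to m_pca dimensions
        X_red = W_pca.T @ (X - mu)                 # PCA-reduced training data
        W_lda = lda_weights(X_red, labels, m_lda)  # LDA in the reduced space
        # Combined projection: W_opt^T = W_lda^T W_pca^T, i.e. W_opt = W_pca W_lda.
        return W_pca @ W_lda, mu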
When PCA+LDA Can Help
• The test set includes subjects not present in the training set
• Very few (1-3) examples are available per class
• Test samples vary significantly from training samples
Why Would PCA+LDA Help?
• Allows more freedom of movement for maximizing the between-class scatter
• Removes potentially noisy low-ranked principal components when determining the LDA projection
• The goal is improved generalization to non-training samples
PCA Projections: Best 2-D Projection (Training / Testing)
LDA Projections: Best 2-D Projection (Training / Testing)
PCA+LDA Projections: Best 2-D Projection (Training / Testing)
Processing Time
• Training time: < 3 seconds (Matlab, 1.8 GHz)
• Testing time: O(d · (N + T))
Conclusions
• Recognition under varying expressions is an easy problem
• LDA and PCA+LDA produce better subspaces for discrimination than PCA
• Simply removing the lowest-ranked PCA vectors may not be a good strategy for PCA+LDA
• Maximizing the minimum between-class distance may be a better strategy than maximizing the Fisher ratio
References
• M. Turk and A. Pentland, "Face recognition using eigenfaces," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
• P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," in Proc. European Conf. on Computer Vision, April 1996.
• W. Zhao, R. Chellappa, and P.J. Phillips, "Discriminant Analysis of Principal Components for Face Recognition," in Proc. International Conference on Automatic Face and Gesture Recognition, pp. 336-341, 1998.