EE4-62 MLCV Lecture 13-14 Face Recognition – Subspace/Manifold Learning Tae-Kyun Kim
EE4-62 MLCV Face Image Tagging and Retrieval • Face tagging at commercial weblogs • Key issues • User interaction for face tags • Representation of data accumulated over a long time • Online and efficient learning • An active research area in the Face Recognition Test and in MPEG-7 for face image retrieval and automatic passport control • Our proposal was promoted to the MPEG-7 ISO/IEC standard
Principal Component Analysis (PCA) • Maximum-variance formulation of PCA • Minimum-error formulation of PCA • Probabilistic PCA
EE4-62 MLCV (Recap) Geometrical interpretation of PCA • Principal components are the vectors in the directions of maximum variance of the projected samples. • For the given 2D data points, u1 and u2 are found as the principal components. • Each two-dimensional data point is transformed to a single variable z1, representing its projection onto the eigenvector u1. • The data points projected onto u1 have the maximum variance. • PCA infers the inherent structure of high-dimensional data. • The intrinsic dimensionality of the data is often much smaller (see the sketch below).
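A minimal NumPy sketch of this recap, on hypothetical toy 2D data (not from the lecture): it finds u1 as the top eigenvector of the sample covariance and checks that the projected variable z1 carries the maximum variance.

```python
import numpy as np

# Toy 2D data: 100 samples drawn from a correlated Gaussian.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0],
                            cov=[[3.0, 1.5], [1.5, 1.0]],
                            size=100)

# Centre the data and form the sample covariance matrix.
X_centred = X - X.mean(axis=0)
S = X_centred.T @ X_centred / len(X)

# Eigendecomposition of the (symmetric) covariance matrix.
eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
u1 = eigvecs[:, -1]                       # direction of maximum variance
z1 = X_centred @ u1                       # 1D projection of every sample

print("largest eigenvalue:", eigvals[-1])
print("variance of z1:    ", z1.var())    # matches the top eigenvalue
```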
Eigenfaces • Collect a set of face images • Normalize for scale and orientation (using eye locations) • Vectorise each w × h image into a D-dimensional vector, where D = wh • Construct the covariance matrix and obtain its eigenvectors (see the sketch below)
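A hedged NumPy sketch of the covariance construction, using the standard "snapshot" trick for the usual case N ≪ D; `faces` is a hypothetical random placeholder for the aligned, vectorised face images:

```python
import numpy as np

# Hypothetical data matrix: N face images, each flattened to D = w*h pixels.
N, w, h = 200, 32, 32
D = w * h
rng = np.random.default_rng(1)
faces = rng.random((N, D))                 # stand-in for real, aligned faces

# Centre the images on the mean face.
mean_face = faces.mean(axis=0)
A = faces - mean_face                      # N x D

# Snapshot trick: when N << D, eigendecompose the small N x N matrix
# A A^T / N instead of the D x D covariance A^T A / N.
small = A @ A.T / N                        # N x N
eigvals, V = np.linalg.eigh(small)         # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

# Map back: u_i = A^T v_i (then normalised) are eigenvectors of the
# full covariance, i.e. the eigenfaces.
U = A.T @ V                                # D x N
U /= np.linalg.norm(U, axis=0)
```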
EE4-62 MLCV Eigenfaces • Project data onto the subspace: the coefficients are ai = uiᵀ(x − x̄), i = 1, …, M • Reconstruction is obtained as x̃ = x̄ + Σi ai ui • Use the distance to the subspace, ‖x − x̃‖, for face recognition (see the sketch below)
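A continuation of the previous sketch (it assumes the hypothetical `U`, `mean_face`, and `faces` defined above), showing projection, reconstruction, and the distance-to-subspace test:

```python
# Keep the top M eigenfaces as the subspace basis.
M = 50
U_M = U[:, :M]                             # D x M

def project(x):
    """Coefficients a_i = u_i^T (x - mean) in the eigenface subspace."""
    return U_M.T @ (x - mean_face)

def reconstruct(a):
    """Map subspace coefficients back to a D-dimensional image."""
    return mean_face + U_M @ a

# Distance to the subspace: small for face-like images.
x = faces[0]
residual = np.linalg.norm(x - reconstruct(project(x)))
print("distance to subspace:", residual)
```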
Face Images • Eigenvectors and eigenvalue plot • Face image reconstruction • Projection coefficients (visualisation of high-dimensional data) • Face recognition (a nearest-neighbour sketch follows)
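One common way to use the projection coefficients for recognition is nearest-neighbour matching in the subspace. A minimal sketch under the same assumptions as the blocks above; the gallery names are hypothetical:

```python
# Hypothetical gallery: one labelled coefficient vector per enrolled person.
gallery = {name: project(img) for name, img in [("alice", faces[0]),
                                                ("bob", faces[1])]}

def recognise(x):
    """Nearest neighbour in eigenface-coefficient space."""
    a = project(x)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - a))

print(recognise(faces[0]))                 # -> "alice"
```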
EE4-62 MLCV Probabilistic PCA • The subspace is spanned by the orthonormal basis (the eigenvectors computed from the covariance matrix) • Each observation can be interpreted with a generative model • The probability of generating each observation is estimated (approximately) with a Gaussian distribution • PCA: uniform prior on the subspace • PPCA: Gaussian distribution (see the model below)
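For reference, a standard statement of the PPCA generative model (Tipping & Bishop); the slide's original equations did not survive extraction, so this is a reconstruction of the textbook form:

```latex
% PPCA generative model: a latent M-vector z generates a D-vector x.
\begin{align}
  \mathbf{z} &\sim \mathcal{N}(\mathbf{0},\, \mathbf{I}_M), \\
  \mathbf{x} \mid \mathbf{z} &\sim \mathcal{N}(\mathbf{W}\mathbf{z} + \boldsymbol{\mu},\, \sigma^{2}\mathbf{I}_D), \\
  \text{so that}\quad \mathbf{x} &\sim \mathcal{N}(\boldsymbol{\mu},\, \mathbf{W}\mathbf{W}^{\top} + \sigma^{2}\mathbf{I}_D).
\end{align}
```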
PCA vs LDA (Linear Discriminant Analysis) • PCA: unsupervised learning, no class labels used • LDA: supervised, finds directions that separate the classes (see the sketch below)
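A self-contained two-class Fisher LDA sketch on hypothetical toy data, to make the contrast concrete: unlike PCA, it uses the class labels through the class means and within-class scatter.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical classes with a shared covariance but shifted means.
X1 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=100)
X2 = rng.multivariate_normal([2.0, 2.0], [[1.0, 0.5], [0.5, 1.0]], size=100)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter: sum of the per-class scatter matrices.
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher's discriminant direction maximises between-class over
# within-class scatter: w is proportional to Sw^{-1} (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)
w /= np.linalg.norm(w)
print("LDA direction:", w)
```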
EE4-62 MLCV PCA vs Kernel PCA • Linear model: a linear manifold = subspace (PCA) • Nonlinear manifold: handled by kernel PCA (see the sketch below)
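A minimal kernel-PCA sketch on assumed toy data lying on a nonlinear manifold (a noisy circle), with an assumed RBF kernel; the only change from linear PCA is eigendecomposing a centred kernel matrix instead of the covariance:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data on a nonlinear manifold: a noisy circle in 2D.
theta = rng.uniform(0, 2 * np.pi, size=150)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((150, 2))

# RBF kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
sigma = 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))

# Centre the kernel matrix in feature space: Kc = H K H, H = I - 11^T/N.
N = len(X)
H = np.eye(N) - np.ones((N, N)) / N
Kc = H @ K @ H

# Eigendecompose; the projection of each training point onto the first
# kernel PC is z = Kc @ alpha, with alpha normalised by sqrt(lambda).
eigvals, eigvecs = np.linalg.eigh(Kc)
alpha = eigvecs[:, -1] / np.sqrt(eigvals[-1])
z = Kc @ alpha                             # nonlinear 1D embedding
```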
PCA vs ICA (Independent Component Analysis) • PCA rests on a Gaussian distribution assumption; ICA seeks statistically independent components • [Figure: directions PC1, PC2 and IC1, IC2 found on the same data] • (A comparison sketch follows.)
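To illustrate the contrast, a short sketch using scikit-learn (an assumed dependency, not referenced in the lecture): PCA returns orthogonal maximum-variance directions, while FastICA unmixes independent non-Gaussian sources that PCA cannot separate.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(4)
# Two independent non-Gaussian (uniform) sources, linearly mixed.
S = rng.uniform(-1, 1, size=(1000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix
X = S @ A.T

# PCA: orthogonal maximum-variance directions.
Z_pca = PCA(n_components=2).fit_transform(X)
# ICA: recovers the independent sources (up to sign and order).
Z_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
```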