EE 290A: Generalized Principal Component Analysis
Lecture 5: Generalized Principal Component Analysis
Last time
• GPCA: Problem definition
• Segmentation of multiple hyperplanes
Recover subspaces from vanishing polynomial
This Lecture
• Segmentation of general subspace arrangements, knowing the number of subspaces
• Subspace segmentation without knowing the number of subspaces
An Introductory Example
Make use of the vanishing polynomials
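To make the introductory example concrete, here is a minimal numpy sketch (illustrative, not the lecture's code; the toy data are mine): for two lines through the origin in R^2, the coefficients of the vanishing polynomial are recovered as the null space of the degree-2 embedded data matrix.

```python
import numpy as np

# Toy data: points from two lines through the origin in R^2,
# x2 = 0 (the x1-axis) and x1 = x2. Their union is the zero set of
# p(x) = x2 * (x1 - x2), a single degree-2 vanishing polynomial.
rng = np.random.default_rng(0)
t = rng.standard_normal(50)
X = np.vstack([np.column_stack([t, np.zeros(50)]),   # line x2 = 0
               np.column_stack([t, t])])             # line x1 = x2

# Degree-2 Veronese embedding: x -> (x1^2, x1*x2, x2^2).
V = np.column_stack([X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])

# Coefficients of the vanishing polynomial = null space of the
# embedded data matrix (the last right singular vector).
_, s, Vt = np.linalg.svd(V)
c = Vt[-1]
# Up to sign and scale, c is (0, 1, -1), i.e. p = x1*x2 - x2^2 = x2*(x1 - x2).
print(np.round(c / c[np.argmax(np.abs(c))], 3))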
Recover Mixture Subspace Models
Question: How to choose one representative point per subspace? (some loose answers)
• In the noise-free case, randomly pick one.
• In the noisy case, choose one close to the zero set of the vanishing polynomials. (How? See Lemma 3.9 below.)
Summary
• Using the vanishing polynomials, GPCA converts the chicken-and-egg (CAE) problem of simultaneous segmentation and estimation into one with a closed-form solution.
Step 1: Fitting Polynomials
• In general, when the dimensions of the subspaces are mixed, the set of all K-th degree polynomials that vanish on A becomes more complicated.
Polynomials may be dependent!
Even when the sample data are noisy, if K and the subspace dimensions are known, a complete list of linearly independent vanishing polynomials can be recovered in closed form from the null space of the embedded data matrix!
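A sketch of how this recovery can be implemented in general (function names and tolerances are my assumptions, not the book's reference code): embed the data with the degree-K Veronese map and read off the numerical null space.

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese_map(X, K):
    """Degree-K Veronese embedding of the rows of X (n x D).

    Each row x is mapped to all monomials of degree K in its entries;
    the output has one column per monomial.
    """
    n, D = X.shape
    exps = combinations_with_replacement(range(D), K)
    return np.column_stack([np.prod(X[:, list(idx)], axis=1) for idx in exps])

def vanishing_polynomials(X, K, tol=1e-8):
    """All linearly independent degree-K vanishing polynomials.

    Returns the right singular vectors of the embedded data matrix
    whose singular values are numerically zero; each row holds the
    monomial coefficients of one vanishing polynomial.
    """
    V = veronese_map(X, K)
    _, s, Vt = np.linalg.svd(V)
    s = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])  # pad if n < #monomials
    return Vt[s < tol * s[0]]
```

For the two-line example above, vanishing_polynomials(X, 2) returns a single row proportional to (0, 1, -1), the coefficients of x2*(x1 - x2).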
Step 2: Polynomial Differentiation
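The key fact behind this step: at a sample x that lies on exactly one subspace, the gradients of the vanishing polynomials at x span the orthogonal complement of that subspace. A minimal sketch, using central differences in place of symbolic derivatives (all names are illustrative):

```python
import numpy as np

def subspace_basis_at(x, polys, eps=1e-6, tol=1e-6):
    """Orthonormal basis of the subspace containing the sample x.

    `polys` is a list of callables p(x) that vanish on the arrangement.
    The gradients at x, taken here by central differences, span the
    normal space of the subspace through x; the small right singular
    vectors of the stacked gradients span the subspace itself.
    """
    D = len(x)
    grads = np.empty((len(polys), D))
    for i, p in enumerate(polys):
        for j in range(D):
            e = np.zeros(D); e[j] = eps
            grads[i, j] = (p(x + e) - p(x - e)) / (2 * eps)
    _, s, Vt = np.linalg.svd(grads)
    s = np.concatenate([s, np.zeros(D - len(s))])
    return Vt[s < tol * max(s[0], 1.0)].T   # columns: basis of the subspace

# Example: the plane x3 = 0 and the line x1 = x2 = 0 in R^3 form the
# zero set of p1 = x1*x3 and p2 = x2*x3.
p1 = lambda x: x[0] * x[2]
p2 = lambda x: x[1] * x[2]
print(subspace_basis_at(np.array([1.0, 2.0, 0.0]), [p1, p2]))  # spans the plane
```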
Step 3: Sample Point Selection
• Given n sample points from K subspaces, how to choose one point per subspace at which to evaluate the orthonormal basis of that subspace?
• What is the notion of optimality in choosing the best sample when a set of vanishing polynomials is given (for any algebraic set)?
In the case of segmenting hyperplanes?
Draw a random line that does not pass through the origin
Lemma 3.9: For general arrangements
• Choose samples as close to the zero set as possible (in the presence of noise).
• Avoid choosing points based on P(x) alone, as |P(x)| is merely an algebraic error, not the geometric distance.
• Avoid choosing points close to the intersection of two or more subspaces, even when P(x) = 0.
A sketch of this selection criterion follows below.
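One way to turn these criteria into a computable score, assuming the first-order distance approximation |P(x)| / ||∇P(x)|| commonly used in GPCA (the helper names and toy data are mine):

```python
import numpy as np

def pick_representative(X, polys, grads, delta=1e-12):
    """Pick one representative sample following Lemma 3.9's criteria.

    For each sample x, score sum_i p_i(x)^2 / (||grad p_i(x)||^2 + delta),
    the first-order approximation of the squared geometric distance to
    the zero set. A small |p(x)| alone is only an algebraic error; near
    an intersection the gradients shrink, so the ratio grows and such
    points are automatically discouraged.
    """
    scores = np.zeros(len(X))
    for p, g in zip(polys, grads):
        vals = np.array([p(x) for x in X])
        gnorm2 = np.array([g(x) @ g(x) for x in X])
        scores += vals**2 / (gnorm2 + delta)
    return X[np.argmin(scores)]

# Toy example: two lines in R^2, the zero set of p(x) = x1 * x2.
p = lambda x: x[0] * x[1]
dp = lambda x: np.array([x[1], x[0]])    # gradient of p
X = np.array([[1.0, 0.01],               # near the line x2 = 0
              [0.05, 0.04],              # near the intersection (origin)
              [0.03, 1.0]])              # near the line x1 = 0
# The middle point has the smallest |p(x)| but is near the intersection,
# so it is passed over; the first point is returned.
print(pick_representative(X, [p], [dp]))   # -> [1.0, 0.01]
```

In a full pipeline one would pick the best point, recover its subspace from the gradients there, and then handle the remaining subspaces, as the next slide describes.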
Estimate the Remaining (K-1) Subspaces
• Polynomial division (see the sketch below)
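A minimal sympy sketch of the division step (illustrative; it assumes the first subspace's normal has already been recovered in Step 2):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Vanishing polynomial for two lines: p = (x1 - 2*x2) * (x1 + x2).
p = sp.expand((x1 - 2*x2) * (x1 + x2))

# Suppose Step 2 recovered the first line's normal b1 = (1, -2),
# i.e. the linear form x1 - 2*x2. Divide it out:
q, r = sp.div(p, x1 - 2*x2, x1, x2)
print(q, '| remainder:', r)   # x1 + x2 | remainder: 0
# q vanishes on the remaining subspaces, so the procedure can be
# repeated, one subspace at a time.
```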
GPCA without knowing K or d's
• Determining K and the d's is straightforward when the subspaces are of equal dimension:
• If d is known, project the samples to a (d+1)-dimensional space; the problem becomes hyperplane segmentation.
• If K is known, project the samples to l-dimensional spaces for l = 1, 2, …, and compute the K-th order Veronese map until it drops rank (sketched below).
• If both K and d are unknown, try all the combinations.
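A sketch of the rank test above, specialized to hyperplanes (the stopping rule and tolerance are my assumptions): increase the Veronese degree until the embedded data matrix first drops rank; that degree is the number of hyperplanes K.

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, K):
    """Degree-K Veronese embedding of the rows of X."""
    cols = [np.prod(X[:, list(idx)], axis=1)
            for idx in combinations_with_replacement(range(X.shape[1]), K)]
    return np.column_stack(cols)

def count_hyperplanes(X, K_max=5, tol=1e-6):
    """Smallest degree K whose embedded data matrix is rank deficient.

    For an arrangement of K hyperplanes in general position, degree-k
    embeddings are full rank for k < K and drop rank exactly at k = K.
    """
    for K in range(1, K_max + 1):
        V = veronese(X, K)
        s = np.linalg.svd(V, compute_uv=False)
        if len(s) < V.shape[1] or s[-1] < tol * s[0]:
            return K
    return None

# Two lines in R^2: the rank drop appears at K = 2.
t = np.linspace(-1, 1, 40)
X = np.vstack([np.column_stack([t, 0*t]), np.column_stack([t, t])])
print(count_hyperplanes(X))   # -> 2
```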
GPCA without knowing K or d's
• Determine arrangements of different dimensions:
1. If the data are noise-free, check the Hilbert function table.
2. When the data are noisy, apply GPCA recursively.
Please read Section 3.5 for the definition of Effective Dimension.