Text Categorization, Moshe Koppel. Lecture 12: Latent Semantic Indexing

Presentation Transcript


  1. Text Categorization. Moshe Koppel. Lecture 12: Latent Semantic Indexing. Adapted from slides by Prabhaker Raghavan, Chris Manning and TK Prasad

  2. Clustering documents (and terms) • Latent Semantic Indexing • Term-document matrices are very large • But the number of topics that people talk about is small (in some sense) • Clothes, movies, politics, … • Can we represent the term-document space by a lower dimensional latent space?

  3. Term-Document Matrix • Represent each document as a numerical vector in the usual way. • Align the vectors to form a matrix. Note that this is not a square matrix.

  4. Term-Document Matrix • Represent each document as a numerical vector in the usual way. • Align the vectors to form a matrix. Note that this is not a square matrix. In a perfect world, the term-doc matrix might look like the idealized block matrix sketched on the next slides.
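
A rough sketch of how such a matrix can be built in practice, assuming scikit-learn is available (the toy corpus and variable names below are invented for illustration, not taken from the slides):

    # A rough sketch (not from the slides): building a small term-document
    # matrix with scikit-learn. The toy corpus is invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "the ship sails the ocean",
        "a boat crosses the ocean",
        "wood from the tree",
    ]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)      # documents x terms (sparse counts)
    A = X.T.toarray()                       # transpose to terms x documents
    print(vectorizer.get_feature_names_out())
    print(A)                                # M terms x N documents, not square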

  5. Intuition from block matrices • Picture an M terms × N documents matrix that is block diagonal: Block 1, Block 2, …, Block k are homogeneous non-zero blocks, and all entries outside these blocks are 0. • What's the rank of this matrix?

  6. Intuition from block matrices • In the same M terms × N documents block-diagonal picture, the vocabulary is partitioned into k topics (clusters); each doc discusses only one topic.

  7. Intuition from block matrices • In practice the matrix is only approximately block diagonal: terms like wiper, tire, V6 cluster in one block, there are a few nonzero entries outside the blocks, and near-synonyms split across documents (e.g., car 0 1 vs. automobile 1 0). • Likely there's a good rank-k approximation to this matrix.

  8. Dimension Reduction and Synonymy • Dimensionality reduction forces us to omit "details". • We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space. • The "cost" of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words. • We'll select the "least costly" mapping. • Thus, we will map synonyms to the same dimension, but avoid doing that for unrelated words.

  9. Formal Objectives • Given a term-doc matrix, M, we want to find a matrix M’ that is “similar” to M but of rank k (where k is much smaller than the rank of M). • So we need some formal measure of “similarity” between two matrices. • And we need an algorithm for finding the matrix M’. • Conveniently, there are some neat linear algebra tricks for this. So, let’s review a bit of linear algebra.

  10. Eigenvalues & Eigenvectors • Eigenvectors (for a square m×m matrix S): a non-zero vector v with Sv = λv, where v is a (right) eigenvector and λ is an eigenvalue. • How many eigenvalues are there at most? • Sv = λv only has a non-zero solution if det(S − λI) = 0; this is an m-th order equation in λ which can have at most m distinct solutions (roots of the characteristic polynomial), and they can be complex even though S is real.

  11. Useful Facts about Eigenvalues & Eigenvectors • For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal. • All eigenvalues of a real symmetric matrix are real.

  12. Example • Let S be the real, symmetric 2×2 matrix shown on the original slide. • Then the eigenvalues are 1 and 3 (nonnegative, real). • The eigenvectors are orthogonal (and real): plug these eigenvalues into (S − λI)v = 0 and solve for the eigenvectors.
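
The example matrix itself did not survive transcription. A minimal sketch, assuming the slide's example was the standard real, symmetric matrix S = [[2, 1], [1, 2]], which does have eigenvalues 1 and 3 and orthogonal eigenvectors:

    # Sketch of the eigenvalue example; the matrix S below is an assumption,
    # not taken verbatim from the slide (only its eigenvalues 1 and 3 are).
    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])              # real, symmetric
    eigenvalues, eigenvectors = np.linalg.eig(S)

    print(eigenvalues)                      # [3. 1.] (order may vary)
    print(eigenvectors)                     # columns are the eigenvectors
    # Orthogonality check: the dot product of the two eigenvectors is ~0.
    print(np.dot(eigenvectors[:, 0], eigenvectors[:, 1]))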

  13. Eigen/Diagonal Decomposition • Let S be a square m×m matrix with m linearly independent eigenvectors (a "non-defective" matrix). • Theorem: there exists an eigen decomposition S = UΛU–1 with Λ diagonal (cf. the matrix diagonalization theorem). • Columns of U are eigenvectors of S. • Diagonal elements of Λ are the eigenvalues of S. • The decomposition is unique for distinct eigenvalues.

  14. Diagonal decomposition: why/how • Let U have the eigenvectors as columns: U = [v1 … vm]. • Then SU can be written SU = S[v1 … vm] = [λ1v1 … λmvm] = UΛ. • Thus SU = UΛ, or U–1SU = Λ, and S = UΛU–1.
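
A small numerical check of this derivation (reusing the assumed 2×2 example from the previous sketch):

    # Verify SU = UΛ and S = U Λ U^-1 for a small non-defective matrix.
    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, U = np.linalg.eig(S)           # columns of U are eigenvectors of S
    Lam = np.diag(eigvals)                  # Λ: eigenvalues on the diagonal

    print(np.allclose(S @ U, U @ Lam))                  # SU = UΛ    -> True
    print(np.allclose(S, U @ Lam @ np.linalg.inv(U)))   # S = UΛU^-1 -> True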

  15. Key Point So Far • We can decompose a square matrix into a product of matrices, one of which is a diagonal matrix of eigenvalues. • But we'd like to say more: when the square matrix is also symmetric, we have a better theorem. • Note that even that isn't our ultimate destination, since the term-doc matrices we deal with aren't even square matrices. One step at a time…

  16. Symmetric Eigen Decomposition • If S is a symmetric matrix: • Theorem: there exists a (unique) eigen decomposition S = QΛQT • where: • Q–1 = QT • Columns of Q are normalized eigenvectors • Columns are orthogonal • (everything is real)
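
A sketch of the symmetric case using numpy.linalg.eigh, which is specialized for symmetric matrices and returns an orthogonal Q directly (same assumed example matrix as above):

    # Symmetric eigen decomposition S = Q Λ Q^T with Q^-1 = Q^T.
    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])              # real, symmetric
    eigvals, Q = np.linalg.eigh(S)          # columns of Q: normalized eigenvectors

    print(np.allclose(Q.T @ Q, np.eye(2)))              # Q is orthogonal -> True
    print(np.allclose(S, Q @ np.diag(eigvals) @ Q.T))   # S = QΛQ^T      -> True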

  17. Now… • Let’s find some analogous theorem for non-square matrices.

  18. MM MN V is NN Eigenvalues 1 … r of AAT are the eigenvalues of ATA. Singular values. Singular Value Decomposition For an M N matrix Aof rank rthere exists a factorization (Singular Value Decomposition = SVD) as follows: The columns of U are orthogonal eigenvectors of AAT. The columns of V are orthogonal eigenvectors of ATA.

  19. Eigen Decomposition and SVD • Note that AAT and ATA are symmetric square matrices. • AAT = UΣVTVΣUT = UΣ²UT • That's just the usual eigen decomposition for a symmetric square matrix. • AAT and ATA have special relevance for us: entry (i,j) of ATA is the dot-product similarity of documents i and j (for an unweighted matrix, the number of common terms), and entry (i,j) of AAT is the dot-product similarity of terms i and j (the number of common docs).
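
A quick numerical check that the eigenvalues of AAT are the squared singular values of A (same toy matrix as in the previous sketch):

    # The largest min(M,N) eigenvalues of A A^T equal the squared singular values of A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(5, 3)).astype(float)

    s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
    eig_AAt = np.linalg.eigvalsh(A @ A.T)       # eigenvalues of the symmetric A A^T

    top = np.sort(eig_AAt)[::-1][:len(s)]       # largest min(M,N) eigenvalues
    print(np.allclose(top, s ** 2))             # -> True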

  20. Singular Value Decomposition • Illustration of SVD dimensions and sparseness (figure omitted).

  21. Example of A = UΣVT: The matrix A • This is a standard term-document matrix; the matrix itself is shown in the original slide. • Actually, we use a non-weighted matrix here to simplify the example.

  22. Example of A = UΣVT: The matrix U • One row per term, one column per semantic dimension; there are min(M,N) columns, where M is the number of terms and N is the number of documents. • This is an orthonormal matrix: (i) Row vectors have unit length. (ii) Any two distinct row vectors are orthogonal to each other. • Think of the dimensions (columns) as "semantic" dimensions that capture distinct topics like politics, sports, economics. • Each number uij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

  23. Example of A = UΣVT: The matrix Σ • This is a square, diagonal matrix of dimensionality min(M,N) × min(M,N). • The diagonal consists of the singular values of A. • The magnitude of the singular value measures the importance of the corresponding semantic dimension. • We'll make use of this by omitting unimportant dimensions.

  24. Example of A = UΣVT: The matrix VT • One column per document, one row per semantic dimension; there are min(M,N) rows, where M is the number of terms and N is the number of documents. • Again, this is an orthonormal matrix: (i) Column vectors have unit length. (ii) Any two distinct column vectors are orthogonal to each other. • These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. • Each number vij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j.

  25. Example of A = UΣVT: All four matrices (figure omitted).

  26. LSI: Summary • We've decomposed the term-document matrix A into a product of three matrices. • The term matrix U: one (row) vector for each term. • The document matrix VT: one (column) vector for each document. • The singular value matrix Σ: a diagonal matrix of singular values, reflecting the importance of each dimension.

  27. Low-rank Approximation • SVD can be used to compute optimal low-rank approximations. • Approximation problem: find a matrix Ak of rank k that minimizes the Frobenius norm of the error, Ak = argmin over X with rank(X) = k of ||A − X||F, where Ak and X are both M × N matrices. • Typically, we want k << r.

  28. Low-rank Approximation • Solution via SVD: Ak = U diag(σ1, …, σk, 0, …, 0) VT, i.e., set the smallest r − k singular values to zero.
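
A sketch of this truncation in numpy (generic, not tied to the slides' example; the rank k and the toy matrix are arbitrary choices):

    # Rank-k approximation: keep the k largest singular values, zero the rest.
    import numpy as np

    def low_rank_approx(A, k):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_trunc = s.copy()
        s_trunc[k:] = 0.0                       # zero the r - k smallest singular values
        return U @ np.diag(s_trunc) @ Vt

    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(5, 3)).astype(float)
    A2 = low_rank_approx(A, k=2)
    print(np.linalg.matrix_rank(A2) <= 2)       # -> True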

  29. Reduced SVD • If we retain only k singular values, and set the rest to 0, then we don't need the corresponding parts of the matrices (shown in red in the original figure). • Then Σ is k×k, U is M×k, VT is k×N, and Ak is M×N. • This is referred to as the reduced SVD. • It is the convenient (space-saving) and usual form for computational applications.

  30. Approximation error • How good (bad) is this approximation? • It's the best possible, measured by the Frobenius norm of the error: min over X with rank(X) = k of ||A − X||F = ||A − Ak||F = √(σk+1² + … + σr²), where the σi are ordered such that σi ≥ σi+1. • Suggests why Frobenius error drops as k increases.
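
A numerical check of this identity, i.e., that the Frobenius error of Ak equals the root-sum-of-squares of the discarded singular values (same toy matrix as before):

    # Check ||A - A_k||_F = sqrt(σ_{k+1}^2 + ... + σ_r^2) on a toy matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(5, 3)).astype(float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 2
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]          # reduced SVD product
    err = np.linalg.norm(A - A_k, ord="fro")
    print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))  # -> True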

  31. SVD low-rank approx. of term-doc matrices • The term-doc matrix A may have M = 50,000 and N = 10 million (and rank close to 50,000). • Even so, we can construct an approximation A100 with rank 100. • Of all rank-100 matrices, it would have the lowest Frobenius error. • We can think of it as clustering our docs (or our terms) into 100 clusters. • The low-dimensional space reflects semantic associations (latent semantic space): similar terms map to similar locations in the low-dimensional space.

  32. Latent Semantic Indexing (LSI) • Perform a low-rank approximation of the document-term matrix (typical rank 100-300). • General idea: map documents (and terms) to a low-dimensional representation. • The low-dimensional space reflects semantic associations (latent semantic space). • Similar terms map to similar locations in the low-dimensional space.
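
A minimal end-to-end LSI sketch using scikit-learn's TruncatedSVD (the corpus, the choice of tf-idf weighting, and the number of components are illustrative assumptions, not details from the slides):

    # Sketch: LSI = truncated SVD of a (tf-idf weighted) document-term matrix.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "the ship sails the ocean",
        "a boat crosses the ocean",
        "wood from the tree",
        "the tree fell on the boat",
    ]

    X = TfidfVectorizer().fit_transform(docs)        # documents x terms
    lsi = TruncatedSVD(n_components=2, random_state=0)
    doc_vectors = lsi.fit_transform(X)               # each document in a 2-D latent space
    print(doc_vectors.shape)                         # (4, 2)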

  33. Some wild extrapolation • The "dimensionality" of a corpus is the number of distinct topics represented in it. • More mathematical wild extrapolation: if A has a rank-k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus.

  34. Recall the unreduced decomposition A = UΣVT (figure omitted).

  35. Reducing the dimensionality to 2 (figure omitted).

  36. Reducing the dimensionality to 2 • Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and VT to zero when computing the product A = UΣVT.

  37. Original matrix A vs. reduced A2 = UΣ2VT • We can view A2 as a two-dimensional representation of the matrix. • We have performed a dimensionality reduction to two dimensions.

  38. Why is the reduced matrix "better"? • Similarity of d2 and d3 in the original space: 0. • Similarity of d2 and d3 in the reduced space: 0.52 · 0.28 + 0.36 · 0.16 + 0.72 · 0.36 + 0.12 · 0.20 + (−0.39) · (−0.08) ≈ 0.52
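
In general, the reduced-space similarity of two documents is just the dot product of their columns in A2. A generic sketch (the toy matrix below is random, not the slides' example, so the numbers will differ):

    # Dot-product similarity of two documents before and after a rank-2 reduction.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.integers(0, 2, size=(6, 4)).astype(float)   # toy terms x docs matrix

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s2 = s.copy()
    s2[2:] = 0.0                                        # keep only two singular values
    A2 = U @ np.diag(s2) @ Vt                           # reduced matrix A2

    i, j = 1, 2                                         # documents d2 and d3 (0-indexed)
    print(A[:, i] @ A[:, j])                            # similarity in the original space
    print(A2[:, i] @ A2[:, j])                          # similarity in the reduced space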

  39. Why the reduced matrix is "better" • "boat" and "ship" are semantically similar. The "reduced" similarity measure reflects this.

  40. Toy Illustration • Latent semantic space: illustrative example courtesy of Susan Dumais (figure omitted).

  41. LSI has many applications • The general idea is quite standard linear algebra. • Its original application in computational linguistics was information retrieval (Deerwester, Dumais et al.). • In IR it overcomes two problems: polysemy and synonymy. • In fact, it is rarely used in IR because most IR problems involve huge corpora, and SVD algorithms aren't efficient enough for use on such large corpora.

  42. Extensions • Subsequent work (Hofmann) extended LSI to probabilistic LSI. • That was further extended (Blei, Ng & Jordan) to Latent Dirichlet Allocation (LDA).
