
Learning the parts of objects by nonnegative matrix factorization


Presentation Transcript


  1. Learning the parts of objects by nonnegative matrix factorization • D.D. Lee, Bell Labs • H.S. Seung, MIT • Presenter: Zhipeng Zhao

  2. Introduction • NMF (Nonnegative Matrix Factorization). Theory: perception of the whole is based on perception of its parts. • Comparison with two other matrix factorization methods: PCA (Principal Component Analysis) and VQ (Vector Quantization).

  3. Comparison • Common features: • Represent a face as a linear combination of basis images. • Matrix factorization: $V \approx WH$. V: an $n \times m$ matrix, each column of which contains the n nonnegative pixel values of one of the m facial images. W ($n \times r$): the r columns of W are called basis images. H ($r \times m$): each column of H is called an encoding.
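A minimal sketch of this factorization, assuming scikit-learn (which is not used in the slides, and which by default minimizes a squared error rather than the divergence objective that appears later). The sizes here (19×19 images, r = 49) are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

n, m, r = 19 * 19, 100, 49       # pixels per image, number of faces, number of basis images
V = np.random.rand(n, m)         # stand-in for a real n x m matrix of facial images

model = NMF(n_components=r, init='random', random_state=0, max_iter=500)
W = model.fit_transform(V)       # n x r: the r columns are basis images
H = model.components_            # r x m: each column is the encoding of one face

assert W.shape == (n, r) and H.shape == (r, m)
assert (W >= 0).all() and (H >= 0).all()    # both factors are nonnegative
```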

  4. Comparison (cont’d)
  • Representation: NMF is parts-based; PCA and VQ are holistic.
  • Basis images: NMF learns localized features; PCA learns eigenfaces; VQ learns whole faces.
  • Constraints on W and H: NMF allows multiple basis images to represent a face, but only through additive combinations; in PCA, each face is approximated by a linear combination of all the eigenfaces; in VQ, each column of H is constrained to be a unary vector, so every face is approximated by a single basis image.
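One way to see the constraint difference in practice is a toy sketch, again assuming scikit-learn with random illustrative data: PCA components contain negative entries (subtractive combinations are allowed), while both NMF factors are nonnegative.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

X = np.random.rand(100, 361)                  # 100 samples of nonnegative "pixel" data
pca = PCA(n_components=5).fit(X)
nmf = NMF(n_components=5, init='random', random_state=0, max_iter=500).fit(X)

print((pca.components_ < 0).any())            # True: PCA allows cancellation between parts
print((nmf.components_ < 0).any())            # False: NMF permits only additive combinations
```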

  5. Implementation of NMF • Iterative algorithm, via the multiplicative update rules
  $W_{ia} \leftarrow W_{ia} \sum_{\mu} \frac{V_{i\mu}}{(WH)_{i\mu}} H_{a\mu}$,
  $W_{ia} \leftarrow \frac{W_{ia}}{\sum_j W_{ja}}$,
  $H_{a\mu} \leftarrow H_{a\mu} \sum_i W_{ia} \frac{V_{i\mu}}{(WH)_{i\mu}}$.

  6. Implementation (cont’d) • Objective function:
  $F = \sum_{i=1}^{n} \sum_{\mu=1}^{m} \left[ V_{i\mu} \log (WH)_{i\mu} - (WH)_{i\mu} \right]$
  • The updates converge to a local maximum of this objective function, which is related to the likelihood of generating the images in V from the basis W and encodings H.
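As a concrete illustration, here is a minimal NumPy sketch of these multiplicative updates (not from the slides; the eps term and the fixed iteration count are implementation choices added for numerical stability):

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates that locally maximize
    F = sum_{i,u} [ V_iu * log((WH)_iu) - (WH)_iu ] for nonnegative V of shape (n, m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    W /= W.sum(axis=0, keepdims=True)        # columns of W sum to 1
    H = rng.random((r, m))
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= W.T @ (V / WH)                  # H_au <- H_au * sum_i W_ia * V_iu / (WH)_iu
        WH = W @ H + eps
        W *= (V / WH) @ H.T                  # W_ia <- W_ia * sum_u (V_iu / (WH)_iu) * H_au
        W /= W.sum(axis=0, keepdims=True)    # renormalize: W_ia <- W_ia / sum_j W_ja
    return W, H

# Toy usage: factor random "images" (361 pixels x 100 faces) into 49 basis images.
V = np.random.default_rng(1).random((361, 100))
W, H = nmf(V, r=49)
print(W.shape, H.shape)                      # (361, 49) (49, 100)
```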

  7. Network model of NMF

  8. Semantic analysis of text documents using NMF • A corpus of documents is summarized by a matrix V, where $V_{i\mu}$ is the number of times the ith word in the vocabulary appears in the $\mu$th document. • The NMF algorithm involves finding the approximate factorization of this matrix, $V \approx WH$, into a feature set W and hidden variables H, in the same way as was done for faces.
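A toy illustration of the same idea on text; the corpus, the scikit-learn usage, and the choice of two features are all assumptions made for the example:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

docs = [
    "the court ruled on the case",
    "the judge heard the case in court",
    "the team won the game",
    "the game went to overtime and the team lost",
]
vec = CountVectorizer(stop_words='english')
V = vec.fit_transform(docs).T.toarray().astype(float)   # V[i, u] = count of word i in document u
words = vec.get_feature_names_out()

model = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
W = model.fit_transform(V)     # words x features: the semantic feature set
H = model.components_          # features x documents: hidden variables per document

for a in range(W.shape[1]):
    top = np.argsort(W[:, a])[::-1][:3]
    print(f"feature {a}:", [words[i] for i in top])     # most-weighted words per feature
```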

  9. Semantic analysis of text documents using NMF (cont’d) • VQ: a single hidden variable is active for each document; if the same variable is active for a group of documents, they are semantically related. • PCA: allows activation of multiple semantic variables, but they are difficult to interpret. • NMF: it makes sense for each document to be associated with some small subset of a large array of topics.

  10. Limitations of NMF • Not suitable for learning parts in complex cases, which require fully hierarchical models with multiple levels of hidden variables. • NMF does not learn anything about the “syntactic” relationships between parts: NMF assumes that the hidden variables are nonnegative, but makes no further assumptions about their statistical dependencies.
