
Preliminary Exam


Presentation Transcript


  1. Department of Electrical and Computer Engineering. Hierarchical Dirichlet Processes and Infinite HMMs. Submitted to: Dr. Joseph Picone, Examining Committee Chair; Dr. Iyad Obeid, Committee Member, Dept. of Electrical and Computer Engineering; Dr. Marc Sobel, Committee Member, Department of Statistics; Dr. Chang-Hee Won, Committee Member, Dept. of Electrical and Computer Engineering; Dr. Slobodan Vucetic, Committee Member, Dept. of Computer and Information Sciences. March 6, 2012. Prepared by: Amir Harati, PhD Candidate. PhD Advisor: Dr. Joseph Picone, Professor and Chair, Department of Electrical and Computer Engineering, Temple University, College of Engineering, 1947 North 12th Street, Philadelphia, Pennsylvania 19122. Tel: 215-204-7597. Email: amir.harati@gmail.com. Preliminary Exam

  2. Motivation • Parametric models can capture only a bounded amount of information from the data. • Real data are complex, so parametric assumptions are often wrong. • Nonparametric models can lead to model selection/averaging solutions without paying the cost of those methods. • In addition, Bayesian methods often provide a mathematically well-defined framework with better extensibility. (Figure: all possible data sets of size n; from [1].)

  3. Motivation • Speech recognizer architecture (block diagram: Input Speech → Acoustic Front-end → Search, using Acoustic Models P(A|W) and a Language Model P(W) → Recognized Utterance). • The performance of the system depends on the quality of the acoustic models. • HMMs and mixture models are frequently used for acoustic modeling. • The number of models and the degree of parameter sharing are among the most important model selection problems in a speech recognizer. • Can hierarchical nonparametric Bayesian modeling help us?

  4. Outline • Background • Hierarchical Dirichlet Process • Posterior Sampling in the CRF • Augmented Posterior Representation Sampler • HDP-HMM • Direct Assignment Sampler • Block Sampler • Sequential Sampler • Demonstrations • Future Work and Discussion

  5. Background • $(\Theta, \mathcal{B})$ is a measurable space, where $\mathcal{B}$ is the sigma-algebra. • A measure $\mu$ over $(\Theta, \mathcal{B})$ is a function from $\mathcal{B}$ to $[0, \infty]$ such that $\mu(\emptyset) = 0$ and $\mu\left(\bigcup_i A_i\right) = \sum_i \mu(A_i)$ for disjoint sets $A_i$. • For a probability measure, $\mu(\Theta) = 1$. • A Dirichlet distribution is a distribution over the K-dimensional probability simplex: $(\pi_1, \dots, \pi_K) \sim \mathrm{Dir}(\alpha_1, \dots, \alpha_K)$. • Examples of Dirichlet distributions. From [2]
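As a small illustration of the Dirichlet distribution (not part of the original slides), a minimal NumPy sketch that draws samples on the 3-dimensional simplex; the concentration values below are arbitrary choices:

```python
import numpy as np

# Draw samples from a Dirichlet distribution on the K-dimensional simplex.
# The concentration parameters are illustrative, not taken from the talk.
rng = np.random.default_rng(0)

for alpha in ([1.0, 1.0, 1.0],     # uniform over the simplex
              [10.0, 10.0, 10.0],  # concentrated near the center
              [0.1, 0.1, 0.1]):    # mass pushed toward the corners
    samples = rng.dirichlet(alpha, size=5)
    print(alpha, samples.sum(axis=1))  # each row sums to 1
```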

  6. Background • A Dirichlet Process (DP) is a random probability measure $G$ over $(\Theta, \mathcal{B})$ such that for any finite measurable partition $(A_1, \dots, A_K)$ of $\Theta$ we have $(G(A_1), \dots, G(A_K)) \sim \mathrm{Dir}(\alpha H(A_1), \dots, \alpha H(A_K))$. • And we write: $G \sim \mathrm{DP}(\alpha, H)$. • A DP is discrete with probability one: $G = \sum_{k=1}^{\infty} \pi_k \delta_{\theta_k}$. • $H$ is the base distribution and acts like the mean of the DP; $\alpha$ is the concentration parameter and is proportional to the inverse of the variance. • Stick-breaking construction: $\pi_k = v_k \prod_{l<k} (1 - v_l)$ with $v_k \sim \mathrm{Beta}(1, \alpha)$ and $\theta_k \sim H$. • Polya urn scheme: $\theta_n \mid \theta_{1:n-1} \sim \frac{1}{n-1+\alpha} \sum_{i=1}^{n-1} \delta_{\theta_i} + \frac{\alpha}{n-1+\alpha} H$. • Chinese restaurant process (CRP): customers join an existing table with probability proportional to its occupancy, or a new table with probability proportional to $\alpha$.
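A minimal sketch of the stick-breaking construction, using a truncation level and a standard normal base distribution as illustrative assumptions:

```python
import numpy as np

def stick_breaking(alpha, base_sampler, truncation=100, rng=None):
    """Truncated stick-breaking approximation of a draw G ~ DP(alpha, H).

    Returns atom locations theta_k (drawn i.i.d. from the base distribution H)
    and weights pi_k = v_k * prod_{l<k} (1 - v_l) with v_k ~ Beta(1, alpha).
    """
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights = v * remaining
    atoms = np.array([base_sampler(rng) for _ in range(truncation)])
    return atoms, weights

# Example: base distribution H = N(0, 1), concentration alpha = 1.0
atoms, weights = stick_breaking(1.0, lambda rng: rng.normal(0.0, 1.0))
print(weights[:5], weights.sum())  # weights sum to ~1 for a large truncation
```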

  7. Hierarchical Dirichlet Process (HDP) • Grouped-data clustering problem: consider topic modeling. In this problem, each document is a group, and we are interested in modeling each document with a mixture while sharing mixture components across the groups. • For each group we need a DP; we use a hierarchical architecture to share clusters across the groups. • Sharing of atoms is obtained by using a common (discrete) DP draw as the base distribution for each group's DP. From [3,4]

  8. HDP • Stick-breaking construction: the global measure is $G_0 = \sum_k \beta_k \delta_{\theta_k}$ with $\beta \sim \mathrm{GEM}(\gamma)$ and $\theta_k \sim H$; each group-level measure is $G_j = \sum_k \pi_{jk} \delta_{\theta_k}$ with $\pi_j \sim \mathrm{DP}(\alpha, \beta)$. From [5]

  9. HDP Chinese Restaurant Franchise (CRF) • Each group corresponds to a restaurant. • There is a franchise-wide menu with an unbounded number of entries. • The number of dishes grows logarithmically with the number of tables and doubly logarithmically with the number of data points. • Reinforcement effect: new customers tend to sit at tables with many other customers and to choose dishes that have been chosen by many other tables. From [6]
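To illustrate the reinforcement effect, here is a sketch of seating customers in a single restaurant (a plain CRP; the franchise adds a second, shared CRP over dishes, which is omitted here). The customer count and concentration value are arbitrary:

```python
import numpy as np

def chinese_restaurant_process(n_customers, alpha, rng=None):
    """Simulate table assignments for a single restaurant (CRP).

    Each customer joins an existing table with probability proportional to its
    occupancy (the reinforcement / rich-get-richer effect) and starts a new
    table with probability proportional to alpha.
    """
    rng = rng or np.random.default_rng()
    counts = []            # customers per table
    assignments = []
    for _ in range(n_customers):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)       # open a new table
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments, counts

assignments, counts = chinese_restaurant_process(1000, alpha=2.0)
print(len(counts), counts[:10])    # the number of tables grows ~ alpha*log(n)
```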

  10. HDP • Posterior distribution interpretation: at the beginning the weight on the base distribution is large, so new dishes are relatively likely and draws are concentrated around the base distribution. After many tables become occupied, this weight gets smaller; as a result, the probability of a new draw becomes smaller, but new draws are not concentrated around the average of the existing atoms. => New draws are not likely, but if they happen, they tend to be different from the average.

  11. Posterior sampling in the CRF • Sample table assignments (t): given the dishes and the table labels of the other customers, sample the table for each customer. • If a new table is selected, sample its dish. • Sample dishes (k) for each table. • For exponential-family emissions we can simply update the cached sufficient statistics; for Gaussian emissions, the likelihoods can then be computed from these cached statistics.

  12. Posterior Representation Sampler • Sample z; if a new component is chosen, instantiate it. • Sample m: Antoniak showed that if $\pi \sim \mathrm{DP}(\alpha, H)$ and we take $N$ draws from $\pi$, then the number of unique draws $K$ is distributed as $p(K \mid N, \alpha) = s(N, K)\,\alpha^{K}\,\frac{\Gamma(\alpha)}{\Gamma(\alpha + N)}$, where $s(N, K)$ is the unsigned Stirling number of the first kind. • Alternatively we can simulate a CRF: for each $(j, k)$ set $m_{jk} = 0$ and $n = 0$; for each customer in restaurant $j$ eating dish $k$, sample $x \sim \mathrm{Bernoulli}\!\left(\frac{\alpha \beta_k}{n + \alpha \beta_k}\right)$, increment $n$, and if $x = 1$ increment $m_{jk}$. • Sample $\beta$ and the remaining parameters.
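A sketch of the CRF-simulation alternative for sampling the number of tables; the customer count and the value standing in for alpha * beta_k are made up for illustration:

```python
import numpy as np

def sample_num_tables(n_customers, concentration, rng=None):
    """Sample the number of tables m for customers eating one dish by
    simulating the restaurant sequentially (the alternative to the
    Antoniak / Stirling-number form on the slide).

    `concentration` plays the role of alpha * beta_k for restaurant j, dish k.
    """
    rng = rng or np.random.default_rng()
    m = 0
    for n in range(n_customers):
        # The n-th customer opens a new table with prob conc / (n + conc).
        if rng.random() < concentration / (n + concentration):
            m += 1
    return m

# Example: 50 customers eating dish k with alpha * beta_k = 1.5 (made-up values)
print(sample_num_tables(50, 1.5))
```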

  13. Topic Modeling [3,4]

  14. Hidden Markov Models (HMMs) • HMMs are a dynamic variant of mixture models. • An HMM can be characterized by its transition and emission matrices. • The number of states and the number of mixture components must be specified a priori; the topology is also fixed. • Infinite HMMs: an HMM with an unbounded number of states and an unbounded number of mixture components per state. • For each state we replace the corresponding row of the transition matrix with a DP. • The DPs must be linked to make state sharing possible. • An HDP is used to tie the state transition distributions. • Each state can independently use another DP to model an unbounded emission mixture. • The original HDP-HMM suffers from a lack of state persistence; this problem is solved by adding a sticky parameter.
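As a rough illustration of how the sticky parameter biases self-transitions, here is a weak-limit (finite, L-state) sketch of drawing sticky HDP-HMM transition rows; the hyper-parameter values are illustrative, not taken from the talk:

```python
import numpy as np

def sticky_hdp_hmm_transitions(L, gamma, alpha, kappa, rng=None):
    """Weak-limit sketch of sticky HDP-HMM transition rows.

    beta ~ Dirichlet(gamma/L, ..., gamma/L) is the shared "menu" over states;
    row j is pi_j ~ Dirichlet(alpha * beta + kappa * e_j), where the sticky
    parameter kappa adds extra mass to the self-transition.
    """
    rng = rng or np.random.default_rng()
    beta = rng.dirichlet(np.full(L, gamma / L))
    pi = np.empty((L, L))
    for j in range(L):
        concentration = alpha * beta + kappa * np.eye(L)[j]
        pi[j] = rng.dirichlet(concentration)
    return beta, pi

# Illustrative hyper-parameter values (not from the presentation):
beta, pi = sticky_hdp_hmm_transitions(L=10, gamma=1.0, alpha=1.0, kappa=5.0)
print(np.diag(pi))   # self-transition probabilities are inflated by kappa
```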

  15. HDP-HMM • Definition: $\beta \sim \mathrm{GEM}(\gamma)$; $\pi_j \sim \mathrm{DP}\!\left(\alpha + \kappa, \frac{\alpha \beta + \kappa \delta_j}{\alpha + \kappa}\right)$; $z_t \sim \pi_{z_{t-1}}$; $x_t \sim F(\theta_{z_t})$. • CRF with loyal customers: each restaurant has a specialty dish which is also served in other restaurants. If a customer eats the specialty dish (likely), then his children go to the same restaurant and likely eat the same dish. However, if the customer eats another dish, then his children go to the restaurant indexed by that dish and more likely eat its specialty dish. From [7]

  16. Direct Assignment Sampler • Sample the augmented state (state and mixture component). • Sample $\beta$ by first sampling the auxiliary variables. • Sample m using the Antoniak distribution; alternatively, we can simulate a CRF. • Sample the override variable. • Adjust the number of informative tables. • Sample $\beta$.

  17. Some Notes • Sampling the override variable is performed to cancel the bias introduced by the sticky parameter: the sticky parameter effectively changes (overrides) the dish that gets assigned to a table, and to obtain an unbiased estimate we have to take this into account. • The direct assignment sampler suffers from slow convergence rates. • Parameters are integrated out; in other words, this sampler can only be used for inference, not learning. • If we want to perform learning, we have to sample the parameters by simulation (more computation). • We need to sample all states at once. • We are interested in doing learning and inference at once.

  18. Forward-Backward Probabilities • The joint probability of the state and mixture component can be written in terms of forward and backward probabilities. • In this work, the forward probabilities are approximated. • The backward probabilities are computed with a backward recursion; combining the two gives the conditional distribution used for sampling.
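A generic backward-recursion sketch for a finite-state HMM; the per-step normalization is a common numerical convenience, and the exact messages used in this work may differ:

```python
import numpy as np

def backward_messages(pi, likelihoods):
    """Backward recursion for a finite-state HMM.

    pi[j, k]          : transition probability from state j to state k
    likelihoods[t, k] : p(x_t | z_t = k), the emission likelihood at time t
    Returns m[t, j] proportional to p(x_{t+1:T} | z_t = j), rescaled at each
    step to avoid numerical underflow.
    """
    T, K = likelihoods.shape
    m = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        m[t] = pi @ (likelihoods[t + 1] * m[t + 1])
        m[t] /= m[t].sum()          # rescale for numerical stability
    return m

# Toy example with made-up numbers:
pi = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = np.array([[0.5, 0.1], [0.4, 0.2], [0.1, 0.6]])
print(backward_messages(pi, lik))
```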

  19. Block Sampler • Compute the backward probabilities. • Sample the augmented state sequence. • Sample the override variable and adjust the number of tables as in the previous algorithm. • Update the cache and then sample the parameters. • Sample the remaining auxiliary variables and $\beta$. • Optionally sample the hyper-parameters.

  20. Particle Filter • Dynamic (state-space) system. • Update: weight the particles by the likelihood of the new observation. • Propagate: draw new particles from the system dynamics.
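A minimal bootstrap particle filter sketch showing the update / resample / propagate cycle for a generic state-space model; the toy random-walk model and all of its numbers are made up for illustration:

```python
import numpy as np

def bootstrap_particle_filter(observations, init, propagate, likelihood,
                              n_particles=500, rng=None):
    """Generic bootstrap particle filter (update / resample / propagate).

    init(rng, n)              -> initial particles
    propagate(particles, rng) -> particles drawn from the dynamics
    likelihood(y, particles)  -> p(y | particle) for each particle
    """
    rng = rng or np.random.default_rng()
    particles = init(rng, n_particles)
    for y in observations:
        # Update: weight particles by the likelihood of the new observation.
        weights = likelihood(y, particles)
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        # Propagate: push particles through the dynamics.
        particles = propagate(particles, rng)
    return particles

# Toy example: random-walk state with Gaussian observations (made-up model).
obs = np.array([0.1, 0.3, 0.2, 0.5])
final = bootstrap_particle_filter(
    obs,
    init=lambda rng, n: rng.normal(0, 1, n),
    propagate=lambda p, rng: p + rng.normal(0, 0.1, p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * (y - p) ** 2),
)
print(final.mean())
```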

  21. Sequential Learning and Inference • Calculate the weights. • Resample the particles. • Propagate the particles. • If a new state is initiated, draw its parameters. • Update the hyper-parameters. • Resample.
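The particle resampling step can be implemented in several ways; one common choice (an assumption here, since the talk does not specify the scheme) is systematic resampling:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: return indices of the particles that survive,
    drawn in proportion to their normalized weights."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    return np.searchsorted(cumulative, positions)

w = np.array([0.1, 0.2, 0.3, 0.4])
print(systematic_resample(w))        # indices of the surviving particles
```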

  22. State Persistence Demo [7]

  23. Fast Switching Demo [7]

  24. Comparison to Sparse Dirichlet Prior [7]

  25. Speaker Diarization [7]

  26. Speaker Diarization [7]

  27. Alice in Wonderland [1,3] • Training over 1,000 characters. • Testing over another 1,000 characters. • Output: characters (including space and punctuation).

  28. Future Work • Can we use an approach similar to speaker diarization to discover a new set of acoustic units (instead of phonemes)? This problem seems to fit particularly well in a nonparametric setting, since the number of units is not known a priori and should be estimated from the data. The only significant difficulty is forming a dictionary for the new units. • How can we define a structured HDP-HMM (e.g., left-to-right) without violating the Bayesian framework (i.e., without heuristics)? • From experiments, we know speaker-dependent models work significantly better than speaker-independent models; for example, a speech recognizer with gender-based models performs better than one with universal models for all speakers. The nonparametric Bayesian framework provides two features that can facilitate speaker-dependent systems: (1) the number of speaker clusters is not known a priori and can grow as new data is obtained; (2) parameter sharing and model (and state) tying can be accomplished elegantly using proper hierarchies. Depending on the available training data, the system would have a different number of models for different acoustic units, with all acoustic units tied; moreover, each model can have a different number of states and a different number of mixture components per state.

  29. References • [1] Ghahramani, Z. (2010). Bayesian Hidden Markov Models and Extensions. Invited talk at CoNLL, Uppsala, Sweden. • [2] Ghahramani, Z. (2005). Tutorial on Nonparametric Bayesian Methods. Talk at UAI. • [3] Teh, Y., Jordan, M., Beal, M., & Blei, D. (2004). Hierarchical Dirichlet Processes. Technical Report 653, UC Berkeley. • [4] Teh, Y., & Jordan, M. (2010). Hierarchical Bayesian Nonparametric Models with Applications. In N. Hjort, C. Holmes, P. Mueller, & S. Walker (Eds.), Bayesian Nonparametrics: Principles and Practice. Cambridge, UK: Cambridge University Press. • [5] Teh, Y. W. (2009). Bayesian Nonparametrics. Talk at MLSS Cambridge. • [6] Jordan, M. I. (2005). Dirichlet Processes, Chinese Restaurant Processes and All That. Tutorial presentation at the NIPS Conference. • [7] Fox, E., Sudderth, E., Jordan, M., & Willsky, A. (2011). A Sticky HDP-HMM with Application to Speaker Diarization. The Annals of Applied Statistics, 5, 1020-1056. • [8] Rodriguez, A. (2011). On-Line Learning for the Infinite Hidden Markov Model. Communications in Statistics: Simulation and Computation, 40(6), 879-893.

  30. Thank You!
