
Articulatory Feature-Based Speech Recognition

A summary of a collaborative project on articulatory feature (AF)-based speech recognition, covering the goals, history, challenges, tools, and future directions explored by the team members and advisors. The project aims to improve ASR performance and the modeling of co-articulation, with potential applications to audio-visual and multilingual ASR. Going beyond prior work that combined AF classifiers with phone-based recognizers, the team explores the benefits and design issues of complete AF-based recognizers. The presentation includes comparisons of observation and pronunciation models, analysis of articulatory phenomena, and AF-based audio-visual speech recognition.



Presentation Transcript


  1. Articulatory Feature-Based Speech Recognition. JHU WS06 Final Team Presentation, August 17, 2006. [Title-slide figure: a multistream DBN with variables word, ind1–ind3, U1–U3, sync1,2, sync2,3, and S1–S3]

  2. Project Participants Team members: Karen Livescu (MIT) Arthur Kantor (UIUC) Ozgur Cetin (ICSI Berkeley) Partha Lal (Edinburgh) Mark Hasegawa-Johnson (UIUC) Lisa Yung (JHU) Simon King (Edinburgh) Ari Bezman (Dartmouth) Nash Borges (DoD, JHU) Stephen Dawson-Haggerty (Harvard) Chris Bartels (UW) Bronwyn Woods (Swarthmore) Satellite members/advisors: Jeff Bilmes (UW), Nancy Chen (MIT), Xuemin Chi (MIT), Ghinwa Choueiter (MIT), Trevor Darrell (MIT), Edward Flemming (MIT), Eric Fosler-Lussier (OSU), Joe Frankel (Edinburgh/ICSI), Jim Glass (MIT), Katrin Kirchhoff (UW), Lisa Lavoie (Elizacorp, Emerson), Mathew Magimai (ICSI), Erik McDermott (NTT), Daryush Mehta (MIT), Florian Metze (Deutsche Telekom), Kate Saenko (MIT), Janet Slifka (MIT), Stefanie Shattuck-Hufnagel (MIT), Amar Subramanya (UW)

  3. Why are we here? • Why articulatory feature-based ASR? • Improved modeling of co-articulation • Potential application to audio-visual and multilingual ASR • Improved ASR performance with feature-based observation models in some conditions • Potential savings in training data • Compatibility with more recent theories of phonology (autosegmental phonology, articulatory phonology) • Why now? • A number of sites working on complementary aspects of this idea, e.g. • U. Edinburgh (King et al.) • UIUC (Hasegawa-Johnson et al.) • MIT (Livescu, Saenko, Glass, Darrell) • Recently developed tools (e.g. GMTK) for systematic exploration of the model space

  4. A brief history • Many have argued for replacing the single phone stream with multiple sub-phonetic feature streams (Rose et al. ‘95, Ostendorf ‘99, ‘00, Nock ‘00, ‘02, Niyogi et al. ‘99 (for AVSR)) • Many have worked on parts of the problem • AF classification/recognition (Kirchhoff, King, Frankel, Wester, Richmond, Hasegawa-Johnson, Borys, Metze, Fosler-Lussier, Greenberg, Chang, Saenko, ...) • Pronunciation modeling (Livescu & Glass, Bates) • Many have combined AF classifiers with phone-based recognizers (Kirchhoff, King, Metze, Soltau, ...) • Some have built HMMs by combining AF states into product states (Deng et al., Richardson and Bilmes) • Only very recently has work begun on end-to-end recognition with multiple streams of AF states (Hasegawa-Johnson et al. ‘04, Livescu ‘05) • No prior work on AF-based models for AVSR

  5. A (partial) taxonomy of design issues [figure: a decision tree over the design space, placing prior work (Kirchhoff ’96, Deng ’97, Richardson ’00, Kirchhoff ’02, Metze ’02, Juneja ’04, Wester et al. ’04, WS04, Livescu ’04, ’05) by its choices] • Factored state (multistream structure)? (FHMMs, CHMMs) • Factored observation model? • Observation model type: GM, SVM, or NN • Context-dependent (CD) or not? • State asynchrony: coupled state transitions, soft asynchrony within unit, soft asynchrony within word, free, cross-word • (Not to mention choice of feature sets... same in hidden structure and observation model?)

  6. Definitions: Pronunciation and observation modeling • Language model P(w): w = “makes sense...” • Pronunciation model P(q|w): q = [ m m m ey1 ey1 ey2 k1 k1 k1 k2 k2 s ... ] • Observation model P(o|q): o = the acoustic observation sequence [spectrogram image in original slide]
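A minimal sketch of how these three factors combine when scoring a hypothesis, in Python; the toy log-probability tables, the state labels, and the score helper are illustrative assumptions, not the workshop's actual models.

```python
# Toy, hand-set log-probabilities standing in for trained models (illustrative only).
log_p_w = {"makes": -2.3}                                       # language model log P(w)
log_p_q_given_w = {("makes", ("m", "ey1", "k1", "s")): -0.5}    # pronunciation model log P(q|w)

def log_p_o_given_q(obs_frames, states):
    """Stand-in observation model log P(o|q): one score per (frame, state) pair."""
    return sum(frame.get(s, -8.0) for frame, s in zip(obs_frames, states))

def score(word, states, obs_frames):
    """log P(w) + log P(q|w) + log P(o|q) for a single-word hypothesis."""
    return (log_p_w.get(word, -12.0)
            + log_p_q_given_w.get((word, tuple(states)), -6.0)
            + log_p_o_given_q(obs_frames, states))

# Four frames of toy per-state acoustic scores and one candidate state sequence
frames = [{"m": -1.0}, {"ey1": -0.8}, {"k1": -1.2}, {"s": -0.9}]
print(score("makes", ["m", "ey1", "k1", "s"], frames))          # -6.7
```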

  7. Project goals • Building complete AF-based recognizers and understanding the design issues involved • A world of areas to explore... • Comparisons of observation models (Gaussian mixtures over acoustic features, hybrid models [Morgan & Bourlard 1995], tandem models [Ellis et al. 2001]) and pronunciation models (articulatory asynchrony and reduction models) • Analysis of articulatory phenomena: dependence on context, speaker, speaking rate, speaking style, ... • Application of AFSR to audio-visual speech recognition • All require some resources: feature sets, manual and automatic AF alignments, tools

  8. That was the vision... What we focused on at WS06 • Comparisons of AF-based observation models in the context of phone-based recognizers • Comparisons of AF-based pronunciation models using Gaussian mixture-based observation models • AF-based audio-visual speech recognition • Resources: feature sets, manual AF alignments, tools (tying, visualization)

  9. Outline • Preliminaries: Dynamic Bayesian networks, feature sets, data, baselines (Karen, Simon) • Hybrid observation models (Simon) • Tandem observation models (Ozgur, Arthur) • Multistream AF-based pronunciation models (Karen, Chris, Nash, Lisa, Bronwyn) • AF-based audio-visual speech recognition (Mark, Partha) • Analysis (Nash, Lisa, Ari) BREAK • Structure learning (Steve) • Student proposals (Arthur, Chris, Partha, Bronwyn?) • Summary, conclusions, future work (Karen)

  10. Outline • Preliminaries: Dynamic Bayesian networks, feature sets, data, baselines • Hybrid observation models • Tandem observation models • Multistream AF-based pronunciation models • AF-based audio-visual speech recognition • Analysis • BREAK • Structure learning • Student proposals • Summary, conclusions, future work

  11. Bayesian networks (BNs) [figure: example BN over variables A, B, C, D with edges A→B, B→C, B→D, C→D] • Directed acyclic graph (DAG) with one-to-one correspondence between nodes and variables X1, X2, ..., XN • Node Xi with parents pa(Xi) has a “local” probability function p(Xi | pa(Xi)) • Joint probability = product of local probabilities: p(x1, ..., xN) = ∏i p(xi | pa(xi)) • Example: p(a,b,c,d) = p(a) p(b|a) p(c|b) p(d|b,c)
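The example factorization on this slide, written out as runnable Python; the binary value sets and the numbers in the local tables are made up purely for illustration.

```python
from itertools import product

# Local probability tables for the example BN with edges A→B, B→C, B→D, C→D
# (all values invented for illustration; variables are binary).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}   # key = (b, a)
p_c_given_b = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.1, (1, 1): 0.9}   # key = (c, b)
p_d_given_bc = {(0, 0, 0): 0.9, (1, 0, 0): 0.1, (0, 0, 1): 0.4, (1, 0, 1): 0.6,
                (0, 1, 0): 0.3, (1, 1, 0): 0.7, (0, 1, 1): 0.05, (1, 1, 1): 0.95}  # key = (d, b, c)

def joint(a, b, c, d):
    """p(a,b,c,d) = p(a) p(b|a) p(c|b) p(d|b,c)"""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)] * p_d_given_bc[(d, b, c)]

# Sanity check: a product of valid local distributions yields a valid joint (sums to 1).
assert abs(sum(joint(*x) for x in product([0, 1], repeat=4)) - 1.0) < 1e-9
print(joint(0, 1, 1, 0))   # 0.6 * 0.3 * 0.9 * 0.05 = 0.0081
```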

  12. Dynamic Bayesian networks (DBNs) [figure: the A, B, C, D structure repeated across frames i-1, i, i+1] • BNs consisting of a structure that repeats an indefinite (i.e. dynamic) number of times • Useful for modeling time series (e.g. speech!)
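A sketch of what "unrolling" a repeated structure means, using the A/B/C/D template from the previous slide; the template encoding and the unroll helper are illustrative assumptions, not how GMTK actually specifies structures.

```python
# Per-frame template: each variable lists its parents; "A_prev" means A in the
# previous frame (the only cross-frame edge in this toy structure).
frame_template = {
    "A": ["A_prev"],
    "B": ["A"],
    "C": ["B"],
    "D": ["B", "C"],
}

def unroll(num_frames):
    """Repeat the template once per frame and return the (parent, child) edges."""
    edges = []
    for t in range(num_frames):
        for var, parents in frame_template.items():
            for p in parents:
                if p.endswith("_prev"):
                    if t > 0:                                   # no previous frame at t = 0
                        edges.append((f"{p[:-5]}[{t - 1}]", f"{var}[{t}]"))
                else:
                    edges.append((f"{p}[{t}]", f"{var}[{t}]"))
    return edges

print(unroll(3))   # same intra-frame edges in every frame, plus A[t-1] → A[t] links
```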

  13. Notation: Representing an HMM as a DBN [figure: a 3-state left-to-right HMM and the equivalent DBN over frames i-1, i, i+1, with variables qi and obsi and dependencies P(qi | qi-1) and P(obsi | qi); legend: variable, state, allowed dependency, allowed transition] • Transition probabilities P(qi | qi-1): from state 1: 0.7 (stay), 0.3 (to state 2); from state 2: 0.8 (stay), 0.2 (to state 3); from state 3: 1.0 (stay)
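The left-to-right transition structure above, written out and used to score one fixed state path; the observation scores are toy numbers standing in for log P(obs_i | q_i), so this is only a sketch of the bookkeeping, not a trained model.

```python
import math

# Transition table from the slide: P(q_i | q_{i-1}) for the 3-state left-to-right HMM.
trans = {1: {1: 0.7, 2: 0.3},
         2: {2: 0.8, 3: 0.2},
         3: {3: 1.0}}

def path_loglike(states, obs_loglikes):
    """log [ P(q_1) * prod_i P(q_i|q_{i-1}) * prod_i P(obs_i|q_i) ] for one fixed path.
    Assumes the path must start in state 1; obs_loglikes[i][q] stands in for log P(obs_i|q)."""
    if states[0] != 1:
        return float("-inf")
    logp = obs_loglikes[0][states[0]]
    for i in range(1, len(states)):
        p_trans = trans.get(states[i - 1], {}).get(states[i], 0.0)
        if p_trans == 0.0:
            return float("-inf")        # transition not allowed by the left-to-right topology
        logp += math.log(p_trans) + obs_loglikes[i][states[i]]
    return logp

# Toy 4-frame observation scores and the path 1 → 1 → 2 → 3
obs = [{1: -1.0, 2: -2.0, 3: -4.0}, {1: -1.5, 2: -1.0, 3: -3.0},
       {1: -3.0, 2: -1.0, 3: -2.0}, {1: -4.0, 2: -2.5, 3: -0.5}]
print(path_loglike([1, 1, 2, 3], obs))
```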

  14. Inference • Definition: computation of the probability of one subset of the variables given another subset • Inference is a subroutine of: • Viterbi decoding: q* = argmaxq p(q|obs) • Maximum-likelihood parameter estimation: θ* = argmaxθ p(obs|θ) • For WS06, all models were implemented, trained, and tested using the Graphical Models Toolkit (GMTK) [Bilmes 2002]
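To make the Viterbi line concrete, here is a generic textbook Viterbi decoder applied to the toy HMM from the previous slide; this is a plain illustration of q* = argmaxq p(q|obs), not GMTK's API or its inference code.

```python
import math

def viterbi(states, log_trans, log_init, obs_loglikes):
    """Return the most likely state path q* = argmax_q p(q | obs) for a discrete HMM.
    log_trans[(p, q)], log_init[q], and obs_loglikes[t][q] are log-probabilities."""
    # delta[q] = best log-score of any path that ends in state q at the current frame
    delta = {q: log_init.get(q, float("-inf")) + obs_loglikes[0][q] for q in states}
    backpointers = []
    for frame in obs_loglikes[1:]:
        new_delta, pointers = {}, {}
        for q in states:
            best_prev, best = max(((p, delta[p] + log_trans.get((p, q), float("-inf")))
                                   for p in states), key=lambda x: x[1])
            new_delta[q], pointers[q] = best + frame[q], best_prev
        delta, backpointers = new_delta, backpointers + [pointers]
    # Trace the best final state back through the stored pointers
    q = max(delta, key=delta.get)
    path = [q]
    for pointers in reversed(backpointers):
        q = pointers[q]
        path.append(q)
    return list(reversed(path))

# The 3-state left-to-right HMM from the previous slide, with toy observation scores
log_trans = {(1, 1): math.log(0.7), (1, 2): math.log(0.3),
             (2, 2): math.log(0.8), (2, 3): math.log(0.2), (3, 3): 0.0}
obs = [{1: -1.0, 2: -2.0, 3: -4.0}, {1: -3.0, 2: -1.0, 3: -2.0}, {1: -4.0, 2: -2.0, 3: -0.5}]
print(viterbi([1, 2, 3], log_trans, {1: 0.0}, obs))   # [1, 2, 3]
```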

  15. Outline • Preliminaries: Dynamic Bayesian networks, feature sets, data, baselines • Hybrid observation models • Tandem observation models • Multistream AF-based pronunciation models • AF-based audio-visual speech recognition • Analysis • BREAK • Structure learning • Student proposals • Summary, conclusions, future work

  16. Feature set for pronunciation modeling • Based on articulatory phonology [Browman & Goldstein 1990] • Assuming complete synchrony among the features within each of the lip, tongue, and glottis/velum groups, and limited substitutions, the features can be combined into 3 streams
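A small sketch of the "combine into 3 streams" step; the feature names, their groupings, and the example values below are assumptions made for illustration, since the slide's actual feature table is not reproduced here.

```python
# Illustrative grouping of articulatory features into the three streams mentioned on
# the slide (lips, tongue, glottis/velum); feature names/values are guesses, not the
# workshop's exact feature set.
STREAMS = {
    "L": ["lip-opening", "lip-rounding"],     # lip features
    "T": ["tongue-tip", "tongue-body"],       # tongue features
    "G": ["glottis", "velum"],                # glottis/velum features
}

def to_streams(feature_bundle):
    """Collapse a full articulatory feature bundle into 3 stream values by
    concatenating the (assumed synchronous) features within each stream."""
    return {stream: tuple(feature_bundle[f] for f in feats)
            for stream, feats in STREAMS.items()}

# Example bundle, roughly what an [m] might look like: closed lips, nasal, voiced
print(to_streams({"lip-opening": "closed", "lip-rounding": "none",
                  "tongue-tip": "neutral", "tongue-body": "neutral",
                  "glottis": "voiced", "velum": "open"}))
```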

  17. Feature set for observation modeling

  18. Outline • Preliminaries: Dynamic Bayesian networks, feature sets, data, baselines • Hybrid observation models • Tandem observation models • Multistream AF-based pronunciation models • AF-based audio-visual speech recognition • Analysis • BREAK • Structure learning • Student proposals • Summary, conclusions, future work

  19. Manual feature transcriptions • Purpose: Testing of AF classifiers, automatic alignments • Main transcription guideline: Should correspond to what we would like our AF classifiers to detect

  20. Manual feature transcriptions • Main transcription guideline: The output should correspond to what we would like our AF classifiers to detect • Details • 2 transcribers: phonetician (Lisa Lavoie), PhD student in speech group (Xuemin Chi) • 78 SVitchboard utterances • 9 utterances from Switchboard Transcription Project for comparison • Multipass transcription using WaveSurfer (KTH) • 1st pass: Phone-feature hybrid • 2nd pass: All-feature • 3rd pass: Discussion, error-correction • Some basic statistics • Overall speed ~1000 x real-time • High inter-transcriber agreement (93% avg. agreement, 85% avg. string accuracy) • First use to date of human-labeled articulatory data for classifier/recognizer testing
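For concreteness, a sketch of how the two agreement numbers above could be computed, under the assumption that "agreement" is frame-by-frame label match and "string accuracy" is 1 minus normalized edit distance over the label sequences; the exact definitions used at WS06 may differ.

```python
def frame_agreement(labels_a, labels_b):
    """Fraction of frames on which the two transcribers assign the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / max(len(labels_a), 1)

def string_accuracy(ref, hyp):
    """1 - (Levenshtein edit distance / len(ref)), analogous to word accuracy."""
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                              # deletion
                          d[i][j - 1] + 1,                              # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])) # substitution
    return 1.0 - d[len(ref)][len(hyp)] / max(len(ref), 1)

print(string_accuracy(["closed", "open", "open"], ["closed", "open", "wide"]))   # ~0.67
```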

  21. Outline • Preliminaries: Dynamic Bayesian networks, feature sets, data, baselines • Hybrid observation models • Tandem observation models • Multistream AF-based pronunciation models • AF-based audio-visual speech recognition • Analysis • BREAK • Structure learning • Student proposals • Summary, conclusions, future work

  22. SIMON: SVitchboard, baselines, gmtkTie; MLPs, hybrid models

  23. OZGUR & ARTHUR: Tandem models intro; our models & results

  24. KAREN, CHRIS, NASH, LISA, BRONWYN: Multistream AF-based pronunciation models

  25. Reminder: phone-based models [figure: the DBN unrolled over frame 0, frame i, and the last frame] • Variables and their values: word ∈ {“one”, “two”, ...}; wordTransition ∈ {0, 1}; subWordState ∈ {0, 1, 2, ...}; stateTransition ∈ {0, 1}; phoneState ∈ {w1, w2, w3, s1, s2, s3, ...}; observation • (Note: missing pronunciation variants)
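The per-frame variables of this phone-based baseline, written out as a small sketch; the value sets follow the slide, while the deterministic update shown is a simplified guess at the bookkeeping (the real structure also handles pronunciation variants and word transitions).

```python
# Variables appearing in each frame of the phone-based DBN and their value sets
# (taken from the slide; comments mark open-ended sets).
frame_variables = {
    "word":            {"one", "two"},         # open-ended vocabulary on the slide
    "wordTransition":  {0, 1},
    "subWordState":    {0, 1, 2},              # {0,1,2,...} on the slide; 3 states per phone here
    "stateTransition": {0, 1},
    "phoneState":      {"w1", "w2", "w3", "s1", "s2", "s3"},   # also open-ended
    "observation":     "acoustic feature vector",
}

def next_sub_word_state(sub_word_state, state_transition, word_transition):
    """Simplified deterministic update: reset at a word boundary, advance on a
    state transition, otherwise stay put (a guess at the real bookkeeping)."""
    if word_transition:
        return 0
    return sub_word_state + 1 if state_transition else sub_word_state
```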

  26. Multistream pronunciation models [figure: a two-stream DBN with shared word and wordTransition variables and, for each stream (L and T), its own subWordState, stateTransition, phoneState, and wordTransition variables, coupled by an async variable] (differences from actual model: 3rd feature stream, pronunciation variants, word transition bookkeeping)
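A sketch of the coupling that the async variable introduces between the two streams; the hard-limit check below is an illustrative simplification of what is, in the actual model, a distribution over the degree of asynchrony.

```python
def async_allowed(sub_word_state_L, sub_word_state_T, max_async=1):
    """Allow the lip (L) and tongue (T) streams' sub-word state indices to drift
    apart by at most max_async states (cf. the 1-state limit on the next slides)."""
    return abs(sub_word_state_L - sub_word_state_T) <= max_async

# In the DBN, the async variable conditions the two streams' state transitions so
# that configurations violating this constraint get zero (or low) probability.
print(async_allowed(2, 1), async_allowed(3, 1))   # True False
```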

  27. A first attempt: 1-state monofeat • Analogous to 1-state monophone with minimum duration of 3 frames • All three states of each phone map to the same feature values • INSERT phoneState2feat TABLE HERE • One state of asynchrony allowed between L and T, and between G and {L,T}

  28. Problems with 1-state monofeat • Much higher WER than monophone; possible causes: • By collapsing three states into one, we’ve lost sequencing information, which suggests further splitting of AF states • Asynchrony modeling is too simple • Asynchrony is likely dependent on context, e.g. part of speech, word/syllable position, speaking rate • Asynchrony often occurs across word boundaries (e.g. “greem beans”) • Asynchrony between streams may not be symmetric • Modeling substitutions as well as asynchrony may be crucial • Improper handling of silence • The synchronous model outperforms the asynchronous one, so the asynchronous states may be poorly trained, suggesting: • Training with different initializations • Tying states • We have addressed the above issues to varying extents; enter Chris, Nash, Lisa, Bronwyn...

  29. CHRIS, NASH, LISA, BRONWYN: Multistream AF-based pronunciation models

  30. Multistream pronunciation models: Summary • So far, our models perform worse than baseline monophone models • Much work to be done! • Better training of low-occupancy states • Improved tying strategies and tree clustering questions • Improved training schedules—e.g. incorporate Gaussian vanishing as well as splitting • Initialization from independent AF HMM alignments • Cross-word asynchrony, context-dependent asynchrony, substitution modeling have only just begun

  31. MARK & PARTHA: AVSR

  32. NASH, LISA, ARI: Analysis: Manual transcriber agreement & “canonicalness”, MLP performance analysis, recognizer error analysis, FAFA

  33. BREAK

  34. STEVE: Structure learning

  35. CHRIS, PARTHA, ARTHUR, BRONWYN? Proposals

  36. KAREN: Summary & conclusions

  37. Summary • Main results & take-home messages: • Tandem AF models: beat phone-based models, at least the monophone baseline; try them at home! • Hybrid AF models: TBA! • Multistream AF-based pronunciation models: close to phone-based; more work to be done • AVSR: TBA! • Embedded training: works? Obtained improved articulatory alignments over phone-based ones • Other contributions: • gmtkTie • Manual transcriptions • WaveSurfer analysis tool • New SVB baselines (monophone & triphone)

  38. This is just the beginning... • Further experimentation with WS06 models • How do tandem and hybrid results vary with amount of data? • Hybrid results with Fisher- vs. SVitchboard-trained MLPs • Better initializations for multistream pronunciation models • Application of AVSR models to more complex tasks: connected digits, larger vocabularies, more challenging data (e.g. AVICAR) • Fulfilling ze dream • More work on combining multistream pronunciation models with new observation models (tandem, hybrid) • More work on substitution modeling, cross-word asynchrony, context-dependence • Embedded training with fancier pronunciation models • More work on alignments • Improving alignments: Best model is not necessarily the lowest-WER one • Analysis • Learning pronunciation models from aligned data

  39. Acknowledgments Jeff Bilmes (UW), Nancy Chen (MIT), Xuemin Chi (MIT), Ghinwa Choueiter (MIT), Trevor Darrell (MIT), Edward Flemming (MIT), Eric Fosler-Lussier (OSU), Joe Frankel (Edinburgh/ICSI), Jim Glass (MIT), Katrin Kirchhoff (UW), Lisa Lavoie (Elizacorp, Emerson), Mathew Magimai (ICSI), Erik McDermott (NTT), Daryush Mehta (MIT), Florian Metze (Deutsche Telekom), Kate Saenko (MIT), Janet Slifka (MIT), Stefanie Shattuck-Hufnagel (MIT), Amar Subramanya (UW) NSF DARPA DoD CLSP

  40. EXTRA SLIDES
