7-Speech Recognition (Cont’d)
• HMM Calculating Approaches
• Neural Components
• Three Basic HMM Problems
• Viterbi Algorithm
• State Duration Modeling
• Training in HMM
Speech Recognition Concepts
Speech recognition is the inverse of speech synthesis.
[Figure: Speech synthesis pipeline: Text → NLP (understanding) → Speech Processing → Speech. Speech recognition pipeline: Speech → Speech Processing (phone sequence) → NLP → Text.]
Speech Recognition Approaches
• Bottom-Up Approach
• Top-Down Approach
• Blackboard Approach
Bottom-Up Approach
[Figure: Bottom-up processing chain: Signal Processing → Feature Extraction → Segmentation → Lexical Access → Recognized Utterance, with knowledge sources (voiced/unvoiced/silence detection, sound classification rules, phonotactic rules, language model) feeding the successive stages.]
Top-Down Approach
[Figure: Top-down architecture: Feature Analysis → Unit Matching System → Lexical Hypothesis → Syntactic Hypothesis → Semantic Hypothesis → Utterance Verifier/Matcher → Recognized Utterance, driven by an inventory of speech recognition units, a word dictionary, a grammar, and a task model.]
Blackboard Approach
[Figure: Acoustic, lexical, environmental, semantic, and syntactic processes, all communicating through a shared blackboard.]
[Figure: An overall view of a speech recognition system, combining top-down and bottom-up information. From Ladefoged 2001.]
Recognition Theories
• Articulatory-Based Recognition: uses the articulatory system for recognition; so far the most successful theory
• Auditory-Based Recognition: uses the auditory system for recognition
• Hybrid-Based Recognition: a hybrid of the above theories
• Motor Theory: models the intended gestures of the speaker
Recognition Problem
• We have a sequence of acoustic symbols and want to find the words that were expressed by the speaker.
• Solution: find the most probable word sequence given the acoustic symbols.
Recognition Problem
• A: acoustic symbol sequence
• W: word sequence
• We should find $\hat{W}$ so that
$$\hat{W} = \arg\max_W P(W \mid A)$$
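Applying Bayes rule (listed as a topic below) rewrites this in terms of the two quantities the rest of the section estimates, the acoustic model P(A|W) and the language model P(W); a standard derivation:

$$\hat{W} = \arg\max_W \frac{P(A \mid W)\,P(W)}{P(A)} = \arg\max_W P(A \mid W)\,P(W)$$

since $P(A)$ does not depend on $W$ and can be dropped from the maximization.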
Simple Language Model
Computing the probability $P(W) = P(w_1, w_2, \ldots, w_n)$ directly is very difficult and needs a very large database, so trigram and bigram models are used instead.
Simple Language Model (Cont’d)
Trigram: $P(W) \approx \prod_i P(w_i \mid w_{i-1}, w_{i-2})$
Bigram: $P(W) \approx \prod_i P(w_i \mid w_{i-1})$
Monogram: $P(W) \approx \prod_i P(w_i)$
Simple Language Model (Cont’d)
Computing method:
$$P(w_3 \mid w_1, w_2) = \frac{\text{number of occurrences of } w_1 w_2 w_3}{\text{total number of occurrences of } w_1 w_2}$$
Ad hoc method: when counts are sparse, smooth the estimate, e.g. by interpolating the trigram, bigram, and monogram frequencies with fixed weights.
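As a concrete illustration of the counting method above, here is a minimal sketch (not from the slides; the toy corpus and function names are invented for the example) that estimates trigram probabilities directly from count ratios:

```python
from collections import Counter

def train_trigram(sentences):
    """Estimate P(w3 | w1, w2) as count(w1 w2 w3) / count(w1 w2)."""
    tri_counts, history_counts = Counter(), Counter()
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - 2):
            history_counts[(words[i], words[i + 1])] += 1
            tri_counts[(words[i], words[i + 1], words[i + 2])] += 1

    def prob(w1, w2, w3):
        h = history_counts[(w1, w2)]
        return tri_counts[(w1, w2, w3)] / h if h else 0.0

    return prob

# Toy corpus: every estimate comes straight from the count ratio above.
p = train_trigram(["we like speech", "we like speech recognition"])
print(p("we", "like", "speech"))  # 2/2 = 1.0
```

For any trigram never seen in training, this raw estimate is zero, which is exactly the sparsity problem the ad hoc smoothing addresses.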
7-Speech Recognition
• Speech Recognition Concepts
• Speech Recognition Approaches
• Recognition Theories
• Bayes Rule
• Simple Language Model
• P(A|W) Network Types
P(A|W) Computing Approaches
• Dynamic Time Warping (DTW)
• Hidden Markov Model (HMM)
• Artificial Neural Network (ANN)
• Hybrid Systems
Dynamic Time Warping Method (DTW)
• To obtain a global distance between two speech patterns, a time alignment must be performed.
Example: a time alignment path between a template pattern “SPEECH” and a noisy input “SsPEEhH”.
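A minimal DTW sketch (mine, not the slides’), using characters in place of feature frames so the “SPEECH” vs. “SsPEEhH” example can be aligned directly; the unit-cost distance is an assumption for illustration:

```python
import numpy as np

def dtw_distance(x, y, dist=lambda a, b: 0 if a == b else 1):
    """Global distance between two patterns via dynamic time warping."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            # Extend the cheapest of the three allowed predecessor paths.
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match/substitution
    return D[n, m]

# Characters stand in for feature frames; in a real recognizer, dist
# would be a spectral distance between frames.
print(dtw_distance("SPEECH", "SsPEEhH"))  # 2.0: 's' and 'h' mismatch
```

In isolated word recognition, the input is scored against every stored template this way, and the word with the smallest global distance wins.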
Recognition Tasks
• Isolated Word Recognition (IWR) and Continuous Speech Recognition (CSR)
• Speaker-Dependent and Speaker-Independent
• Vocabulary size:
  • Small: fewer than 20 words
  • Medium: 100–1,000 words
  • Large: 1,000–10,000 words
  • Very Large: more than 10,000 words
Error Production Factors
• Prosody (recognition should be prosody-independent)
• Noise (noise should be suppressed)
• Spontaneous speech
Artificial Neural Network
[Figure: Simple computational element of a neural network: weighted inputs summed and passed through a nonlinearity.]
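A minimal sketch of the computational element in the figure; the sigmoid nonlinearity and threshold form are assumptions, since the slide’s figure is not reproduced here:

```python
import numpy as np

def neuron(x, w, theta):
    """Simple computational element: y = f(sum_i w_i * x_i - theta),
    with f taken here to be the sigmoid."""
    net = np.dot(w, x) - theta         # weighted sum minus threshold
    return 1.0 / (1.0 + np.exp(-net))  # squashing nonlinearity

print(neuron(np.array([1.0, 0.5]), np.array([0.8, -0.2]), theta=0.1))
```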
Artificial Neural Network (Cont’d)
• Neural Network Types
  • Perceptron
  • Time-Delay Neural Network (TDNN)
[Figure: Computational element of a Time-Delay Neural Network (TDNN).]
Artificial Neural Network (Cont’d)
[Figure: Single-layer perceptron.]
Artificial Neural Network (Cont’d)
[Figure: Three-layer perceptron.]
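Stacking such elements gives the perceptrons in the two figures above; a sketch of the forward pass, reading the three-layer figure as input, hidden, and output layers (the layer sizes and random weights are placeholders, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    """One fully connected layer of sigmoid elements."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Placeholder dimensions: 8 inputs, 6 hidden units, 3 output units.
W1, b1 = rng.normal(size=(6, 8)), np.zeros(6)   # input  -> hidden
W2, b2 = rng.normal(size=(3, 6)), np.zeros(3)   # hidden -> output

x = rng.normal(size=8)        # e.g. a frame of acoustic features
y = layer(layer(x, W1, b1), W2, b2)
print(y)                      # activations of the 3 output units
```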
Hybrid Methods
• Hybrid neural network and matched filter for recognition
[Figure: Speech → acoustic features → delays → pattern classifier → output units.]
Neural Network Properties
• The system is simple, but training is highly iterative
• Does not assume a specific structure
• Despite its simplicity, the results are good
• The training set is large, so training should be done offline
• Accuracy is relatively good
Hidden Markov Model
• Observations: $O_1, O_2, \ldots$
• States in time: $q_1, q_2, \ldots$
• All states: $s_1, s_2, \ldots$
[Figure: Two states $s_i$ and $s_j$ with a transition between them.]
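To make the notation concrete, a small two-state sketch (all numbers are invented for illustration); it also previews the first of the three basic HMM problems listed at the top, scoring an observation sequence with the forward algorithm:

```python
import numpy as np

# Notation from the slide: states s1..sN, state at time t is q_t,
# observations O1, O2, ...  The transition probability is
# a_ij = P(q_{t+1} = s_j | q_t = s_i).
A  = np.array([[0.7, 0.3],    # transitions out of s1
               [0.4, 0.6]])   # transitions out of s2
B  = np.array([[0.9, 0.1],    # P(observation symbol | s1)
               [0.2, 0.8]])   # P(observation symbol | s2)
pi = np.array([0.6, 0.4])     # initial state distribution

def sequence_likelihood(obs):
    """P(O | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]             # initialize at t = 1
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate and absorb O_t
    return alpha.sum()                    # sum over final states

print(sequence_likelihood([0, 1, 0]))
```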