Special Project (2): Feature Extraction, Acoustic Model Training, WFST Decoding. Prof. Lin-Shan Lee, TA: Yun-Chiao Li
Announcement • You will probably have many questions starting today • Post them on the ptt2 board "SpeechProj" • Your question will probably help others too
Linux Shell Script Basics • echo "Hello" (prints "Hello" on the screen) • a=ABC (assigns ABC to a) • echo $a (prints ABC on the screen) • b=$a.log (assigns ABC.log to b) • cat $b > testfile (writes the contents of the file ABC.log into testfile) • command -h (prints the command's help message); a runnable recap of these basics follows below
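For practice, here is a minimal, self-contained sketch of these basics (the file names are illustrative, not taken from the project scripts):

```bash
#!/bin/bash
# Variable assignment: no spaces around '='.
a=ABC
echo "$a"            # prints ABC

# Strings concatenate by juxtaposition.
b=$a.log             # b is now ABC.log
echo "$b"

# Redirect output into a file (creates or overwrites testfile).
echo "$b" > testfile
cat testfile         # prints ABC.log

# Most commands print usage information with -h or --help.
bash --help | head -n 3
```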
Feature Extraction 02.01.extract.feat.sh 02.02.convert.htk.feat.sh
02.02.convert.htk.feat.sh • The Hidden Markov Model Toolkit (HTK) is the toolkit we used previously • In this project, we learn Kaldi • Vulcan provides an interface to convert features from one format to the other • Type "bash 02.02.convert.htk.feat.sh" • The features will then be converted to HTK format
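As a rough idea of what these two scripts do internally, here is a hedged sketch using the standard Kaldi tools compute-mfcc-feats and copy-feats-to-htk; the actual scripts, paths, and options in this project may differ:

```bash
#!/bin/bash
# Sketch only: data/train/wav.scp, conf/mfcc.conf, and feat/ are assumed paths.

# 02.01: extract MFCC features from the waveforms listed in wav.scp.
compute-mfcc-feats --config=conf/mfcc.conf \
  scp:data/train/wav.scp ark,scp:feat/train.ark,feat/train.scp

# 02.02: convert the Kaldi archive to per-utterance HTK feature files.
copy-feats-to-htk --output-dir=feat/htk --output-ext=fea \
  scp:feat/train.scp
```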
Acoustic Model Training 03.01.mono0a.train.sh
Acoustic Model • Hidden Markov Model / Gaussian Mixture Model (HMM/GMM) • 3 states per model • Example (figure on slide, omitted here)
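For reference, the emission density of HMM state j under a GMM, in the standard HMM/GMM formulation (generic notation, not taken from the project scripts):

```latex
b_j(\mathbf{o}_t) = \sum_{m=1}^{M} c_{jm}\,
  \mathcal{N}\!\left(\mathbf{o}_t;\ \boldsymbol{\mu}_{jm},\ \boldsymbol{\Sigma}_{jm}\right),
\qquad \sum_{m=1}^{M} c_{jm} = 1
```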
Acoustic model training (1/2) • When training the acoustic model, we need labelled data: material/train.txt • 03.01.mono0a.train.sh does the following: • Since we lack alignment information at the start, it initializes the HMM by aligning frames equally across the states • Gaussian Mixture Model (GMM) statistics accumulation and parameter estimation • You might want to check "HMM Parameter Estimation" in the HTK Book, or "HMM Problem 3" in the course slides
Acoustic model training (2/2) • Refines the alignment at specific iterations (listed in the variable realign_iters); a sketch of the overall training loop follows below
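The following is a hedged sketch of what a monophone training script like 03.01.mono0a.train.sh typically does, using standard Kaldi command-line tools; the exact paths, options, and iteration counts are illustrative assumptions, not the script's actual contents:

```bash
#!/bin/bash
# Sketch of Kaldi monophone HMM/GMM training (paths/counts illustrative).

# Initialize a monophone model and tree from the HMM topology (39-dim feats).
gmm-init-mono data/lang/topo 39 exp/mono/0.mdl exp/mono/tree

# Compile one training graph per utterance from the transcripts.
compile-train-graphs exp/mono/tree exp/mono/0.mdl data/lang/L.fst \
  ark:train.tra ark:exp/mono/graphs.fsts

# First pass: no alignments yet, so align frames equally across the states.
align-equal-compiled ark:exp/mono/graphs.fsts scp:feat/train.scp \
  ark:exp/mono/0.ali

realign_iters="1 2 3 4 5 6 8 10"   # iterations at which to realign
for x in $(seq 1 10); do
  # Refine the alignment with the current model on selected iterations.
  if echo "$realign_iters" | grep -wq "$x"; then
    gmm-align-compiled exp/mono/$((x-1)).mdl ark:exp/mono/graphs.fsts \
      scp:feat/train.scp ark:exp/mono/0.ali
  fi
  # Accumulate GMM statistics from the alignments, then re-estimate.
  gmm-acc-stats-ali exp/mono/$((x-1)).mdl scp:feat/train.scp \
    ark:exp/mono/0.ali exp/mono/$x.acc
  gmm-est exp/mono/$((x-1)).mdl exp/mono/$x.acc exp/mono/$x.mdl
done
```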
FST • An FSA "accepts" a set of strings • View an FSA as a representation of a (possibly infinite) set of strings • Start state(s) are bold; final/accepting states have an extra circle • This example represents the infinite set {ab, aab, aaab, …}; its OpenFst text form is sketched below
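To make this concrete, here is that acceptor for {ab, aab, aaab, …} (the language a+ b) in OpenFst's text format; fstcompile and the symbol-table file name are the standard OpenFst workflow, not this project's scripts:

```bash
# a_plus_b.txt: each arc line is "src dest label"; a bare line is a final state.
cat > a_plus_b.txt <<'EOF'
0 1 a
1 1 a
1 2 b
2
EOF
# Compile, assuming a symbol table isyms.txt mapping a and b to integers.
fstcompile --acceptor --isymbols=isyms.txt a_plus_b.txt a_plus_b.fst
```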
WFST • Like a normal FSA, but with costs on the arcs and final states • Note: the cost comes after "/"; for a final state, "2/1" means final-cost 1 on state 2 • This example maps ab to cost 3 (= 1 + 1 + 1, two arc costs plus the final cost) and all other strings to infinite cost, i.e. they are not accepted; see the text-format sketch below
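In OpenFst text format, a weighted acceptor matching that description could be written as follows (a sketch; the weight column uses the tropical semiring, OpenFst's default, where non-accepted strings implicitly have infinite cost):

```bash
# wfsa.txt: maps "ab" to cost 3 = 1 (arc) + 1 (arc) + 1 (final cost).
# Arc lines: "src dest label weight"; final line: "state final-cost".
cat > wfsa.txt <<'EOF'
0 1 a 1
1 2 b 1
2 1
EOF
fstcompile --acceptor --isymbols=isyms.txt wfsa.txt wfsa.fst
```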
WFST Composition • Notation: C = A ∘ B means C is A composed with B
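With the OpenFst command-line tools, composition is a single command (a.fst and b.fst are placeholder file names):

```bash
# C = A ∘ B: output labels of A are matched against input labels of B.
fstcompose a.fst b.fst c.fst
```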
WFST Components • HCLG = H ∘ C ∘ L ∘ G • H: HMM structure • C: context-dependent relabeling • L: lexicon • G: language model acceptor
WFST Components (figure showing example transducers): H (HMM), L (Lexicon), G (Language Model). Where is C (context-dependent relabeling)?
Training WFST 03.02.mono0a.mkgraph.sh
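Internally, a graph-building script like 03.02.mono0a.mkgraph.sh composes and optimizes these components. The sketch below shows the general shape of that pipeline with standard Kaldi/OpenFst tools; the options and intermediate steps are simplified assumptions, not the script's exact contents:

```bash
#!/bin/bash
# Sketch of HCLG graph construction (simplified; paths are illustrative).

# Compose the lexicon with the language model, then determinize/minimize.
fsttablecompose data/lang/L.fst data/lang/G.fst |
  fstdeterminizestar --use-log=true |
  fstminimizeencoded > exp/mono/LG.fst

# For a monophone system, C (context-dependency) is essentially trivial.
# The full script would then compose C and H on top of LG, e.g. with
# fstcomposecontext, make-h-transducer, and add-self-loops.
```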
Decoding WFST 03.03.mono0a.fst.sh
Decoding WFST (1/2) • From HCLG we have the mapping from HMM states to words • We need another WFST, U (an acceptor built from the utterance's acoustic scores) • Compose U with HCLG, i.e., S = U ∘ HCLG • Searching for the best path(s) in S gives the recognition result; a decoding command sketch follows below
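In practice this search is run by a decoder binary. Here is a hedged sketch using Kaldi's gmm-decode-faster; the file names and option values are illustrative assumptions:

```bash
# Decode the test features against the HCLG graph with a trained GMM model.
gmm-decode-faster --beam=13.0 --acoustic-scale=0.0833 \
  --word-symbol-table=data/lang/words.txt \
  exp/mono/final.mdl exp/mono/HCLG.fst \
  scp:feat/test.scp ark,t:exp/mono/test.tra
```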
Decoding WFST (2/2) • During decoding, we need to specify the relative weights of the acoustic model and the language model • Split the corpus into Train, Dev, and Test sets • The Training set is used to train the acoustic model • Try each acoustic-model weight on the Dev set, and keep the best one • The Test set is used to measure our final performance (Word Error Rate, WER); a sketch of such a weight sweep follows below
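A hedged sketch of tuning the acoustic scale on the Dev set (compute-wer is a standard Kaldi tool; the loop structure, paths, and scale values here are assumptions):

```bash
#!/bin/bash
# Try several acoustic scales on the Dev set and report the WER for each.
for scale in 0.05 0.0667 0.0833 0.1 0.125; do
  gmm-decode-faster --acoustic-scale=$scale \
    --word-symbol-table=data/lang/words.txt \
    exp/mono/final.mdl exp/mono/HCLG.fst \
    scp:feat/dev.scp ark,t:exp/mono/dev_$scale.tra
  # Compare the hypotheses against the reference transcripts.
  compute-wer --text --mode=present \
    ark:material/dev.txt ark:exp/mono/dev_$scale.tra
done
```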
Homework 02.01~03.04.sh
To Do • Copy the data into your own directory • cp -r /share/ • Execute the following commands: • bash 01.format.data.sh • bash 02.01.extract.feat.sh • bash 02.02.convert.htk.feat.sh • … • Observe the output and report your findings • You might want to check the HTK Book for acoustic model training
Some Helpful References • 「使用加權有限狀態轉換器的基於混合詞與次詞以文字及語音指令偵測口語詞彙」, Chapter 3 (approximate English title: "Spoken vocabulary detection with text and spoken queries based on hybrid word/subword units using weighted finite-state transducers") • https://www.dropbox.com/s/dsaqh6xa9dp3dzw/wfst_thesis.pdf