Survey: ICASSP 2007 Discriminative Training Reporter: Shih-Hung Liu 2007/04/30
References • Large-Margin Minimum Classification Error Training for Large-Scale Speech Recognition Tasks • Dong Yu, Li Deng, Xiaodong He, Alex Acero, Microsoft • Approximate Test Risk Minimization Through Soft Margin Estimation • Jinyu Li, Sabato Marco Siniscalchi, Chin-Hui Lee, Georgia Tech • Unsupervised Training for Mandarin Broadcast News and Conversation Transcription • L. Wang, M.J.F. Gales, P.C. Woodland, Cambridge • A New Minimum Divergence Approach to Discriminative Training • J. Du, P. Liu, H. Jiang, F.K. Soong, R.H. Wang, Microsoft Research Asia
LM-MCE • The basic idea of LM-MCE is to include the margin in the optimization criterion along with the smoothed empirical error rate, so that correctly classified samples are pushed well away from the decision boundary • To incorporate the margin successfully, the authors propose increasing the discriminative margin gradually over the iterations
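A minimal sketch of how the margin can enter the smoothed error, as I understand the idea; the symbols (misclassification measure d_r, sigmoid slope alpha, iteration-dependent margin m_I) are my notation, not quoted from the paper:

```latex
% Sketch (assumed notation): margin-shifted MCE loss.
% d_r is the usual MCE misclassification measure for utterance r,
% \alpha the sigmoid slope, and m_I a margin that grows with iteration I.
\begin{align}
  \ell_r(\Lambda) &= \frac{1}{1 + e^{-\alpha\,(d_r(X_r;\Lambda) + m_I)}}, \\
  L(\Lambda)      &= \frac{1}{R}\sum_{r=1}^{R} \ell_r(\Lambda),
  \qquad m_1 < m_2 < \cdots \ \text{(margin increased over iterations)}.
\end{align}
```

The point of the shift is that a sample must clear the decision boundary by at least m_I before its loss becomes small, and m_I is grown across training iterations rather than fixed from the start.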
LM-MCE • Parzen-window view: the error-smoothing function defines a symmetric kernel, so the smoothed empirical error rate can be read as a Parzen-window estimate of the margin-free Bayes risk
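A hedged reconstruction of the Parzen-window argument the slide alludes to: smoothing the 0-1 error with a symmetric kernel k recovers the sigmoid loss, so the smoothed empirical error is a kernel estimate of the (margin-free) Bayes risk. The notation is assumed:

```latex
% Sketch (assumed notation): the 0-1 error is smoothed by a symmetric
% kernel k; with the logistic kernel this integral is exactly the sigmoid
% loss \ell, so the smoothed empirical error estimates the Bayes risk.
\begin{align}
  \hat{R}(\Lambda)
    &= \frac{1}{R}\sum_{r=1}^{R} \ell\bigl(d_r(X_r;\Lambda)\bigr),
  \qquad
  \ell(d) = \int_{-\infty}^{d} k(z)\,\mathrm{d}z, \\
  k(z) &= \frac{\alpha\, e^{-\alpha z}}{\bigl(1 + e^{-\alpha z}\bigr)^{2}}
  \qquad \text{(symmetric in } z\text{; } \ell \text{ is then the sigmoid)}.
\end{align}
```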
Soft Margin Estimation • Test risk bound expressed as a sum of an empirical risk and a function of VC dimension • Approximate test risk minimization • Define loss function
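A hedged sketch of the SME objective as I recall it from the soft-margin estimation line of work; rho is the soft margin, d(x_i) a separation measure for sample i, and lambda balances the two terms (all notation assumed, not quoted from the paper):

```latex
% Sketch (assumed notation): SME trades a margin-related generalization
% term against a hinge-style empirical risk over the N training samples.
\begin{align}
  \min_{\Lambda,\rho}\;
    \frac{\lambda}{\rho}
    + \frac{1}{N}\sum_{i=1}^{N} \ell(x_i;\Lambda),
  \qquad
  \ell(x_i;\Lambda) =
    \bigl(\rho - d(x_i;\Lambda)\bigr)\,
    \mathbb{1}\bigl[d(x_i;\Lambda) \le \rho\bigr].
\end{align}
```

The lambda/rho term stands in for the VC-dimension part of the test-risk bound (a larger margin lowers it), while the hinge-style sum approximates the empirical risk.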
Unsupervised Training • Segmentation: • First, advert removal is run. Here the arithmetic harmonic sphericity distance is used to detect repeated blocks of audio data, for example jingles or commercials. • Acoustic segmentation is then performed, and the data is split into wide-band and narrow-band speech. • Sections of music are discarded. • Finally, gender detection and speaker clustering are run.
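A hypothetical sketch of that segmentation pipeline as a plain control-flow skeleton; the stage callables and the Segment type are placeholders introduced for illustration, not an actual toolkit API:

```python
from typing import Callable, Dict, List

# Hypothetical sketch of the segmentation stages listed above. Each stage is
# passed in as a callable so only the control flow is shown; none of these
# names correspond to a real toolkit API.
Segment = Dict  # e.g. {"kind": "speech" or "music", "band": "WB" or "NB", ...}

def segment_show(audio,
                 remove_adverts: Callable,    # arithmetic harmonic sphericity distance
                 acoustic_segment: Callable,
                 split_bandwidth: Callable,   # wide-band vs. narrow-band split
                 detect_gender: Callable,
                 cluster_speakers: Callable) -> List[Segment]:
    """Run the segmentation stages in the order given on the slide."""
    audio = remove_adverts(audio)                              # drop jingles / commercials
    segments = acoustic_segment(audio)                         # raw acoustic segmentation
    segments = split_bandwidth(segments)                       # tag each segment WB / NB
    segments = [s for s in segments if s["kind"] != "music"]   # discard music sections
    segments = detect_gender(segments)                         # per-segment gender labels
    return cluster_speakers(segments)                          # speaker clustering
```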
Unsupervised Training • Transcription generation: • Initial transcriptions are generated using good acoustic models, MPE-trained in this work • P1: gender-independent models generate initial transcriptions using a trigram language model and relatively tight beamwidths • P2: the 1-best hypothesis from the P1 stage is used to estimate adaptation transforms; here least-squares linear regression and diagonal variance transforms are estimated. Using the adapted models, lattices are generated with a trigram language model and then rescored with a 4-gram language model.
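Similarly, a hypothetical sketch of the two-pass (P1/P2) transcription-generation flow; again all stage names are placeholders for whichever decoder and adaptation tools are actually used:

```python
from typing import Callable

# Hypothetical sketch of the two-pass transcription-generation scheme above.
# Only the P1 -> P2 control flow is shown; the stage callables are placeholders.
def generate_transcriptions(segments,
                            decode_p1: Callable,            # GI models, trigram LM, tight beams
                            estimate_transforms: Callable,  # LSLR + diagonal variance transforms
                            decode_lattices: Callable,      # adapted models, trigram LM
                            rescore_4gram: Callable):       # 4-gram LM rescoring
    hyp_p1 = decode_p1(segments)                        # P1: initial 1-best transcriptions
    transforms = estimate_transforms(segments, hyp_p1)  # adaptation transforms from P1 output
    lattices = decode_lattices(segments, transforms)    # P2: lattices with adapted models
    return rescore_4gram(lattices)                      # final hypotheses after rescoring
```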
A New Minimum Divergence Approach • MD has the following advantages: • 1. It has higher resolution than any label-comparison-based error definition. • 2. It is a general solution that can deal with any kind of model and phone set. • As a result, MD outperforms other DT criteria on several tasks • Notably, in MD the accuracy term is itself a function of the model parameters, so it can also be taken into account in the optimization process
A New Minimum Divergence Approach • MD criterion • Joint optimization • The auxiliary function used for the joint optimization satisfies the conditions of a weak-sense auxiliary function
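My reconstruction of the general shape of the MD criterion, written in MPE/MWE style with the label-comparison accuracy replaced by a negative KL divergence; the notation is assumed rather than copied from the paper:

```latex
% Sketch (assumed notation): MD criterion in MPE/MWE form, with the
% accuracy term replaced by a negative divergence between the reference
% hypothesis W_r and each competing hypothesis W for utterance r.
\begin{align}
  F_{\mathrm{MD}}(\Lambda)
    = \sum_{r} \sum_{W} P_{\Lambda}\!\left(W \mid O_r\right)
      \bigl(-D\bigl(W_r \,\|\, W ; \Lambda\bigr)\bigr)
\end{align}
```

Because both the hypothesis posterior and the divergence depend on the model parameters, the two parts are optimized jointly, which is where the weak-sense auxiliary function argument comes in.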
A New Minimum Divergence Approach • Under the state-frame independence assumption, the divergence decomposes into a sum of per-frame terms
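Under that assumption the sequence-level divergence reduces to a per-frame sum of KL divergences between the aligned state output distributions; for single Gaussians the frame term has the usual closed form (notation assumed):

```latex
% Sketch (assumed notation): frame-level decomposition of the divergence,
% plus the standard closed-form KL divergence between two Gaussians.
\begin{align}
  D\bigl(W_r \,\|\, W\bigr)
    &\approx \sum_{t} D\bigl(p(\mathbf{o}_t \mid s^{r}_{t}) \,\|\,
                              p(\mathbf{o}_t \mid s_{t})\bigr), \\
  D\bigl(\mathcal{N}(\mu_1,\Sigma_1)\,\|\,\mathcal{N}(\mu_2,\Sigma_2)\bigr)
    &= \tfrac{1}{2}\Bigl[\operatorname{tr}\bigl(\Sigma_2^{-1}\Sigma_1\bigr)
       + (\mu_2-\mu_1)^{\!\top}\Sigma_2^{-1}(\mu_2-\mu_1)
       - d + \ln\frac{|\Sigma_2|}{|\Sigma_1|}\Bigr].
\end{align}
```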
A New Minimum Divergence Approach • Statistics for the EBW (extended Baum-Welch) update
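For reference, the generic EBW mean and variance update that such statistics feed; this is the standard extended Baum-Welch form, not the paper's exact equations:

```latex
% Standard EBW update for Gaussian means and (diagonal) variances.
% \gamma and \theta are occupancy counts and weighted data sums from the
% numerator (num) and denominator (den) lattices; D_{jm} is the smoothing
% constant for mixture component m of state j.
\begin{align}
  \theta_{jm}(\mathbf{o}) &= \sum_t \gamma_{jm}(t)\,\mathbf{o}_t, \qquad
  \theta_{jm}(\mathbf{o}^2) = \sum_t \gamma_{jm}(t)\,\mathbf{o}_t^2, \qquad
  \gamma_{jm} = \sum_t \gamma_{jm}(t), \\
  \hat{\mu}_{jm} &=
    \frac{\theta^{\mathrm{num}}_{jm}(\mathbf{o}) - \theta^{\mathrm{den}}_{jm}(\mathbf{o})
          + D_{jm}\,\mu_{jm}}
         {\gamma^{\mathrm{num}}_{jm} - \gamma^{\mathrm{den}}_{jm} + D_{jm}}, \\
  \hat{\sigma}^2_{jm} &=
    \frac{\theta^{\mathrm{num}}_{jm}(\mathbf{o}^2) - \theta^{\mathrm{den}}_{jm}(\mathbf{o}^2)
          + D_{jm}\,\bigl(\sigma^2_{jm} + \mu^2_{jm}\bigr)}
         {\gamma^{\mathrm{num}}_{jm} - \gamma^{\mathrm{den}}_{jm} + D_{jm}}
    \;-\; \hat{\mu}^2_{jm}.
\end{align}
```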