Component Score Weighting for GMM based Text-Independent Speaker Verification
Liang Lu, luliang07@gmail.com
SNLP Unit, France Telecom R&D Beijing
2008-01-21
Outline • Introduction • Conventional LLR and Motivation for detailed score processing • Component Score Weighting • Experimental Results • Conclusion
Introduction State of the art GMM-UBM framework • GMM based model construction • Log-likelihood Ratio (LLR) based decision making • Score normalisation (TNorm, HNorm, etc.) for robustness
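As a concrete illustration of this scoring step, here is a minimal sketch of frame-level GMM-UBM LLR computation, assuming diagonal-covariance models stored as (weights, means, variances) arrays; the function names and data layout are illustrative and not taken from the system described in these slides.

```python
# Minimal GMM-UBM LLR scoring sketch for diagonal-covariance GMMs.
# The (weights, means, variances) layout is an assumption for illustration.
import numpy as np
from scipy.special import logsumexp

def component_log_likelihoods(x, weights, means, variances):
    """log w_k + log N(x; mean_k, diag(var_k)) for every component k."""
    log_gauss = -0.5 * np.sum(
        np.log(2.0 * np.pi * variances) + (x - means) ** 2 / variances, axis=1
    )
    return np.log(weights) + log_gauss          # shape: (num_components,)

def llr(frames, spk, ubm):
    """Average per-frame log-likelihood ratio: log p(x|spk) - log p(x|UBM)."""
    scores = [
        logsumexp(component_log_likelihoods(x, *spk))
        - logsumexp(component_log_likelihoods(x, *ubm))
        for x in frames
    ]
    return float(np.mean(scores))
```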
Introduction Major challenges • Limited data for speaker model training • Mismatch between training and testing data
Motivation for Component Score Weighting Motivation • The insufficiency of training data and the mismatch between training and testing conditions make the mixtures of the GMM differ in discriminative capability • The LLR simply sums the score of each mixture without considering its reliability • Would it help if the LLR took the discriminative capability of each mixture into account? Question If it does, how can we exploit the discriminative capabilities of the Gaussian mixture components?
Component Score Weighting Our Method

First, scatter the per-frame LLR to each Gaussian mixture:

$$\Lambda_k(x_t) = \log\big(w_k\, p_k(x_t \mid \lambda_{\mathrm{spk}})\big) - \log\big(w_k\, p_k(x_t \mid \lambda_{\mathrm{ubm}})\big), \quad k = 1, \dots, M$$

where the $k^{*}$-th mixture is dominant for frame $x_t$, namely

$$k^{*} = \arg\max_{k}\, w_k\, p_k(x_t \mid \lambda_{\mathrm{ubm}})$$

Let

$$s_d(x_t) = \Lambda_{k^{*}}(x_t), \qquad s_r(x_t) = \Lambda(x_t) - s_d(x_t)$$

with $\Lambda(x_t) = \log p(x_t \mid \lambda_{\mathrm{spk}}) - \log p(x_t \mid \lambda_{\mathrm{ubm}})$ the full frame-level LLR; we call $s_d(x_t)$ the dominant score and $s_r(x_t)$ the residual score.
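This decomposition can be sketched as follows, reusing component_log_likelihoods and logsumexp from the sketch above; picking the dominant mixture as the top UBM component for the frame is an assumption, since the slide's original symbols are not preserved in this transcript.

```python
# Per-frame split into dominant and residual scores (sketch).
def split_frame_score(x, spk, ubm):
    spk_comp = component_log_likelihoods(x, *spk)   # log w_k p_k(x | spk)
    ubm_comp = component_log_likelihoods(x, *ubm)   # log w_k p_k(x | UBM)
    k = int(np.argmax(ubm_comp))                    # dominant mixture k* (assumed rule)
    frame_llr = logsumexp(spk_comp) - logsumexp(ubm_comp)
    dominant = spk_comp[k] - ubm_comp[k]            # dominant score s_d
    residual = frame_llr - dominant                 # residual score s_r
    return dominant, residual
```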
Component Score Weighting Extend the original LLR

After this decomposition, the original LLR is split into two score series, the dominant score series and the residual score series.

Original:

$$\mathrm{LLR}(X) = \frac{1}{T}\sum_{t=1}^{T}\big[s_d(x_t) + s_r(x_t)\big]$$

If we consider the discriminative capacity of each Gaussian mixture:

Extended:

$$\mathrm{LLR}_w(X) = \frac{1}{T}\sum_{t=1}^{T}\big[f\big(s_d(x_t)\big)\, s_d(x_t) + g\big(s_r(x_t)\big)\, s_r(x_t)\big]$$

where $f \equiv g \equiv 1$ in the original LLR.
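A minimal sketch of the two utterance-level scores, with f and g as placeholder weighting functions; passing constant-one weights reduces the extended form to the original LLR, as noted above.

```python
# Original vs. extended utterance score over the two score series (sketch).
import numpy as np

def original_llr(dominant, residual):
    return float(np.mean(np.asarray(dominant) + np.asarray(residual)))

def extended_llr(dominant, residual, f, g):
    d, r = np.asarray(dominant), np.asarray(residual)
    return float(np.mean(f(d) * d + g(r) * r))

# With f = g = 1 the two scores coincide:
# extended_llr(d, r, f=np.ones_like, g=np.ones_like) == original_llr(d, r)
```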
Component Score Weighting Now the question is: how can we know the discriminative capability of each Gaussian mixture, and what should the weighting function $f(\cdot)$ be? Our assumption: we believe that high dominant scores have better discriminative capability and should be highlighted.
Component Score Weighting Why the high dominant scores? • If the test utterance is from the target speaker, more components of the GMM should obtain high values compared with the UBM. • If the utterance is from an impostor, the components of the GMM hardly score higher than the UBM. • If the test utterance is from the target speaker, the low-valued components of the GMM arise because those mixtures are not well trained or because a mismatch exists between training and testing data.
Component Score Weighting We simply used an exponential function as the weighting function,

$$f\big(s_d(x_t)\big) = \exp\big(\alpha \cdot s_d(x_t)\big)$$

The residual scores have little importance, so we finally ignore them. The final LLR score is as follows:

$$\mathrm{LLR}_w(X) = \frac{1}{T}\sum_{t=1}^{T} \exp\big(\alpha \cdot s_d(x_t)\big)\, s_d(x_t)$$

Low dominant scores are restrained, while high dominant scores are emphasized.
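A sketch of this final score, assuming the exponential weight takes the form exp(alpha * s_d) with a tunable constant alpha; the exact parametrisation and the value of alpha are not given on the slide.

```python
# Final score: exponential weighting of dominant scores, residuals dropped (sketch).
import numpy as np

def weighted_llr(dominant_scores, alpha=1.0):
    s = np.asarray(dominant_scores, dtype=float)
    w = np.exp(alpha * s)     # < 1 for low scores (restrained), > 1 for high (emphasized)
    return float(np.mean(w * s))
```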
Experimental Results Table: Results for the GMM baseline and GMM with Component Score Weighting, with TNorm. Experiments are performed on the 1conv4w-1conv4w task of the NIST 2006 SRE corpus.
Conclusion • Splitting the LLR score and considering the discriminative capacity of the Gaussian mixtures helps cope with insufficient training data and with the mismatch between training and testing conditions. • The score weighting function should be consistent with the component score distribution and discriminative capacity. • The exponential weighting function used in this investigation is not universal and may not be optimal. More work is needed to find an optimal weighting function.