ZRE 2009/10 introductory talk. Honza Černocký, Speech@FIT, Brno University of Technology, Czech Republic. ZRE, 8.2.2010
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
Where is Brno?
The place • Brno University of Technology – 2nd largest technical university in the Czech Republic (~2500 staff, ~18000 students). • Faculty of Information Technology (FIT) – its youngest faculty (created in January 2002). • Reconstruction of the campus finished in November 2007 – now a beautiful place marrying an old Carthusian monastery with modern buildings.
Department of Computer Graphics and Multimedia • Video/image processing • Speech processing • Knowledge engineering and natural language processing • Medical visualization and 3D modeling http://www.fit.vutbr.cz/units/UPGM/
Speech@FIT • University research group established in 1997 • 20 people in 2009 (faculty, researchers, students, support staff). • Also provides education within the Department of Computer Graphics and Multimedia. • Cooperating with EU and US universities and companies. • Supported by EC, US and national projects. Speech@FIT’s goal: high-profile research in speech theory and algorithms
Key people • Directors: • Dr. Jan “Honza” Černocký – executive director • Prof. Hynek Heřmanský (Johns Hopkins University, USA) – advisor and guru • Dr. Lukáš Burget – scientific director • Sub-group leaders: • Petr Schwarz – phonemes, implementation • Pavel “Pája” Matějka – SpeakerID, LanguageID • Pavel Smrž – NLP and semantic Web
The steel and the soft … Steel • 3 IBM Blade centers with 42 IBM Blade servers, each with 2 dual-core CPUs • Another ~120 computers in classrooms • >16 TB of disk space • Professional and friendly administration Soft • Common: HTK, Matlab, QuickNet, SGE • Own SW: STK, BS-CORE, BS-API
Speech@FIT funding • Faculty (faculty members and faculty-wide research funds) • EU projects (FP[4567]) • Past: SpeechDat-E, SpeeCon, M4, AMI, CareTaker. • Running: AMIDA, MOBIO, weKnowIt. • US funding – Air Force’s EOARD • Local funding agencies – Grant Agency of the Czech Republic, Ministry of Education, Ministry of Trade and Commerce • Czech “force” ministries – Defense, Interior • Industrial contracts • Spin-off – Phonexia, Ltd.
Phonexia Ltd. • Company created in 2006 by 6 Speech@FIT members • Closely cooperating with the research group • Key people • Dr. Pavel Matějka, CEO • Dr. Petr Schwarz, CTO • Igor Szöke, CFO • Dr. Lukáš Burget, research coordinator • Dr. Jan Černocký, university relations • Tomáš Kašpárek, hardware architect Phonexia’s goal: bringing mature technologies to the market, especially in the security/defense sector
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
Needle in a haystack • Speech is the most important modality of human-human communication (~80% of information) … and criminals and terrorists also communicate by speech. • Speech is easy to acquire in both civilian and intelligence/defense scenarios. • It is more difficult to find what we are looking for. • Typically done by human experts, but always count on: • Limited personnel • Limited budget • Not enough languages spoken • Insufficient security clearances Technologies of speech processing are not almighty, but they can help narrow the search space.
“Speech recognition” GOAL: automatically extract the information transmitted in the speech signal [diagram: one speech input feeding five technologies and their outputs] • Speaker recognition → speaker name: “John Doe” • Gender recognition → gender: male or female • Language recognition → language: English/German/?? • Speech recognition → what was said: “Hallo Crete!” • Keyword spotting → “Crete” spotted
Focus on evaluations • „I'm better than the other guys“ – not relevant unless everyone uses the same data and evaluation metrics. • NIST – US government agency, http://www.nist.gov/speech • Regular benchmark campaigns – evaluations – of speech technologies. • All participants get the same data and the same limited time to process them and send results to NIST => objective comparison. • The results and details of the systems are discussed at NIST workshops. • Speech@FIT participates extensively in NIST evaluations: • Transcription 2005, 2006, 2007, 2009 • Language ID 2003, 2005, 2007, 2009 • Speaker Verification 1998, 1999, 2006, 2008 • Spoken term detection 2006 • Why are we doing this? • We believe that evaluations really advance the state of the art • We do not want to waste our time on useless work …
What are we really doing? Following the recipe from any pattern-recognition book:
And what is the result? Something you’ve probably already seen: [block diagram: input → feature extraction → evaluation of probabilities or likelihoods (against models) → “decoding” → decision]
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
The simplest example … GID (Gender Identification) • The easiest speech application to deploy … • … and the most accurate (>96% on challenging channels) • Limits the search space by 50%
So how is Gender-ID done? [pipeline: input → MFCC features → evaluation of GMM likelihoods (Gaussian Mixture Models for boys and girls) → decision: male/female]
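As a rough sketch of this pipeline, assuming scikit-learn and precomputed MFCC matrices (one row per frame); the training-array names are hypothetical:

```python
# Minimal Gender-ID sketch: one GMM per gender, scored frame by frame.
# Assumes MFCC features are NumPy arrays of shape (frames, coefficients).
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=32):
    """Fit one GMM on the pooled training frames of one class."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(features)
    return gmm

def classify(test_features, gmm_male, gmm_female):
    """Sum per-frame log-likelihoods and pick the higher-scoring gender."""
    ll_male = gmm_male.score_samples(test_features).sum()
    ll_female = gmm_female.score_samples(test_features).sum()
    return "male" if ll_male > ll_female else "female"

# Usage (train_male, train_female, test_mfcc are hypothetical MFCC arrays):
# gm, gf = train_gmm(train_male), train_gmm(train_female)
# print(classify(test_mfcc, gm, gf))
```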
Features – Mel Frequency Cepstral Coefficients • The signal is not stationary, so it is analyzed in short frames • And human hearing is not linear, so the filter bank is spaced according to the mel scale
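A hedged example of extracting such features, assuming the librosa library and an 8 kHz file speech.wav:

```python
# MFCC extraction sketch: short frames, mel-spaced filter bank, cepstrum.
import librosa

y, sr = librosa.load("speech.wav", sr=8000)            # telephone-band audio
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,     # 13 cepstral coefficients
                            n_fft=200, hop_length=80)  # 25 ms frames, 10 ms shift
print(mfcc.shape)  # (13, number_of_frames); transpose for the GMM sketch above
```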
The evaluation of likelihoods: GMM
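For reference, the GMM likelihood has the standard form: a weighted sum of Gaussians per frame, summed over frames in the log domain:

```latex
p(\mathbf{x}_t \mid \lambda) \;=\; \sum_{m=1}^{M} w_m\,
    \mathcal{N}(\mathbf{x}_t;\, \boldsymbol{\mu}_m, \boldsymbol{\Sigma}_m),
\qquad
\log p(X \mid \lambda) \;=\; \sum_{t=1}^{T} \log p(\mathbf{x}_t \mid \lambda)
```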
The decision – Bayes rule. GID DEMO
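Bayes' rule then turns the two likelihoods into a decision; with gender priors this is the standard minimum-error rule, equivalently a log-likelihood ratio against a threshold:

```latex
\text{decide ``male'' if}\quad
p(X \mid \text{male})\, P(\text{male}) \;>\; p(X \mid \text{female})\, P(\text{female})
\;\;\Longleftrightarrow\;\;
\log p(X \mid \text{male}) - \log p(X \mid \text{female})
  \;>\; \log \frac{P(\text{female})}{P(\text{male})}
```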
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
Speaker recognition [diagram: front-end processing → target model (adapted from the background model) and background model → score → score normalization] • Speaker recognition aims at recognizing “who said it”. • In speaker identification, the task is to assign a speech signal to one out of N speakers. • In speaker verification, the claimed identity is known and the question to be answered is “was the speaker really Mr. XYZ, or an impostor?”
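A sketch of the verification score in the classic GMM-UBM scheme, reusing GMMs like those in the Gender-ID example above; the zero threshold is purely illustrative:

```python
# GMM-UBM verification sketch: log-likelihood ratio of target model vs. UBM.
def verify(test_features, gmm_target, gmm_ubm, threshold=0.0):
    """Accept the identity claim if the mean per-frame LLR exceeds a threshold."""
    llr = (gmm_target.score_samples(test_features)
           - gmm_ubm.score_samples(test_features)).mean()
    return llr > threshold, llr
```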
Bad session variability Example: single Gaussian model with 2D features [plot: target speaker model and UBM, with one direction of high speaker variability and another of high inter-session variability]
And what to do about it [plot: target speaker model, UBM and test data in the inter-speaker/inter-session variability space] • For recognition, move both models along the high inter-session variability direction(s) to fit the test data well.
Research achievements [plots: NIST SRE 2006 results for BUT and the STBU consortium; NIST SRE 2008 results confirming the leading position] Key things: • Joint Factor Analysis (JFA) decomposes models into channel and speaker sub-spaces (see the sketch below) • Coping with unwanted variability • At the same time, a compact representation of speakers allows extremely fast scoring of speech files Speaker search DEMO
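For reference, the standard JFA supervector decomposition (textbook notation, not copied from the slides):

```latex
\mathbf{M} \;=\; \mathbf{m} + \mathbf{V}\mathbf{y} + \mathbf{U}\mathbf{x} + \mathbf{D}\mathbf{z}
```

where m is the UBM supervector, V spans the speaker subspace (eigenvoices), U the channel subspace (eigenchannels), Dz is a diagonal residual term, and y, x, z are latent factors estimated per recording.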
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
The goal of language ID (LID) • Determine the language of a speech segment
Two main approaches to LID • Acoustic – Gaussian Mixture Model • Phonotactic – Phone Recognition followed by Language Model
Acoustics • Good for short speech segments and dialect recognition • Relies on the sounds • Done by discriminatively trained GMMs with channel compensation
Phonotactic approach • Good for longer speech segments • Robust against dialects within one language • Eliminates the speech characteristics of the speaker's native language • Based on a high-quality NN-based phone recognizer … producing strings or lattices
Phonotactic modeling - example [diagram: phone strings from German and English training data, and a test string scored against both models] • N-gram language models – discounting, backoff • Binary decision trees – adaptation from UBM • Support Vector Machines – vectors with counts
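As a toy illustration of the phonotactic idea (heavily simplified: plain bigram relative frequencies with a floor instead of proper discounting and backoff; the phone strings are made up):

```python
# Phonotactic LID sketch: per-language bigram models over phone strings.
from collections import Counter
import math

def bigram_model(phone_strings):
    """Relative bigram frequencies from training phone strings (lists of phones)."""
    counts = Counter()
    for phones in phone_strings:
        counts.update(zip(phones, phones[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def score(phones, model, floor=1e-6):
    """Log-probability of the test string's bigrams (floored; no real backoff)."""
    return sum(math.log(model.get(bg, floor)) for bg in zip(phones, phones[1:]))

# Usage with hypothetical phone strings:
# m_ger = bigram_model(["SH T AY N".split(), ...])
# m_eng = bigram_model(["S T OW N".split(), ...])
# lang = max({"ger": m_ger, "eng": m_eng}.items(),
#            key=lambda kv: score(test_phones, kv[1]))[0]
```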
Research achievements ara F 0.0 eng F 0.0 far F 0.0 fre T 99.9 ger F 0.0 hin F 0.0 jap F 0.0 kor F 0.0 man F 0.0 spa F 0.0 tam F 0.0 vie F 0.0 ara F 0.0 eng T 93.3 far F 0.0 fre F 0.3 ger F 4.9 hin F 0.0 jap F 0.0 kor F 0.0 man F 1.3 spa F 0.0 tam F 0.0 vie F 0.1 ara F 0.0 eng F 15.1 far F 0.0 fre F 0.0 ger T 84.7 hin F 0.0 jap F 0.0 kor F 0.0 man F 0.0 spa F 0.0 tam F 0.0 vie F 0.0 ara T 42.9 eng F 1.7 far F 12.9 fre F 0.0 ger F 0.0 hin F 11.2 jap F 0.9 kor F 22.2 man F 0.0 spa F 0.1 tam F 7.4 vie F 0.1 NIST evaluation results: • LRE 2005 – Speech@FIT the best in 2 out of 3 categories • LRE 2007 – confirmation of the leading position. • LRE 2009 – a bit of bad luck but very good post-submission system Key things: • Discriminative modeling • Channel compensation • Gathering training data from public sources Web demo: http://speech.fit.vutbr.cz/lid-demo/
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects
Keyword spotting • What? Which recording, and when? With what confidence? • Comparing keyword model output with an anti-model. • The choices: • What is the needed trade-off between speed and accuracy? • How to cope with the “devil” of keyword spotting: Out-of-Vocabulary (OOV) words • Technical approaches: • Acoustic keyword spotting • Searching the output of a Large Vocabulary Continuous Speech Recognizer (LVCSR) • Searching the output of an LVCSR complemented with sub-word units
Acoustic KWS • Model of a word scored against a background model • No language model • + No problem with OOVs • + Fast: down to 0.01×RT • – Indexing not possible – need to go through everything • – Does not have the strength of an LM – problems with short words and sub-words
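The confidence score behind such a detector is commonly a duration-normalized log-likelihood ratio between the keyword model and the background model over the hypothesized segment; a generic form (an assumption, not BUT's exact scoring) is:

```latex
\mathrm{score}(w;\, t_s, t_e) \;=\;
\frac{1}{t_e - t_s}\Bigl[
  \log p\bigl(\mathbf{x}_{t_s..t_e} \mid \text{keyword model}\bigr)
  \;-\;
  \log p\bigl(\mathbf{x}_{t_s..t_e} \mid \text{background model}\bigr)
\Bigr]
```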
Searching in the output of LVCSR • LVCSR first, then search in the 1-best output or in lattices • Indexing possible • + Speed of search • + More precise on frequent words • – Limited by the LVCSR vocabulary – OOVs • – LVCSR is more complex and slower
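A minimal sketch of the indexing idea, assuming the LVCSR 1-best output is available as (word, start time) pairs per recording; all names here are hypothetical:

```python
# Inverted-index sketch over LVCSR 1-best output: word -> (recording, time) hits.
from collections import defaultdict

def build_index(transcripts):
    """transcripts: {recording_id: [(word, start_time_seconds), ...]}"""
    index = defaultdict(list)
    for rec_id, words in transcripts.items():
        for word, start in words:
            index[word.lower()].append((rec_id, start))
    return index

def search(index, keyword):
    """Return all (recording, time) hits; OOV keywords simply return nothing."""
    return index.get(keyword.lower(), [])

# Usage with a hypothetical transcript:
# idx = build_index({"rec1": [("hallo", 0.3), ("crete", 0.9)]})
# print(search(idx, "Crete"))  # [('rec1', 0.9)]
```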
Searching in the output of LVCSR + sub-words • LVCSR with words and sub-word units • Indexing of both words and sub-word units • + Speed of search preserved • + Precision on frequent words preserved • + Allows searching for OOVs without additional processing of all data • – LVCSR and indexing are more complex
Research achievements • NIST STD 2006 – English • MV Task 2008 – Czech Key things: • Expertise with acoustic, word and sub-word recognition • Excellent front-ends – LVCSR and phone recognizer • Speech indexing and search • Normalization of scores DEMO – Russian acoustic KWS
Agenda • Where we are and who we are • Needle in a haystack • Simple example - Gender ID • Speaker recognition • Language identification • Keyword spotting • CZ projects