A Study on Prediction of Listener Emotion in Speech for a Medical Doctor Interface
M. Kurematsu
Faculty of Software and Information Science, Iwate Prefectural University
Virtual Medical Doctor System
What is the Virtual Medical Doctor System?
• It diagnoses people like a human doctor
• It interacts with people like a person
What should the speech interface module do?
• Speech Recognition
  • Understand what people say
  • Estimate the emotion in speech (research target)
• Speech Synthesis
  • Tell the diagnosis and ask people questions
  • Express emotion in speech (research target)
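As a rough illustration only, the sketch below shows one hypothetical shape for that speech interface module; the class and method names are invented here and are not taken from the actual system.

```python
# Hypothetical interface for the speech module of the virtual medical doctor.
# Names are invented for illustration; the real module is not shown here.
class SpeechInterface:
    def recognize(self, audio):
        """Speech recognition: understand what the person says."""
        raise NotImplementedError

    def estimate_emotion(self, audio):
        """Research target 1: estimate the emotion in the person's speech."""
        raise NotImplementedError

    def synthesize(self, phrase, emotion=None):
        """Speech synthesis: tell the diagnosis or ask a question,
        optionally expressing an emotion (research target 2)."""
        raise NotImplementedError
```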
How to Estimate Emotion in Speech 1: Conventional Approach from Existing Works
Learning Phase
• Step 1: Collect human speeches
  • Each speech consists of sound data and an emotion label given by the speaker
• Step 2: Feature selection
  • Step 2-1: Extract speech features
  • Step 2-2: Calculate statistics of those features
• Step 3: Build classifiers of speech features
  • The classifiers capture the relation between emotion and speech features
Estimation Phase
• Step 1: Record human speech
• Step 2: Feature selection (extract speech features and calculate their statistics)
• Step 3: Estimate the emotion using the classifiers
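A minimal sketch of this conventional pipeline is given below, assuming frame-level pitch and power sequences have already been extracted by some pitch tracker; the random data, emotion labels, and the use of scikit-learn are illustrative assumptions, not the setup of the cited works.

```python
# Conventional learning/estimation phases: mean and max of pitch and power,
# one classifier trained over all emotions (illustrative sketch).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def utterance_features(pitch, power):
    """Statistics of frame-level features: mean and max only."""
    return np.array([pitch.mean(), pitch.max(), power.mean(), power.max()])

# Learning phase (hypothetical speech data and speaker-given emotion labels)
rng = np.random.default_rng(0)
utterances = [(rng.uniform(80, 300, 100), rng.uniform(0.1, 1.0, 100)) for _ in range(40)]
labels = rng.choice(["joy", "anger", "sadness", "neutral"], size=40)

X = np.vstack([utterance_features(p, w) for p, w in utterances])
clf = DecisionTreeClassifier().fit(X, labels)  # a single classifier for all emotions

# Estimation phase: extract the same statistics from a new recording and classify
new_pitch, new_power = rng.uniform(80, 300, 100), rng.uniform(0.1, 1.0, 100)
print(clf.predict([utterance_features(new_pitch, new_power)]))
```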
How to Estimate Emotion in Speech 2: Our Approach
We modify the learning phase of the conventional approach (see the comparison table in the appendix).
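Based on the comparison table in the appendix, the two main changes are richer statistics (SD, kurtosis, skewness, difference/ratio features) and one classifier per emotion; the sketch below shows only one way these changes could be written, using the same hypothetical setup as above.

```python
# Modified learning phase (sketch): extra statistics and per-emotion classifiers.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.tree import DecisionTreeClassifier

def extended_features(pitch, power):
    """Mean/max plus SD, kurtosis, and skewness over pitch, power,
    a difference feature, and a ratio feature."""
    diff = np.diff(pitch)              # frame-to-frame pitch difference
    ratio = pitch / (power + 1e-9)     # illustrative pitch/power ratio
    feats = []
    for x in (pitch, power, diff, ratio):
        feats += [x.mean(), x.max(), x.std(), kurtosis(x), skew(x)]
    return np.array(feats)

def train_per_emotion(X, labels, emotions):
    """One binary classifier per emotion (this emotion vs. the rest)."""
    labels = np.asarray(labels)
    return {e: DecisionTreeClassifier().fit(X, labels == e) for e in emotions}
```

At estimation time, each per-emotion classifier can score the new speech; how the per-emotion outputs are combined into a final label is a design choice not shown here.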
Evaluation of Our Approach
We evaluated our approach through experiments. The results show that this module still needs further improvement.
Future Work on Estimation
• Speech collection
  • Subdivide emotions by expression pattern
  • Collect more speeches (from radio, TV, movies, etc.)
• Feature selection
  • Focus on other features, e.g. autocorrelation, LPC spectrum
  • Focus on other statistics, e.g. correlation between speech features
• Classifier construction
  • Use other machine learning methods, e.g. bagging (a sketch of this and of the correlation statistic follows this list)
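The correlation statistic and bagging mentioned above are both available in standard tools; the calls below are assumed, illustrative choices, not the methods actually adopted.

```python
# Future-work sketch: a correlation statistic between speech features,
# and bagging in place of a single decision tree.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def correlation_feature(pitch, power):
    """Correlation between two frame-level speech features as an extra statistic."""
    return np.corrcoef(pitch, power)[0, 1]

# Bagging over decision trees; X and y as in the earlier sketches.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10)
```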
Express Emotion in Speech
How does the system express emotion in speech?
• It adjusts speech features to the target emotion, based on relations between emotions and features
  • Speech features: pitch, volume, speed, etc.
  • A relation describes how the system changes speech features to express an emotion
How do we obtain the relations?
• A developer defines them based on experience
• Extract them from speeches and the emotions listeners perceive in them (our approach)
  • People listen to speeches and report the emotion they hear
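A relation can be thought of as a table from emotion to feature adjustments; the sketch below uses invented placeholder values only, not the parameter sets actually defined in this work.

```python
# Hypothetical relation table: per-emotion adjustments to speech features.
# All numbers are placeholders for illustration.
RELATIONS = {
    "joy":     {"volume": +10, "speed": +15, "pitch": +20, "intonation": +10},
    "anger":   {"volume": +20, "speed": +10, "pitch": +10, "intonation": +20},
    "sadness": {"volume": -10, "speed": -20, "pitch": -15, "intonation": -10},
    "neutral": {"volume":   0, "speed":   0, "pitch":   0, "intonation":   0},
}
```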
How to Build Our Speech Module
Extract relations between emotions and speech features
• Synthesize speeches whose features differ from each other
  • We use the SMARTALK synthesizer by OKI Co.
  • Each speech is synthesized with a different parameter set
  • Parameters = { volume, speed, pitch, intonation }
• Ask people to listen to the synthesized speeches and report the emotion they perceive
  • 14 men and 10 women answered
• Define parameter sets as relations
  • We keep the parameter sets for which most listeners reported the same emotion
  • We select 3 parameter sets for each emotion
Synthesize speech to express an emotion
• A phrase and an emotion are given to the module
• The module selects a relation (a parameter set) and applies it
• The module synthesizes the speech
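One plausible reading of the selection step (keep the parameter sets on which most listeners agreed) and of the synthesis flow is sketched below; the listener data, the agreement measure, and the `synthesize` callback standing in for the Smartalk/SAPI call are all assumptions.

```python
# Sketch: choose relations by listener agreement, then synthesize with them.
from collections import Counter

def agreement(answers):
    """Most common emotion among listeners and the fraction who chose it."""
    emotion, count = Counter(answers).most_common(1)[0]
    return emotion, count / len(answers)

def select_relations(evaluations, top_n=3):
    """Keep the top_n parameter sets per emotion by listener agreement.
    evaluations: list of (parameter_set, listener_answers) pairs."""
    scored = {}
    for params, answers in evaluations:
        emotion, score = agreement(answers)
        scored.setdefault(emotion, []).append((score, params))
    return {e: [p for _, p in sorted(sets, key=lambda t: t[0], reverse=True)[:top_n]]
            for e, sets in scored.items()}

def speak(phrase, emotion, relations, synthesize):
    """Give a phrase and an emotion; apply a selected parameter set; synthesize.
    `synthesize` stands in for the Smartalk/SAPI call (not modelled here)."""
    params = relations[emotion][0]      # the module selects one relation
    return synthesize(phrase, **params)
```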
Snapshot of Our System
• Text box: input a phrase
• Synthesize buttons: synthesize speech with the emotion written on the button
  • [SPEAK] means synthesis without expressing an emotion
Development environment
• OS: Windows XP SP3
• Language: Visual C++ 6.0
• Library: Smartalk.OCX or SAPI
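The original module was written in Visual C++ 6.0 against Smartalk.OCX or SAPI; as a loose, assumed analogue rather than the original code, SAPI 5 can be driven from Python on Windows through pywin32, adjusting rate and volume directly and pitch through SAPI XML markup.

```python
# Windows-only sketch of a synthesize button handler (assumes pywin32 / SAPI 5).
# This is an illustrative analogue, not the original Visual C++ / Smartalk code.
import win32com.client

def speak_with_emotion(phrase, rate=0, volume=100, pitch=0):
    voice = win32com.client.Dispatch("SAPI.SpVoice")
    voice.Rate = rate      # SAPI speaking rate, -10 (slow) .. +10 (fast)
    voice.Volume = volume  # SAPI volume, 0 .. 100
    # Pitch (-10 .. +10) is set via SAPI XML markup rather than a property.
    voice.Speak('<pitch middle="%d">%s</pitch>' % (pitch, phrase))

# The [SPEAK] button (no emotion) would call speak_with_emotion(phrase) with defaults.
```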
Future Work on Synthesis
• Modify the relations (parameter sets)
  • We demonstrated this module in a local museum and asked visitors: "Is the synthesized speech like a human speech?"
  • Answers: Yes = 50, Moderately Yes = 147, Moderately No = 133, No = 27
  • We need to modify the relations so that the synthesized speech sounds more human
• Change other parameters
• Give variety to the parameters
• Add stress and pauses
• Etc.
Appendix
• The following comparison table was shown at the workshop; its content is covered in the preceding slides.
Estimating Emotion in Speech: Conventional vs. Our Approach
Speech data
• Conventional: intentional human speech
• Our approach: synthesized voice
Feature analysis
• Conventional: pitch and power; measure mean and max
• Our approach: add difference/ratio features; add SD, kurtosis, and skewness
Classifier construction
• Conventional: one classifier for all emotions
• Our approach: one classifier for each emotion
The resulting classifier is used to estimate the emotion in speech.