Multimodal
Interests
• Incremental understanding and feedback
• Mobile multimodal systems
• Producing and understanding nonverbal communication (e.g. facial expression)
• Situated interaction
• Multiple channels to increase robustness
• Web-based systems
• How do gestures contribute to interaction?
• Gesture production for Embodied Conversational Agents
• Automotive environments
• Balance between gesture and speech input
• Learning from multimodal feedback
• Evaluation and usability: does it help?
• Next killer app? In 2 years? 5 years? 10 years?
2 groups
• 2 major threads of interest:
  • Group 1: study how humans interact with each other multimodally
  • Group 2: study how to build multimodal interfaces now
• How should each group inform the other's research?
  • 1 → 2: alignment for multimodal fusion (a sketch follows below)
  • 2 → 1: what is useful to study; naturalness isn't everything
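To make the fusion point concrete: one simple way Group 1's timing observations feed Group 2's systems is timestamp-based late fusion, pairing speech and gesture events whose intervals overlap. The sketch below is a minimal, hypothetical illustration; the event structure, function names, and the 250 ms slack window are assumptions, not taken from any particular system.

```python
# Minimal sketch of time-window late fusion: pair speech and gesture
# events whose time intervals overlap, with a small slack window.
# All names and the 0.25 s window are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InputEvent:
    mode: str      # e.g. "speech" or "gesture"
    content: str   # recognized token, e.g. "delete that" or "point@(x,y)"
    start: float   # seconds
    end: float     # seconds

def fuse(speech_events, gesture_events, window=0.25):
    """Pair each speech event with every gesture that overlaps it in time,
    allowing `window` seconds of slack on either side."""
    fused = []
    for s in speech_events:
        for g in gesture_events:
            if g.start <= s.end + window and g.end >= s.start - window:
                fused.append((s, g))
    return fused

if __name__ == "__main__":
    speech = [InputEvent("speech", "delete that", 1.0, 1.6)]
    gesture = [InputEvent("gesture", "point@(120,88)", 1.2, 1.4)]
    print(fuse(speech, gesture))  # -> one (speech, gesture) pair
```

A real fusion engine would layer confidence weighting and semantic compatibility checks on top of this purely temporal pairing.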
Resources
• Markup languages
  • EMMA, SALT, XHTML+Voice, SMIL
  • BML, FML, CML: behavioral and functional markup languages
  • We should get involved with standards (see the EMMA sketch below)
• Platform trajectory
  • Step 1: deal with timing issues in multimodal alignment
  • Step 2: future platforms
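To ground the standards discussion, here is a hedged sketch of emitting a W3C EMMA 1.0 interpretation using only Python's standard library. The element and attribute names reflect my reading of the EMMA spec (an emma:emma root, an emma:interpretation carrying emma:medium, emma:mode, emma:confidence, and timestamp attributes) and should be checked against the spec itself; the payload element and values are made up for illustration.

```python
# Sketch of building an EMMA 1.0 annotation for a recognized speech input.
# Attribute names follow my understanding of the W3C EMMA spec; verify
# against the spec before relying on them. Values are illustrative.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)  # serialize with the emma: prefix

root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
    "id": "int1",
    f"{{{EMMA_NS}}}medium": "acoustic",
    f"{{{EMMA_NS}}}mode": "voice",
    f"{{{EMMA_NS}}}confidence": "0.83",
    f"{{{EMMA_NS}}}start": "1000",  # timestamps in milliseconds
    f"{{{EMMA_NS}}}end": "1600",
})
# Application-specific payload; "command" is a hypothetical element.
ET.SubElement(interp, "command").text = "delete"

print(ET.tostring(root, encoding="unicode"))
```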
Other Communities
• Human-robot interaction
• Autonomous agents
• Embodied Conversational Agents
• Psycholinguistics
• Vision and other sensors
Now, 2 years, 5 years, …
• Now
  • Using the GUI as a fallback strategy (error correction, text input; see the sketch below)
  • Asynchronous input (e.g. pen input)
  • Eyes-busy scenarios (automotive)
• Soon
  • Presence-aware advertising and interaction
  • Robots
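As a concrete illustration of the GUI-fallback idea above, a recognizer's n-best list can drive a simple confidence-tiered policy: act on speech when confidence is high, show a pick list when it is middling, and fall back to text input otherwise. The thresholds and function names below are illustrative assumptions, not from any specific toolkit.

```python
# Sketch of a confidence-tiered GUI fallback for speech input.
# Thresholds and names are illustrative assumptions.
def handle_utterance(nbest, accept_threshold=0.7, list_threshold=0.3):
    """nbest: list of (hypothesis, confidence) pairs, best first."""
    top_text, top_conf = nbest[0]
    if top_conf >= accept_threshold:
        return ("accept", top_text)                   # act on speech directly
    if top_conf >= list_threshold:
        return ("show_nbest", [h for h, _ in nbest])  # GUI pick list
    return ("show_keyboard", None)                    # last resort: text input

print(handle_utterance([("call mom", 0.91)]))
print(handle_utterance([("call tom", 0.45), ("call mom", 0.40)]))
print(handle_utterance([("garbled", 0.12)]))
```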