Rethink Possible
Multimodal Interaction in Speak4it
Patrick Ehlen, AT&T
This talk will discuss…
• Multimodal interaction approaches
  • mode choice
  • mode integration
• Grounding (it’s context!)
• Grounding in multimodal local search
What is multimodal interaction?
• The most common implementation of “multimodal interaction” is mode choice
• Let people use more than one mode of input or output
  • Input: graphical UI or voice (ASR)
  • Output: visual (graphics) or voice (TTS)
• Interact using one mode at a time
Another approach… mode integration
• Use more than one mode at the same time
• Provide simultaneous information using different channels
• Combine information from different modes into one interpretation (sketched below)
“Italian restaurants near here”
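A minimal sketch of that fusion step, assuming a single speech hypothesis and a single touch gesture; the class, function names, and logic here are illustrative, not Speak4it’s implementation:

    # Semantic-level mode integration: a deictic phrase in the speech
    # channel is resolved against a simultaneous touch gesture.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Gesture:
        kind: str    # "point", "line", or "area"
        lat: float
        lon: float

    def integrate(asr_text: str, gesture: Optional[Gesture]) -> dict:
        """Fuse one speech hypothesis and one gesture into one interpretation."""
        query = {"category": asr_text, "location": None}
        if "near here" in asr_text and gesture is not None:
            # The gesture, not GPS, supplies the referent of "here"
            query["category"] = asr_text.replace("near here", "").strip()
            query["location"] = (gesture.lat, gesture.lon)
        return query

    print(integrate("italian restaurants near here", Gesture("point", 40.75, -73.99)))
    # -> {'category': 'italian restaurants', 'location': (40.75, -73.99)}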
Advantages…
• It’s natural (underspecification is the norm)
• Adapts to the environment
• Speech can be shorter and simpler, and/or communicate more complex information
• Tasks are completed more quickly
“Italian restaurants near here”
Advantages…
• Some content is better communicated by modes other than speech (e.g., gesturing to communicate spatial information)
• Information from different modes can complement one another and resolve ambiguities (“mutual compensation”)
“Italian restaurants near here”
History of research prototypes
• MATCH (Johnston et al. 2002)
• AdApt (Gustafson et al. 2000)
• SmartKom Mobile (Wahlster 2006)
• Multimodal Interactive Maps (Oviatt 1997)
The Next Big Thing?
• New technologies (touch screens, GPS, accelerometer data, video-based recognition) will spur an evolution in multimodal interface design
• Beyond mode choice to mode integration
• Speak4it℠ is the only commercially available product we know of that performs multimodal integration at the semantic level
• Available for free on iPhone, iPad, and iPod Touch
Multimodal interaction in Speak4it
• Speak4it gesture inputs: point, line, and area (drawn with a finger)
• When the user hits the ‘Speak/Draw’ button, the map display becomes a drawing canvas (a rough classifier sketch follows)
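A hypothetical sketch of sorting a finger-drawn ink trace into the three gesture types named above; the thresholds and the closure test are illustrative guesses, not Speak4it’s actual recognizer:

    import math

    def classify_trace(pts):
        """pts: list of (x, y) screen coordinates from the drawing canvas."""
        if len(pts) < 2:
            return "point"
        def d(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        length = sum(d(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
        if length < 10:                        # barely moved: treat as a tap
            return "point"
        if d(pts[0], pts[-1]) < 0.2 * length:  # trace closes on itself
            return "area"
        return "line"

    print(classify_trace([(0, 0), (100, 0), (100, 100), (0, 100), (5, 5)]))  # area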
Multimodal integration provides more headaches for designers
• Problems:
  • More ‘dimensions’ of context
  • Demands more focus on “common ground” and on aspects of knowledge that have already been grounded with users (Clark 1996)
What is grounding?
• Mutual knowledge: things that all parties in a conversation know, and know that the other parties in the conversation also know
  • Shared physically, linguistically, or via community
• When people introduce references, either verbally or by other means, they are grounding those references
• In dialogue, grounding helps to determine what people say, and what they don’t say
• What we do or don’t say reveals a lot about the aspects of context we believe are already shared
Grounding in telephony queries
• Search queries are a very basic dialogue: a single exchange of query and response
• Telcos have dealt with these queries for a long time…
Operator: “What listing please?”
Caller: “Cable Car Pizza”
Operator: “Here’s that number…”
Grounding in telephony queries
• 411 systems assumed an implicitly grounded location, because phones had a fixed location (tied to an area code)
• To refer to another location, you called a different area code
• The area code provided a source of mutual knowledge about the grounded location in a query
Operator: “What listing please?”
Caller: “Cable Car Pizza in San Francisco”
Operator: “Please call 415-555-1212”
Then phones lost their tethers (and their implicit grounding mechanisms)…
• With mobile phones, there is not as much shared knowledge about location
• Location became “part of the conversation” again
• Spoken query dialogue systems: Google-411, Bing-411, 800-Yellowpages, phone apps, etc.
System: “What city and state?”
Caller: “San Francisco, California”
System: “What listing?”
Caller: “Cable Car Pizza”
Evidence of grounding problems found in Speak4it logs
• Frequency of specific locations in queries: 18%
  • “police department in jessup maryland”
  • “office depot linden boulevard”
• Most are unlocated:
  • “gas station”
  • “saigon restaurant”
• Location grounding breaking down:
  • “Serendipity”
  • …followed shortly by “Serendipity Dallas Texas”
• Corrections:
  • “Starbucks Cape Girardeau”
  • …followed six minutes later by “Lowes”
  • …then right away, “Lowes Cape Girardeau”
Location grounding sources in multimodal mobile search (e.g., “italian restaurants”)
• GESTURE: where the user touched
• GUI: the location shown on the map display
• PHYSICAL: the user’s current location (GPS)
• VERBAL: a place spoken in a prior query (e.g., the “madison” in “Sorry, I could not find french restaurants in madison”)
A sketch of resolving among these sources follows.
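A minimal sketch, assuming a fixed precedence over the four sources (gesture > verbal > GUI map > GPS); the ordering is an illustrative assumption, since the talk argues a trained context model should predict which source the user actually considers grounded:

    from typing import Optional, Tuple

    LatLon = Tuple[float, float]

    def salient_location(gesture: Optional[LatLon], verbal: Optional[str],
                         map_center: LatLon, gps: LatLon):
        if gesture is not None:   # touch groundings are highly salient
            return gesture
        if verbal is not None:    # place named in this or a prior query
            return verbal
        if map_center != gps:     # user scrolled the map away from GPS
            return map_center
        return gps

    print(salient_location(None, None, (40.71, -74.01), (37.77, -122.42)))
    # -> (40.71, -74.01): the displayed map view grounds the query, not GPS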
Example
User: “new york, new york” ✔
<scroll>
User: “pizza restaurants”
(After scrolling, the displayed map view, not GPS, grounds “pizza restaurants”.)
Collecting grounding data in the wild
• Gathered ground truth from users while they are “in the wild”
• Present users with a “grounded location disambiguation” screen to collect user-reported intentions
• Displayed for ~20% of unlocated queries
• Use these data to train a context model and to judge model comparisons
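An illustrative sketch of that collection step: show the disambiguation screen for roughly 20% of unlocated queries and log the user-reported grounded location as training data. All function and field names here are assumptions, not the production code:

    import random

    def show_disambiguation_screen(candidates):
        # Stand-in for the real UI; here we just pick the first option.
        return candidates[0]

    def maybe_collect_ground_truth(query, has_location, candidates, log):
        """Sample ~20% of unlocated queries for user-reported ground truth."""
        if has_location or random.random() > 0.20:
            return
        choice = show_disambiguation_screen(candidates)
        log.append({"query": query, "grounded_location": choice})

    log = []
    maybe_collect_ground_truth("gas station", False,
                               ["current GPS location", "map view", "touched point"], log)
    print(log)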
[Chart: selected grounded locations, relative to presentation: 69.29%, 13.59%, 38.04%, 64.37%]
Speak4it multimodal architecture
• The client sends a multimodal HTTP data stream (speech, text, ink) to the Multimodal Search Platform
• Speech platform: ASR with an SLM, producing ASR features from the audio
• Gesture recognition: turns the ink trace into gestures
• Location grounding: selects a salient location
• Interaction manager: routes audio, parsed strings, results, and requests among components
• NLU: parses the recognized string against an NL model
• Search: a geo-coder plus a listings index and a geo index, queried with a location string or lat/lon
A schematic of one request’s flow appears below.
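A schematic sketch of one request flowing through the components listed above; every function body is a stand-in, not AT&T’s implementation:

    def asr(audio):                  # Speech platform: ASR with an SLM
        return "pizza restaurants"

    def recognize_gestures(ink):     # Gesture recognition over the ink trace
        return [("point", 40.75, -73.99)]

    def nlu(text, gestures):         # NLU fuses the speech and gesture channels
        return {"category": text, "gestures": gestures}

    def ground_location(parse):      # Location grounding picks a salient lat/lon
        _, lat, lon = parse["gestures"][0]
        return (lat, lon)

    def search(category, latlon):    # Geo-coder + listings index + geo index
        return [f"{category} near {latlon}"]

    def handle_request(audio, ink):
        """Interaction manager: one multimodal request in, listings out."""
        parse = nlu(asr(audio), recognize_gestures(ink))
        return search(parse["category"], ground_location(parse))

    print(handle_request(b"<audio>", [(10, 20)]))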
Conclusions
• Multimodal UIs will soon move from mode choice to mode integration
• We’ll need richer context models to predict the grounding of locations and other references across modes, to align system actions with user expectations
• Mobile voice searchers don’t always consider their GPS location to be the grounded one; the location shown on the map is considered grounded 37% of the time
• User groundings from touch are highly salient
Acknowledgments
Thanks to Jay Lieske, Clarke Retzer, Brant Vasilieff, Diamantino Caseiro, Junlan Feng, Srinivas Bangalore, Claude Noshpitz, Barbara Hollister, Remi Zajac, Mazin Gilbert, and Linda Roberts for their contributions to Speak4it.