Human-Computer Interaction Research on the Endeavour Expedition
James A. Landay
Jack Chen, Jason Hong, Scott Klemmer, Francis Li, Mark Newman, Anoop Sinha
Endeavour Retreat, January 20, 2000
Several Projects on UIs of the Future • Designer’s Outpost • SUEDE • Multimodal Design Assistant • Context-aware PDA Infrastructure • Context-based Information Agent
Designer’s Outpost: Tangible Tools for Information Design
• Two 640x480 USB web cams
  • camera above desk: captures ink / IDs
  • camera below desk: occlusion free, captures structure
• ITI Visionmaker Desk
  • 1280x1024 resolution, direct pen input
• Cross iPen tablet
  • high-res ink capture
SUEDE: Low-fidelity Prototyping for Speech-based User Interfaces
A Better Future: Our Access to Information will be via Multimodal UIs
• How do we combine speech, gesture, etc. into a UI design?
  • rapid production of “rough cuts”
  • informal (sketching / “Wizard of Oz”)
  • iterative design (user testing / fast mods)
  • generate initial code
    • UIs for multiple devices
    • designer adds detail / improves the UI
• Study
  • uses of these novel modes alone (e.g., SUEDE)
  • construction of multimodal apps (e.g., a multimedia notebook)
• Early stages so far
  • based on inferring models
The Best Future: Multimodal UIs that are Aware of User’s Context
• Applications can be aware of
  • location
  • who is the user
  • what are they doing
  • who is nearby …
• DARPA has funded us to purchase 25-50 PDAs (Symbol SPT 1700) w/
  • wireless communications
  • wireless infrastructure
  • built-in bar code scanners
• Idea
  • deploy across Soda
  • build/study interesting applications
  • looking for students to get involved
  • use Hill/Culler/Ninja infrastructure & Motes
A Context-based Information Agent: Motivation
• Proactive Meeting Support
  • Capture human-to-human communication and context
  • Use communication and context to proactively find relevant info
  • Present the most useful info as non-intrusively as possible
Design Space • Input • Search • Presentation
Design Space - Input
• Human-Human Communication
  • Speech, Ink, Vision, Text
• Context
  • Who am I?
  • Who is speaking?
  • Who else is here?
  • Where am I?
  • When is it?
  • What calendar event am I in?
  • What todo item am I doing?
  • How busy am I?
  • …
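One way to picture the context side of this input space is as a simple record that travels alongside the communication stream. A minimal Python sketch; the field names, types, and defaults are our own illustration, not from the actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Context:
    """Snapshot of the user's situation, fed into search alongside
    the communication stream (speech, ink, vision, text)."""
    user: str                                               # who am I?
    speaker: Optional[str] = None                           # who is speaking?
    people_nearby: List[str] = field(default_factory=list)  # who else is here?
    location: Optional[str] = None                          # where am I?
    time: datetime = field(default_factory=datetime.now)    # when is it?
    calendar_event: Optional[str] = None                    # what calendar event am I in?
    todo_item: Optional[str] = None                         # what todo item am I doing?
    busy: bool = False                                      # how busy am I?

# Hypothetical example values for illustration only.
ctx = Context(user="landay", location="Soda Hall",
              calendar_event="Endeavour Retreat")
```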
Design Space - Search
• Structuring queries from input
  • Continuous streams of input
  • Simple approach: spot keywords
  • But, given richer sources of input, can we formulate better queries?
  • Which kinds of communication and context are useful for this task?
• Information sources to search
  • Personal Information (local files)
  • Group Information (local web pages)
  • Global Information (web search)
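The simple keyword-spotting approach can be sketched as a stopword filter over the stream of recognized words; the stopword list, length cutoff, and frequency scoring below are illustrative assumptions, not the prototype's actual heuristics:

```python
from collections import Counter

# Tiny illustrative stopword list; a real system would use a much fuller one.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in",
             "that", "we", "you", "it", "do", "have", "for", "on"}

def spot_keywords(transcript_words, top_n=3):
    """Pick the most frequent non-stopword terms from a stream of
    recognized words and join them into a search query string."""
    counts = Counter(w.lower() for w in transcript_words
                     if w.lower() not in STOPWORDS and len(w) > 2)
    return " ".join(word for word, _ in counts.most_common(top_n))

words = "do you have a link to the Endeavour homepage for the Endeavour retreat".split()
print(spot_keywords(words))  # "endeavour link homepage"
```

Richer input would let the system weight terms by context, e.g. boosting words that also appear in the current calendar event.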
Design Space - Presentation
• Synchronous / Asynchronous
  • Show me as I do my work
  • Show me after I get back from lunch
• Aim for minimal attention
  • Attention should be on the task
• Tailor output to context
  • I'm busy, don't bother me at all
  • Show me only if it's really important
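The "tailor output to context" rules could be sketched as a small decision function; the threshold and parameter names are our own assumptions, not part of the described system:

```python
def should_present(busy: bool, importance: float,
                   min_importance_when_busy: float = 0.8) -> bool:
    """Decide whether to surface a search result now.
    If the user is busy, only really important items get through;
    otherwise results can appear peripherally as the user works."""
    if busy:
        return importance >= min_importance_when_busy
    return True

print(should_present(busy=True, importance=0.5))   # False: don't bother me
print(should_present(busy=True, importance=0.9))   # True: really important
print(should_present(busy=False, importance=0.1))  # True: show peripherally
```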
Low-Fidelity Prototype
• Run a low-fidelity prototype
  • Quick way of testing a system that doesn't exist yet
  • Parts requiring lots of programming are simulated by a human
• First iteration
  • Speech-based agent listens in on conversations
  • Uses web search engines
  • Presents combined search results
  • Basically an alternative front-end for web search engines
Low-Fidelity Prototype
• Combined results
  • Categorized by time
  • Categorized by topic
• Material explicitly referenced
  • Endeavour homepage
  • "Do you have a link to that?"
  • "There's a paper from Xerox PARC…"
• Related material not explicitly mentioned
  • UW Portolano
  • MIT Oxygen Project
Low-Fidelity Prototype Results
• Some survey results
  • People liked the general concept
  • Search results rated ok, not great
  • Didn't want to spend too much time (spent more time than we expected)
  • Wanted control of when the agent was running (turn it on and off)
  • Wanted real-time results
  • Wanted multiple ways of organizing, accessing, and filtering the info
Design Space – Refined
• Input
  • Context: Who, Where, When, How, What
  • Communication: Ink, Text, Speech
• Search
  • Personal Info: Calendar, Contacts, Email, Personal webpages, Notes
  • Group Info: Calendar, Group Notes, Contacts, Group webpages
  • Public Info: Webpages, Newsgroups, Digital Libraries
• Presentation
Prototype
• Implementation
  • Speech-based input (IBM ViaVoice)
  • Start with keyword-based search
  • Uses Google search engine
• Problems
  • Speech recognition is poor
  • "Interesting" keywords not in dictionary
  • Still in the process of improving the software
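The keyword-to-search-engine step could look like the sketch below, which URL-encodes spotted keywords into a Google query URL. This is a stand-in for whatever the prototype actually sends, using only the public query-string form:

```python
from urllib.parse import urlencode

def google_query_url(keywords):
    """Build a Google search URL from keywords spotted in the
    recognized speech. Misrecognized words simply yield bad queries,
    which is one reason recognition quality matters so much here."""
    return "https://www.google.com/search?" + urlencode({"q": " ".join(keywords)})

print(google_query_url(["context-aware", "information", "agent"]))
```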
Continuing Work
• Better organization and navigation of results
• Peripheral displays
  • Figuring out "right" rate to update results
• Another low-fi prototype using "precise" speech recognition
• Improving recognition rate
  • ICSI Speech Recognition Engine / ViaVoice 2000
  • Crawl local web pages to expand dictionary
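The last bullet (crawling local web pages to expand the recognizer's dictionary) might be sketched as below, using Python's stdlib HTML parser; the word-extraction heuristic and example page are our own assumptions:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping scripts/styles."""
    def __init__(self):
        super().__init__()
        self.words = set()
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            # Keep alphabetic tokens of 3+ letters; these become
            # candidate dictionary entries for the speech recognizer.
            self.words.update(w.lower() for w in re.findall(r"[A-Za-z]{3,}", data))

def vocabulary_from(html: str) -> set:
    parser = TextExtractor()
    parser.feed(html)
    return parser.words

page = "<html><body><h1>Endeavour Expedition</h1><p>context-aware agents</p></body></html>"
print(sorted(vocabulary_from(page)))
```

Vocabulary harvested this way from group pages would cover project-specific terms ("Endeavour", "Portolano", people's names) that a stock dictionary misses.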
Related Work • Cyberguide and others • Letizia • Remembrance Agent • MSR Implicit Queries • XLibris