Sensing & Mobility
Ken Hinckley, Microsoft Research
Collaborators: Jeff Pierce, Mike Sinclair, Eric Horvitz
• Disclaimer: Opinions are Ken's only and may not reflect the views of Microsoft, or anyone else for that matter.
Background Sensing: Can It Live Up to Its Promise?
• Sense more than just explicit commands
• Background info (presence, sounds, physical contact, …) can be sensed & exploited
• Simplify & enhance the user experience via better awareness of context
Sensing: What's Unique about Mobile Devices?
• What are the unique input challenges for mobility?
• Mobile devices are used under far more demanding conditions than the desktop (e.g., while driving)
• Even "click on button" is difficult due to attentional demands
• Provide services/enhancements the user would not have the cognitive resources or time to perform
• Does the device automatically sense what it needs? Push? Getting text into the device? Communication/read-only?
• Environs are changing, but what properties to sense?
• Do you need lots of sensor fusion to do anything useful?
• Is sensing only useful for small, task-specific devices?
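The sensor-fusion question above can be made concrete with a small sketch. The example below fuses a few cheap sensor cues to infer that the user has raised the device to their ear; the sensor names, thresholds, and the fusion rule itself are illustrative assumptions, not details from the talk.

```python
# Hypothetical sketch: rule-based fusion of a few cheap sensor readings to
# infer that the user is holding the device up to listen. All field names
# and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    proximity: float   # 0.0 (nothing near) .. 1.0 (object very close to the sensor)
    tilt_deg: float    # device tilt from horizontal, in degrees
    touching: bool     # capacitive cue: is the device being held?

def infer_listening(frame: SensorFrame) -> bool:
    """All three cues must agree before the inference fires.

    Requiring agreement across sensors cuts false positives compared to any
    single sensor: proximity alone, for example, also fires in a pocket.
    """
    return (
        frame.touching
        and frame.proximity > 0.8
        and 30.0 <= frame.tilt_deg <= 90.0
    )

# Held to the ear: all cues agree.
print(infer_listening(SensorFrame(proximity=0.9, tilt_deg=60.0, touching=True)))    # True
# In a pocket: proximity is high, but the device is not being held upright.
print(infer_listening(SensorFrame(proximity=0.95, tilt_deg=10.0, touching=False)))  # False
```

Even this much shows that useful inferences need not require heavy fusion machinery: a conjunction of weak cues can already be far more robust than any one sensor.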
Dilemma: Intentional Control vs. Cognitive / Physical Burden
• Button-click → Touch → Hand-near-device
• less intentional control, more SW inferential burden
• Use these "problems" to your advantage?
• Decrease the cognitive burden of making decisions
• {Touching, …} is something the user must do anyway?
• Benefit vs. cost of errors (intent / interpretation)
• Is sensing necessarily annoying if "wrong"?
• How to quantify the penalty of failure?
• Perhaps the key is designing for graceful failure?
• Need to encourage a mental model of what's sensed?
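One way to read "designing for graceful failure" is to scale the system's response to its confidence in the inference, so a wrong guess costs the user little. The sketch below is a hypothetical illustration of that idea; the thresholds and action strings are assumptions, not anything proposed in the talk.

```python
# Hypothetical sketch of graceful failure: tier the response to inference
# confidence. Thresholds (0.9, 0.6) and action names are illustrative
# assumptions.

def respond(action: str, confidence: float) -> str:
    if confidence >= 0.9:
        return f"auto: {action}"      # high confidence: act (but keep it undoable)
    elif confidence >= 0.6:
        return f"suggest: {action}?"  # middling: offer, let the user confirm
    else:
        return "ignore"               # low: do nothing; explicit commands still work

print(respond("answer call", 0.95))  # auto: answer call
print(respond("answer call", 0.70))  # suggest: answer call?
print(respond("answer call", 0.30))  # ignore
```

Because the low-confidence branch is silent and the high-confidence branch remains reversible, a misfired sensor degrades into a minor suggestion rather than an annoying automatic action.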
Background Sensing: Some Open Issues
• Few interaction techniques, almost no studies
• What do users expect? Do they like sensing? Do they care?
• How to overcome false positives / negatives?
• An evaluative / scientific approach to sensing UIs?
• Automatic action vs. user control & overrides
• Limits? Where is explicit user input necessary?
• "Special cases" & the complexity of mobile environments may confuse sensors / override them with noise
• What are some issues / tradeoffs?
• Quick access vs. inadvertent activation of features
• Sensor & display quality vs. power consumption
• Cost, weight, features vs. UI complexity, …
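The tradeoff between quick access and inadvertent activation is often handled with hysteresis plus a hold time: require a sustained signal to turn a feature on, and a distinctly lower threshold to turn it off, so jitter around a single threshold cannot flap the state. The sketch below is a generic illustration of that technique; the thresholds and timing are assumptions.

```python
# Hypothetical sketch: hysteresis + a hold count to suppress inadvertent
# activation from noisy sensor readings. Thresholds and the hold length are
# illustrative assumptions.

class DebouncedTrigger:
    """Activate only after the reading stays above `on` for `hold`
    consecutive samples; deactivate only when it falls below a lower
    `off` threshold (hysteresis)."""

    def __init__(self, on: float = 0.7, off: float = 0.3, hold: int = 3):
        self.on, self.off, self.hold = on, off, hold
        self.active = False
        self._count = 0  # consecutive above-threshold samples seen so far

    def update(self, reading: float) -> bool:
        if not self.active:
            self._count = self._count + 1 if reading > self.on else 0
            if self._count >= self.hold:
                self.active = True
        elif reading < self.off:
            self.active = False
            self._count = 0
        return self.active

trigger = DebouncedTrigger()
noisy = [0.8, 0.2, 0.9, 0.8, 0.9, 0.1]  # a lone spike, then a sustained signal
print([trigger.update(r) for r in noisy])
# [False, False, False, False, True, False]
```

The lone spike at the start never activates the trigger; only the sustained run does, which trades a little access latency for far fewer accidental activations.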