Madhumita Behera Susanne Edevåg Weiwei Liu Magnus Lorentzon Joel Sandlund
When I address a system, how does it know I am addressing it? When I ask a system to do something, how do I know it is attending? When I issue a command, how does the system know what it relates to? How do I know the system understands my command and is correctly executing my intended action? How do I recover from mistakes?
The Topic of the Paper When designing traditional GUIs, solutions to common interaction issues are found by copying existing ideas and using standardized toolkits. In ubiquitous computing, there are seldom any ready-made solutions to these kinds of problems. Inspired by HHI, the paper presents a framework for addressing common design challenges in ”sensing systems”.
The Topic of the Paper • In addressing these issues, the paper explores the possibility of applying theories of human-human interaction (HHI) to the science of HCI: • Signals used to.. • address each other • ignore each other • show intention to initiate conversation • show availability for communication • show understanding of what has been said • …
Five Interaction Challenges • Address: Directing communication to a system • Attention: Establishing that the system is attending • Action: Defining what is to be done with the system • Alignment: Monitoring system response • Accident: Avoiding or recovering from errors or misunderstandings.
1. Address How do I address one (or more) of many possible devices? The GUI solution: Keyboard, mouse, etc.
1. Address In ubiquitous computing: Analyses of HHI show that humans use a formidable array of non-verbal mechanisms to accomplish, or avoid, addressing people. This poses a problem in UbiComp when using ambient modes of input, such as gestures. - How to distinguish an intended signal from noise? - How to disambiguate the intended target system? - How to avoid addressing the system unintentionally?
1. Address Three example projects
1. Address The Listen Reader An interactive children’s storybook with a soundtrack that the reader plays by sweeping hands over the pages. Embedded RFID tags sense which page is open, and capacitive field sensors measure human proximity to the pages.
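The Listen Reader's address logic can be pictured as combining the two sensor channels: play a page's sound only when the RFID reading identifies a known page and the proximity value indicates a nearby hand. The following is a minimal illustrative sketch; the function names, tag IDs, clip names, and threshold are assumptions for illustration, not details from the paper.

```python
from typing import Optional

# Illustrative threshold: normalized proximity value above which a hand
# is considered "near" the page (an assumed value, not from the paper).
PROXIMITY_THRESHOLD = 0.6

# Hypothetical mapping from embedded RFID tag IDs to soundtrack clips.
PAGE_SOUNDTRACKS = {
    "tag-01": "forest_ambience.wav",
    "tag-02": "storm_theme.wav",
}

def soundtrack_for(rfid_tag: str, proximity: float) -> Optional[str]:
    """Return the clip to play, or None if the reader is not being addressed."""
    if rfid_tag not in PAGE_SOUNDTRACKS:
        return None  # unknown page: sensing failed, so stay silent
    if proximity < PROXIMITY_THRESHOLD:
        return None  # no hand near the page: avoid accidental address
    return PAGE_SOUNDTRACKS[rfid_tag]
```

In this sketch, gating on both channels at once is what keeps a page that merely lies open from playing sound on its own.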
1. Address Augmented Objects Uses sensors and RFID tags. Wave at pickup sensors to initiate an action. Digital Voices A computer-to-computer interaction mechanism that uses audible sound as the communication medium. A user can address a suitably equipped system using another Digital Voices enabled device, as long as the devices can ”hear” one another. Moreover, the user hears the communication as it occurs.
1. Address Problems Failure to communicate with the system if the sensing fails for any reason. Avoiding unintended communication with devices the user does not want to interact with. Simply getting too close can lead to accidental address, which can be dangerous: e.g. a voice-activated car phone triggered accidentally could compete for a driver’s attention, with serious consequences. However, auditory feedback from Digital Voices informs the user which devices are responding and helps them decide whether the response is appropriate.
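One common way to reduce accidental address like the car-phone case is to require the addressing signal to persist across several consecutive sensor readings before the system commits. The sketch below is an assumed debouncing mechanism for illustration, not a technique described in the paper; the class name and default threshold are invented.

```python
class DebouncedAddress:
    """Treat the user as addressing the system only after
    `required` consecutive positive sensor readings."""

    def __init__(self, required: int = 3):
        self.required = required  # consecutive hits needed to confirm
        self.streak = 0           # current run of positive readings

    def update(self, detected: bool) -> bool:
        """Feed one sensor reading; return True once address is confirmed."""
        self.streak = self.streak + 1 if detected else 0
        return self.streak >= self.required
```

A brief glance toward the device then passes unnoticed, while a sustained signal gets through; the cost is a small delay before the system responds.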
2. Attention Is the system ready and attending to my actions? • The GUI solution: • Flashing cursor • Cursor moves when the mouse is moved • Watch icon
2. Attention Design Challenges How to embody appropriate feedback to show the system’s attention? How to direct feedback to the user’s zone of attention?
2. Attention Problems Limited operations, i.e. the system fails to draw the user’s attention. Failure to execute an action, i.e. there is no visible mode inviting the user to interact. Wrong response.
2. Attention Example: Conference Assistant The system uses sensing technology to identify a user and to supply information about the user, such as location and session arrival and departure times. It supplies this information to other conference attendees. The system always attends to any user in range, but gives no feedback to tell users that their actions are being monitored.
2. Attention Example: Audio-Video Media Space Monitors placed next to cameras in public places tell inhabitants that they are on camera. By seeing their own images, people can tell that the system is attending to them.
3. Action How do I effect a meaningful action, control its extent, and possibly specify a target or targets for my action? The GUI solution Direct manipulation, menu selection, accelerator keys
3. Action Exposed challenges How to identify and select a possible object for action? How to identify and select an action and bind it to the object(s)? How to avoid unwanted selection? How to handle complex operations?
3. Action Example: Sensor Chair A gesture-based sound control system from the MIT Media Lab’s ”Brain Opera”, illustrating multiple actions on multiple objects.
3. Action Example: SenseTable An augmented system from the MIT Media Lab supporting dynamic binding and unbinding of actions to objects, illustrating the use of physical traits of objects for multiple actions and objects.
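The dynamic binding idea can be sketched as a small registry in which physical tokens, identified by ID, are bound to and unbound from actions at runtime. This is a hedged illustration in the spirit of SenseTable; the class, method names, and token IDs are assumptions, not SenseTable's actual implementation.

```python
class BindingTable:
    """Runtime registry mapping physical object IDs to actions."""

    def __init__(self):
        self._bindings = {}  # object ID -> callable action

    def bind(self, obj_id, action):
        """Bind (or rebind) an action to a physical object."""
        self._bindings[obj_id] = action

    def unbind(self, obj_id):
        """Remove the object's binding, if any."""
        self._bindings.pop(obj_id, None)

    def trigger(self, obj_id, *args):
        """Run the action currently bound to the object; None if unbound."""
        action = self._bindings.get(obj_id)
        return action(*args) if action else None
```

For example, a puck could be bound to a volume control (`table.bind("puck-1", set_volume)`), manipulated, then unbound and rebound to a different parameter, which is precisely the action-to-object binding problem the slide raises.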
4. Alignment How do I know that the system is doing the right thing? The GUI solution Echoing input text and formatting, wire-frame outlines, progress bars, highlighting changes in a document, listing sent messages, and so on.
4. Alignment Challenges Q: How to make system state perceivable? A: Graphical and audio information in the space of action. Q: How to provide timely and correct feedback? A: Slow the machine’s time frame down to match people’s time frame.
5. Accident How do I avoid mistakes? The GUI solution Word processors offer spelling correction, print jobs have cancel buttons, and saving over an old file triggers a confirmation pop-up.
5. Accident Challenges Q: How do I stop or cancel a system action in progress? Q: How do I correct mistakes? They have to be visible in time, before it is too late. No good general solution exists.
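One commonly used partial answer, in the spirit of a print job's cancel button, is to defer committing an action so the user has a window in which to stop it. The sketch below is an assumed design for illustration; it is not a solution proposed in the paper.

```python
class CancellableAction:
    """An action queued with a grace period: it only takes effect
    when commit() is called, and only if not cancelled first."""

    def __init__(self, effect):
        self.effect = effect    # callable performing the real action
        self.cancelled = False
        self.committed = False

    def cancel(self):
        """Stop the action; only possible before it has committed."""
        if not self.committed:
            self.cancelled = True

    def commit(self):
        """Run the effect unless the user cancelled in time."""
        if not self.cancelled:
            self.effect()
            self.committed = True
        return self.committed
```

For this to help, the pending action must also be made visible during the grace period, which is exactly the challenge the slide states: mistakes can only be corrected while they can still be seen.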
Questions Can you see any of these problems in your projects? If so, how do you plan to solve them?