
OmniTouch: Wearable Multitouch Interaction Everywhere

Presentation Transcript


  1. OmniTouch: Wearable Multitouch Interaction Everywhere. Chris Harrison, Hrvoje Benko, Andrew D. Wilson. Presented by: Nesra Yannier

  2. OmniTouch • Shoulder-worn depth-sensing and projection system • Allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and their own bodies (e.g., hands, lap).

  3. Related Work • Touch-based interactivity on arbitrary projected objects is challenging • Generally requires a fixed, non-tracking projection, calibration of the object, or sensing embedded within the objects.

  4. Related Work: SixthSense • Pico-projectors have enabled a new class of worn, on-body projected interactive systems • Finger tracking was achieved by wearing fingertip markers.

  5. Related Work: Skinput • Skinput uses bio-acoustics to detect finger-tap events on the skin • Main limitations: • No support for any surface other than the user's body • Inability to detect touch-drag movements • No multitouch support • No surface tracking; the arm must stay in a fixed position.

  6. Hardware 1) PrimeSense depth camera: objects as close as 20 cm can be imaged by this sensor; a Kinect, with a minimum sensing distance of ~50 cm, did not work. 2) Pico-projector. 3) The depth camera and projector are mounted to a metal frame, worn on the shoulders, and secured with a chest strap.

  7. Finger Segmentation • Key contribution: no calibration or training of the environment or user • Take a depth map of the scene • Compute the depth derivative in the X- and Y-axes • Iterate over this derivative image, looking for vertical slices of cylinder-like objects (see the sketch below).
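A minimal sketch of this slice search, assuming a NumPy depth map in millimeters; the edge thresholds, the ~570 px focal length, and the `find_finger_slices` name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

FINGER_WIDTH_MM = (5, 25)  # assumed plausible range of finger diameters

def find_finger_slices(depth):
    """Scan each row of a depth map (mm) for short runs that pop out of
    the background: candidate vertical slices of cylinder-like objects."""
    dx = np.gradient(depth.astype(float), axis=1)  # depth derivative along X
    slices = []
    for y in range(depth.shape[0]):
        falls = np.flatnonzero(dx[y] < -20)  # steep edge: surface -> finger
        rises = np.flatnonzero(dx[y] > 20)   # steep edge: finger -> surface
        for x0 in falls:
            later = rises[rises > x0]
            if later.size == 0:
                continue
            x1 = later[0]
            mid = (x0 + x1) // 2
            # pixel width -> physical width, assuming a ~570 px focal length
            width_mm = (x1 - x0) * depth[y, mid] / 570.0
            if FINGER_WIDTH_MM[0] <= width_mm <= FINGER_WIDTH_MM[1]:
                slices.append((y, int(x0), int(x1)))
    return slices
```

Adjacent slices can then be grouped into contiguous finger paths (the "finger path" that the click-detection step on the next slide operates on).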

  8. Finger Click Detection • Compute the midpoint of the finger path • Flood-fill toward the fingertip • When the finger is hovering above a surface or in free space, the flood fill expands to encompass only the finger; when the finger contacts a surface, the fill operation floods out into the connected object (see the sketch below).
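A sketch of that hover-versus-contact test, assuming a depth map in millimeters and a seed pixel `(row, col)` on the finger path; the depth tolerance and the `max_fill` cutoff are assumed values chosen for illustration:

```python
from collections import deque

def is_touching(depth, seed, tol_mm=13, max_fill=2000):
    """Flood-fill from a finger-path point over pixels of similar depth.
    A small fill means the finger is hovering; a fill that spills into
    the underlying surface signals a touch."""
    h, w = depth.shape
    seen = {seed}
    queue = deque([seed])
    while queue:
        if len(seen) > max_fill:
            return True  # fill escaped into the connected surface
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(int(depth[ny, nx]) - int(depth[y, x])) < tol_mm):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return False  # fill stayed within the finger: hovering
```

The key design point is that no absolute distance-to-surface threshold is needed: contact is read from the topology of the fill, which is what lets the technique work without per-surface calibration.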

  9. Surface Segmentation and Tracking • Distinct surfaces are segmented by performing a 3D connected-components operation on the depth map (a rough stand-in is sketched below).
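A rough Python stand-in, assuming SciPy is available: it quantizes depth into bins and labels connected regions per bin, which approximates (but simplifies) a true 3D connected-components pass; the bin size and blob threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_surfaces(depth, depth_step_mm=30, min_px=500):
    """Label large, depth-coherent blobs as candidate projection surfaces."""
    bins = (depth // depth_step_mm).astype(np.int32)
    labels = np.zeros(depth.shape, dtype=np.int32)
    next_label = 1
    for b in np.unique(bins):
        lab, n = ndimage.label(bins == b)  # 4-connected components per bin
        for i in range(1, n + 1):
            blob = lab == i
            if blob.sum() >= min_px:       # keep only projection-sized blobs
                labels[blob] = next_label
                next_label += 1
    return labels
```

Frame-to-frame tracking can then match labeled blobs (e.g., by centroid), giving each surface a stable identity as it moves.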

  10. Defining Interactive Areas: 3 approaches • Use a surface's lock point and orientation to provide an interface that tracks the surface (fixed size, sized to fit the smallest expected surface; see the sketch below) • The system automatically sizes, positions, and tracks an interface to fit the available projection area of that surface • The system lets the user define the interactive area.
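A minimal sketch of the first approach, assuming 2D projector-space coordinates; `place_interface` and its parameters are hypothetical names for illustration:

```python
import numpy as np

def place_interface(lock_point, angle_rad, size_px):
    """Return the four projector-space corners of a fixed-size interface
    anchored at a tracked surface's lock point and in-plane orientation."""
    w, h = size_px
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])      # 2D rotation by the surface angle
    return corners @ rot.T + np.asarray(lock_point, dtype=float)
```

Re-running this each frame with the surface's updated lock point and orientation keeps the projected interface glued to the surface as it moves.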

  11. Applications • Phone keypad application • Full keyboard, potentially allowing for text entry on the go • "Post-it" application: write quick notes on the palm • Ubiquitous map panning and zooming, controlled by finger drags and pinching • Painting application for walls, using the left hand as the color palette.

  12. User Study (12 participants) Finger Click Detection: • 96.5% of trials correctly received exactly one finger click event • 50 trials (0.8%) had no click event (the system missed the participant's finger click) • 154 trials (2.5%) had two click events (the system incorrectly thought the user clicked twice) • 8 trials (0.1%) had three click events.

  13. User Study: Targeting and Dragging • They found an average offset of 11.7 mm to the left of targets across all conditions and participants • OmniTouch on a wall is as accurate as conventional touchscreens • The arm is the least accurate surface: targets must be 70% larger than on a touchscreen to achieve the same 95% touch accuracy • Dragging task: on average, participants deviated from the desired path by just 6.3 mm.

  14. Conclusion • OmniTouch could be the size of a box of matches, worn as a necklace or a watch • Goal: to demonstrate that touch input can be achieved on everyday surfaces, including the human body.

  15. Discussion • Other applications? Is visual feedback necessary for this system? Could it serve visually impaired users? Could it be combined with other modalities, such as audio or haptic feedback? • How would the user save information in this system? For example, when they annotate a document, is it only for that moment, or can they go back to it later? Would the system be able to recognize documents? • The authors mention the concern about how comfortable people would be interacting with their own bodies. Do you think it would be comfortable to use a system like this? If not, how could the system be modified to make it more comfortable to use?
