
Perceptive Context for Pervasive Computing Trevor Darrell Vision Interface Group MIT AI Lab


Presentation Transcript


  1. Perceptive Context for Pervasive Computing. Trevor Darrell, Vision Interface Group, MIT AI Lab

  2. MIT Project Oxygen • A multi-laboratory effort at MIT to develop pervasive, human-centric computing • Enabling people “to do more by doing less,” that is, to accomplish more with less work • Bringing abundant computation and communication, as pervasive as free air, naturally into people’s lives

  3. Human-centered Interfaces • Free users from desktop and wired interfaces • Allow natural gesture and speech commands • Give computers awareness of users • Work in open and noisy environments • Outdoors -- PDA next to construction site! • Indoors -- crowded meeting room • Vision’s role: provide perceptive context

  4. Perceptive Context • Who is there? (presence, identity) • What is going on? (activity) • Where are they? (individual location) • Which person said that? (audiovisual grouping) • What are they looking / pointing at? (pose, gaze)

  5. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  6. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  7. Person Identification at a distance • Multiple cameras • Face and gait cues • Approach: build a canonical frame for each modality by placing a virtual camera at a desired viewpoint • Face: frontal view, fixed scale • Gait: profile silhouette • Placing the virtual camera requires explicit model estimation, search, or a motion-based heuristic from the trajectory • We combine the trajectory estimate with a limited search
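The motion-based heuristic above can be sketched in a few lines: a walking person's ground-plane trajectory gives their heading, and the profile (gait) view axis is perpendicular to it. This is an illustrative sketch of the heuristic only, not the group's actual code.

```python
import numpy as np

def gait_view_direction(trajectory):
    """Heuristic: the subject's heading is estimated from their
    ground-plane trajectory; the profile (gait) virtual camera looks
    at the subject along the perpendicular direction in the plane.

    trajectory: (T, 2) array of ground-plane positions over time.
    """
    heading = trajectory[-1] - trajectory[0]
    heading = heading / np.linalg.norm(heading)
    # rotate 90 degrees in the ground plane to get the profile axis
    return np.array([-heading[1], heading[0]])
```

In the system described on the slide, this cheap estimate would only seed a limited search around the predicted viewpoint rather than fix the virtual camera outright.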

  8. Virtual views (figure: input image, profile silhouette, and frontal face)

  9. Examples: VH (visual hull)-generated views • Faces • Gait

  10. Effects of view-normalization

  11. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  12. Range-based stereo person tracking (figure: intensity, range, foreground, and plan-view images) • Range can be insensitive to fast illumination change • Compare range values to a known background • Project into a 2D overhead (plan) view • Merge data from multiple stereo cameras • Group into trajectories • Examine height for sitting/standing
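The plan-view projection step can be sketched as follows. The pinhole intrinsics and grid parameters here are illustrative assumptions, not values from the system.

```python
import numpy as np

def plan_view_histogram(foreground_depth, fx, fy, cx, cy,
                        cell=0.05, extent=5.0):
    """Back-project foreground depth pixels to 3D and accumulate them
    into a 2D overhead (plan-view) occupancy histogram.

    foreground_depth: HxW depth in meters (0 where background).
    fx, fy, cx, cy: assumed pinhole camera intrinsics.
    cell, extent: grid resolution and half-width in meters.
    """
    v, u = np.nonzero(foreground_depth > 0)
    z = foreground_depth[v, u]
    x = (u - cx) * z / fx          # lateral position in the camera frame
    # height (y) is collapsed for the overhead view; z is depth
    n = int(2 * extent / cell)
    hist, _, _ = np.histogram2d(x, z, bins=n,
                                range=[[-extent, extent], [0, 2 * extent]])
    return hist
```

Merging multiple stereo cameras then amounts to summing their plan-view histograms after registering them in a common world frame.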

  13. Visibility Constraints for Virtual Backgrounds (figure: virtual background for camera C1)

  14. Virtual Background Segmentation (figure panels: sparse background, new image, detected foreground; second view, virtual background for first view, detected foreground)

  15. Points -> trajectories -> active sensing (diagram: spatio-temporal points are grouped into trajectories, which drive activity classification, active camera motion, and the microphone array)

  16. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  17. Audio input in noisy environments • Acquire high-quality audio from untethered, moving speakers • “Virtual” headset microphones for all users

  18. Vision guided microphone array (figure: cameras and microphones)

  19. System flow (single target): video streams feed a vision-based tracker; audio streams feed a delay-and-sum beamformer, refined by gradient ascent search in array output power
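A delay-and-sum beamformer can be sketched as below: each microphone channel is delayed so that sound from the target location adds coherently. This toy version uses integer-sample delays; the names and parameters are illustrative, not the system's actual code.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_positions, target, fs):
    """Steer a microphone array at a 3D target point.

    signals: (n_mics, n_samples) simultaneously recorded channels.
    mic_positions: (n_mics, 3) microphone coordinates in meters.
    target: 3-vector target location (e.g. from the vision tracker).
    fs: sampling rate in Hz.
    """
    dists = np.linalg.norm(mic_positions - target, axis=1)
    # Delay each channel relative to the farthest microphone
    # (integer-sample delays for simplicity; a real beamformer
    # would use fractional-delay interpolation).
    delays = np.round((dists.max() - dists) / SPEED_OF_SOUND * fs).astype(int)
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, d in zip(signals, delays):
        out[d:] += sig[:n - d]
    return out / len(signals)
```

The gradient-ascent refinement on the slide would then perturb `target` locally and keep the position that maximizes the output power of this sum.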

  20. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  21. Audio-visual Analysis • Multi-modal approach to source separation • Exploit joint statistics of image and audio signal • Use non-parametric density estimation • Audio-based image localization • Image-based audio localization • A/V Verification: is this audio and video from the same person?

  22. Audio-visual synchrony detection

  23. AVMI Applications (figure: audio associated with left face, audio associated with right face) • Image localization from audio + image variance (AVMI) • Audio weighting from video (detected face) • New: synchronization detection!

  24. Audio-visual synchrony detection (figure: MI scores 0.68, 0.61, 0.19, 0.20 for matched and mismatched pairs) • Compute confusion matrix for 8 subjects: no errors, no training • Can also be used for audio/visual temporal alignment
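The measurement behind these scores can be illustrated with a simple histogram estimate of mutual information between two scalar time series (say, per-frame audio energy and the intensity of one face region). This is a crude stand-in for the non-parametric density estimation the slides describe, purely to show the idea.

```python
import numpy as np

def mutual_information(a, v, bins=8):
    """Histogram estimate of I(A;V) between two scalar time series.

    a: e.g. per-frame audio energy; v: e.g. mean intensity of an
    image region per frame. Higher values indicate the audio and
    video streams co-vary, i.e. are likely synchronous.
    """
    joint, _, _ = np.histogram2d(a, v, bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)   # marginal over a
    pv = p.sum(axis=0, keepdims=True)   # marginal over v
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pv)[nz])).sum())
```

Synchrony verification then compares this score between the audio track and each candidate face, attributing the speech to the face with the highest mutual information.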

  25. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction

  26. Face pose estimation • rigid motion estimation with long-term drift reduction

  27. Brightness and depth motion constraints (figure: intensity images I_t, I_t+1 and depth images Z_t, Z_t+1; parameter space with y_t = y_t-1)
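The joint use of the two cues can be illustrated with a toy least-squares estimate of a 2D image translation from linearized brightness-constancy and depth-constancy constraints. This is an assumption-laden simplification: the actual tracker estimates full 6-DOF rigid motion, not a translation.

```python
import numpy as np

def translation_from_constraints(I0, I1, Z0, Z1, lam=1.0):
    """Estimate a 2D translation (dx, dy) by least squares on the
    linearized constraints
        Ix*dx + Iy*dy + It = 0   (brightness constancy)
        Zx*dx + Zy*dy + Zt = 0   (depth constancy),
    stacking both sets of per-pixel equations into one system.
    lam weights the depth constraints against the brightness ones.
    """
    Iy, Ix = np.gradient(I0)       # gradients along rows (y), cols (x)
    Zy, Zx = np.gradient(Z0)
    It = I1 - I0
    Zt = Z1 - Z0
    A = np.stack([np.concatenate([Ix.ravel(), lam * Zx.ravel()]),
                  np.concatenate([Iy.ravel(), lam * Zy.ravel()])], axis=1)
    b = -np.concatenate([It.ravel(), lam * Zt.ravel()])
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

Because depth gradients are often informative exactly where brightness is flat (and vice versa), combining both constraint sets stabilizes the estimate.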

  28. New bounded-error tracking algorithm (figure: open-loop vs. closed-loop 2D tracker; influence region) • Track relative to all previous frames that are close in pose space
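The frame-selection idea can be sketched as follows: instead of chaining frame-to-frame motions (which accumulates drift), pick earlier frames whose stored pose lies near the current estimate and register against them too. The distance metric, radius, and data layout here are illustrative assumptions.

```python
import numpy as np

def select_reference_frames(pose_history, current_pose,
                            radius=0.3, max_refs=5):
    """Return indices of previously tracked frames whose pose is
    within `radius` of the current pose estimate, nearest first.

    pose_history: list of (frame_index, 6-vector pose) pairs,
    e.g. (x, y, z, roll, pitch, yaw). Registering against these
    frames, rather than only frame t-1, bounds long-term drift.
    """
    scored = [(np.linalg.norm(np.asarray(p) - current_pose), i)
              for i, p in pose_history]
    scored = [(d, i) for d, i in scored if d < radius]
    scored.sort()
    return [i for _, i in scored[:max_refs]]
```

The "influence region" on the slide corresponds to the radius: only frames inside it contribute constraints to the current pose estimate.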

  29. Closed-loop 3D tracker • Track the user’s head gaze for hands-free pointing

  30. Head-driven cursor • Related projects: Schiele, Kjeldsen, Toyama • Current applications: second pointer, scrolling / focus of attention

  31. Head-driven cursor accuracy:
      Method                        Avg. error (pixels)
      Cylindrical head tracker      25
      2D optical flow head tracker  22.9
      Hybrid tracker                30
      3D head tracker (ours)        7.5
      Eye gaze                      27
      Trackball                     3.7
      Mouse                         1.9
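Driving a cursor from head pose reduces, at its simplest, to mapping yaw and pitch angles onto screen coordinates. The ranges and resolution below are illustrative assumptions; a real system would calibrate them per user.

```python
import numpy as np

def pose_to_cursor(yaw, pitch, width=1280, height=1024,
                   yaw_range=0.5, pitch_range=0.4):
    """Map head yaw/pitch (radians, 0 = facing the screen center)
    to a pixel position, clamped to the screen.

    yaw_range / pitch_range: assumed half-range of head rotation
    that spans the full screen.
    """
    x = (0.5 + yaw / (2 * yaw_range)) * width
    y = (0.5 - pitch / (2 * pitch_range)) * height   # pitch up -> cursor up
    return int(np.clip(x, 0, width - 1)), int(np.clip(y, 0, height - 1))
```

The table above suggests why the 3D tracker matters here: at 7.5 pixels average error, this mapping is usable for selection, whereas 25-30 pixel errors make small targets hard to hit.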

  32. Gaze-aware interface • Drowsy driver detection: head nod and eye blink • Interface agent responds to the gaze of the user • the agent should know when it is being attended to • turn-taking pragmatics • anaphora / object reference • First prototype: E21 interface “SAM” • current experiments with a face tracker on the meeting room table • integrating with wall cameras and hand-gesture interfaces

  33. “Look-to-talk” (figure: subject not looking at SAM, ASR turned off; subject looking at SAM, ASR turned on)
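The gating logic behind look-to-talk can be sketched as a tiny state machine: the recognizer is enabled while the user faces the agent, with a short hold-off so brief glances away do not cut speech off mid-utterance. The hold-off length is an illustrative assumption, not a value from the system.

```python
class LookToTalk:
    """Gate automatic speech recognition (ASR) on gaze.

    update() is called once per video frame with the gaze classifier's
    decision; it returns whether ASR should currently be enabled.
    """

    def __init__(self, hold_frames=15):
        self.hold_frames = hold_frames   # frames to tolerate looking away
        self.frames_away = 0
        self.asr_on = False

    def update(self, looking_at_agent):
        if looking_at_agent:
            self.frames_away = 0
            self.asr_on = True
        else:
            self.frames_away += 1
            if self.frames_away > self.hold_frames:
                self.asr_on = False
        return self.asr_on
```

This keeps the interaction pragmatics simple: attention turns the channel on, sustained inattention turns it off.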

  34. Vision Interface Group Projects • Person Identification at a distance from multiple cameras and multiple cues (face, gait) • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues • Vision guided microphone array • Joint statistical models for audiovisual fusion • Face pose estimation: rigid motion estimation with long-term drift reduction • Conclusion and contact info.

  35. Conclusion: Perceptive Context • Take-home message: vision provides perceptive context to make applications aware of users • So far: detection, ID, head pose, audio enhancement, and synchrony verification • Soon: • gaze: add eye tracking on the pose-stabilized face • pointing: arm gestures for selection and navigation • activity: adapting outdoor activity classification [Grimson and Stauffer] to the indoor domain

  36. Contact: Prof. Trevor Darrell, www.ai.mit.edu/projects/vip • Person identification at a distance from multiple cameras and multiple cues (face, gait): Greg Shakhnarovich • Tracking multiple people in indoor environments with large illumination variation and sparse stereo cues: Neal Checka, Leonid Taycher, David Demirdjian • Vision guided microphone array: Kevin Wilson • Joint statistical models for audiovisual fusion: John Fisher • Face pose estimation, rigid motion estimation with long-term drift reduction: Louis Morency, Alice Oh, Kristen Grauman
