Co-evolution of Future AR Mobile Platforms EU funded FP7: Oct 11 – Sep 14 Paul Chippendale, Bruno Kessler Foundation FBK, Italy
Move away from the Augmented Keyhole • User centric, not device centric • HMDs lock displays to the viewer • But what about handheld displays?
Device-World registration • What is the device’s real-world location? • Which direction is it pointing?
Device-World registration • What is the device’s real-world location? GPS, Cell/WiFi tower triangulation (~10m)
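The ~10 m accuracy figure invites a small illustration: when independent position estimates (e.g. GPS and Wi-Fi triangulation) are available, they can be combined by inverse-variance weighting, and the fused fix is tighter than either input. This is a minimal sketch, not project code; the accuracy values (10 m GPS, 30 m Wi-Fi) and the `fuse_fixes` helper are illustrative, and averaging lat/lon directly is only reasonable over such short distances.

```python
# Illustrative sketch: inverse-variance fusion of two position fixes.
# Accuracies (sigma, in metres) are made-up example values.

def fuse_fixes(fix_a, sigma_a, fix_b, sigma_b):
    """Combine two (lat, lon) fixes, weighting each by 1/sigma^2."""
    wa, wb = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    lat = (wa * fix_a[0] + wb * fix_b[0]) / (wa + wb)
    lon = (wa * fix_a[1] + wb * fix_b[1]) / (wa + wb)
    sigma = (1.0 / (wa + wb)) ** 0.5      # fused standard deviation
    return (lat, lon), sigma

gps  = (46.0674, 11.1210)                 # approximate FBK, Trento
wifi = (46.0676, 11.1214)
fused, sigma = fuse_fixes(gps, 10.0, wifi, 30.0)
print(fused, sigma)                       # fused sigma < 10 m
```

Note that the fused uncertainty is always below that of the best individual sensor, which is why even a coarse Wi-Fi fix is worth fusing with GPS.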
Device-World registration • Which direction is it pointing? Magnetometer, gyroscopes, accelerometers (~5-20°) • MEMS production variability • Sensors age • Soft/hard-iron influences vary across devices, environments and camera pose
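To make the hard-iron point above concrete: a constant magnetic bias from the device itself shifts the whole sphere of magnetometer readings, and the midpoint of the per-axis extremes recovers that shift once the device has been rotated through enough poses. This is a hedged sketch of the standard idea, not the project's calibration; soft-iron effects deform the sphere into an ellipsoid and would need a full ellipsoid fit.

```python
# Illustrative sketch: hard-iron offset estimation from magnetometer
# samples gathered while the device is waved through many orientations.
import math

def hard_iron_offset(samples):
    """samples: list of (mx, my, mz) readings; returns per-axis bias."""
    offset = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        offset.append((min(vals) + max(vals)) / 2.0)  # sphere midpoint
    return tuple(offset)

# Synthetic check: a unit sphere of readings shifted by a known bias.
bias = (12.0, -5.0, 3.0)
samples = []
for i in range(100):
    theta = 2 * math.pi * i / 100
    for phi in (0.3, 1.2, math.pi - 1.2, math.pi - 0.3):
        x = math.sin(phi) * math.cos(theta)
        y = math.sin(phi) * math.sin(theta)
        z = math.cos(phi)
        samples.append((x + bias[0], y + bias[1], z + bias[2]))
print(hard_iron_offset(samples))   # recovers roughly (12.0, -5.0, 3.0)
```

The per-device, per-environment variability mentioned in the slide is exactly why such a calibration cannot be baked in once at the factory.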
But what about hand-held AR? • The device becomes an augmented window
User-Device-World registration • What is the device’s real-world location? • Which direction is it pointing? • Where is the user with respect to the screen?
Just wait for better AR devices! Surely if we wait, sensor errors will disappear? Unlikely! • Sensor errors are tolerable for non-AR applications, so handset manufacturers focus on price, power and form-factor Can’t we just model the error in software? Not really! • Platform diversity and swift evolution make error modelling expensive and quickly obsolete
So what can we do? • The AR community should work with handset manufacturers and make recommendations • Use computer vision to work alongside the sensors
VENTURI project... • Match AR requirements to the platform • Efficiently exploit CPUs & GPUs • Improve sensor-camera fusion by creating a common clock (traditionally only audio/video are considered) • Apply smart power-management policies • Optimize the AR chain by exploiting both on-board and cloud processing/storage
Seeing the world • Improve device-world pose by: • Matching visual features to 3D models of the world • Matching the camera feed to the visual appearance of the world • Fusing camera and sensors for ambiguity reasoning and tracking • Use the front-facing camera to estimate user-device pose via face tracking
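The camera-sensor fusion bullet can be sketched with the classic complementary-filter idea: integrate the gyroscope for smooth, high-rate heading (which drifts), and nudge the result toward occasional vision-derived absolute headings (accurate but low-rate). The gain, rates and bias below are illustrative assumptions, not VENTURI parameters.

```python
# Illustrative sketch: complementary filter fusing gyro integration
# with sparse vision-derived absolute headings.

def fuse_heading(gyro_rates, vision_fixes, dt=0.01, gain=0.2):
    """gyro_rates: deg/s per tick; vision_fixes: {tick: absolute deg}."""
    heading = 0.0
    for tick, rate in enumerate(gyro_rates):
        heading += rate * dt                    # integrate the gyro
        if tick in vision_fixes:                # nudge toward vision fix
            heading += gain * (vision_fixes[tick] - heading)
    return heading

# Simulated 10 s turn at 10 deg/s with a +1 deg/s gyro bias; a vision
# fix of the true heading arrives once per second (every 100 ticks).
dt, true_rate, bias = 0.01, 10.0, 1.0
gyro = [true_rate + bias] * 1000
vision = {t: true_rate * dt * (t + 1) for t in range(99, 1000, 100)}
fused = fuse_heading(gyro, vision, dt)
print(abs(fused - 100.0))   # residual error well under the raw 10 deg drift
```

The same structure extends to full orientation; the point is that vision bounds the drift that no amount of sensor-only modelling can remove.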
Urban 3D Model matching • Use high-resolution building models (e.g. laser-scanned), globally registered to a geo-referenced coordinate system • Use marker-less 3D tracking to correlate distinctive camera features with the 3D building models; subsequent tracking uses inertial sensors and visual optical flow
Terrain 3D Model matching • A synthetic model of the world is rendered from Digital Elevation Models; salient features from the camera feed (depth discontinuities) are matched to similar synthetic features.
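One simple way to realise this matching, sketched here under the assumption that the depth discontinuities reduce to a horizon-elevation profile (one sample per degree of azimuth): slide the camera's profile along the full 360-degree DEM-rendered skyline and keep the shift with the smallest disagreement. The profiles below are synthetic stand-ins.

```python
# Illustrative sketch: heading recovery by skyline profile alignment.
import random

def best_heading(camera_profile, dem_profile):
    """Return the heading (deg) where the camera profile fits best."""
    n = len(dem_profile)
    best_shift, best_cost = 0, float("inf")
    for shift in range(n):                       # try each candidate heading
        cost = sum(abs(c - dem_profile[(i + shift) % n])
                   for i, c in enumerate(camera_profile))
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

random.seed(7)
dem = [random.uniform(0.0, 15.0) for _ in range(360)]  # synthetic skyline
camera = dem[250:310]               # a noise-free 60-degree field of view
print(best_heading(camera, dem))    # → 250
```

With real profiles the cost would tolerate noise and partial occlusion, and the sensor-derived orientation would limit the search to a narrow window of shifts.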
Appearance matching • Use the approximate location to gather nearby images from the cloud • Exploit sensor data to provide an initial guess for orientation alignment • Computer-vision algorithms match feature descriptors from the camera feed to similar features in the cloud images
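Descriptor matching of this kind is commonly done with binary descriptors compared by Hamming distance, keeping only matches that pass Lowe's ratio test (best distance clearly smaller than the second-best). The sketch below uses toy 8-bit descriptors; real systems use 256-bit descriptors such as ORB's, and this is an assumption about a typical pipeline, not the project's exact one.

```python
# Illustrative sketch: binary descriptor matching with a ratio test.

def hamming(a, b):
    """Hamming distance between two integer-coded binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, database, ratio=0.8):
    """Return (query_idx, db_idx) pairs passing Lowe's ratio test."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, d), di) for di, d in enumerate(database))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:          # unambiguous best match
            matches.append((qi, best[1]))
    return matches

query = [0b10110010, 0b01011100]                 # camera-frame descriptors
db    = [0b10110011, 0b11100001, 0b01011000]     # cloud-image descriptors
print(match_descriptors(query, db))              # → [(0, 0), (1, 2)]
```

The ratio test is what discards ambiguous matches on repetitive facades, which is exactly where urban appearance matching tends to fail.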
SLAM + Matching • Simultaneous Localization And Mapping - build a map of an unknown environment while simultaneously navigating that environment using the map. • The mapped environment has no real-world scale or absolute geo-coordinates; exploit the prior approaches to complete registration.
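Completing the registration means aligning the unscaled SLAM trajectory to geo-referenced fixes. A minimal 2D sketch of that alignment, assuming rotation has already been resolved (e.g. by the compass) so only scale and translation remain, solved by least squares over a few matched points:

```python
# Illustrative sketch: least-squares scale + translation aligning a
# SLAM trajectory (arbitrary units) to world coordinates (metres).

def fit_scale_translation(slam_pts, world_pts):
    n = len(slam_pts)
    mean = lambda pts, k: sum(p[k] for p in pts) / n
    sx, sy = mean(slam_pts, 0), mean(slam_pts, 1)
    wx, wy = mean(world_pts, 0), mean(world_pts, 1)
    num = sum((p[0]-sx)*(q[0]-wx) + (p[1]-sy)*(q[1]-wy)
              for p, q in zip(slam_pts, world_pts))
    den = sum((p[0]-sx)**2 + (p[1]-sy)**2 for p in slam_pts)
    s = num / den                                 # least-squares scale
    t = (wx - s*sx, wy - s*sy)                    # translation
    return s, t

slam  = [(0, 0), (1, 0), (1, 1), (2, 1)]          # SLAM poses
world = [(10, 20), (12, 20), (12, 22), (14, 22)]  # matched world fixes
s, t = fit_scale_translation(slam, world)
print(s, t)   # → 2.0 (10.0, 20.0)
```

The full similarity transform including rotation is the classical Umeyama alignment; the two-parameter version above just shows why a handful of GPS fixes suffices to geo-register an entire SLAM map.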
Mobile context understanding • User/environment context estimation: • PDR (pedestrian dead reckoning) enriched with vision • User activity modelling • Sensing geo-objects • Harvest/create geo-social content
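The PDR bullet can be illustrated in its simplest form: count steps from accelerometer-magnitude peaks, then advance the position by a fixed stride along the current heading. The stride length and threshold below are illustrative assumptions; vision enrichment would correct the heading and stride that this bare version takes on faith.

```python
# Illustrative sketch: pedestrian dead reckoning from step detection.
import math

def pdr(accel_mags, headings, stride=0.7, threshold=11.0):
    """accel_mags: |a| (m/s^2) per sample; headings: deg; returns (x, y)."""
    x = y = 0.0
    above = False
    for mag, hdg in zip(accel_mags, headings):
        if mag > threshold and not above:         # rising edge = one step
            x += stride * math.sin(math.radians(hdg))
            y += stride * math.cos(math.radians(hdg))
        above = mag > threshold
    return x, y

# Four simulated steps heading due north (0 deg).
mags = [9.8, 12.0, 9.8, 12.5, 9.8, 12.1, 9.8, 12.3, 9.8]
pos = pdr(mags, [0.0] * len(mags))
print(pos)   # roughly (0.0, 2.8): four 0.7 m steps north
```

Heading error is the dominant failure mode here, which is why the slide pairs PDR with vision rather than leaving it to the magnetometer alone.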
Context sensitive AR delivery • Inject AR data in a natural manner according to: • environment • occlusions • lighting and shadows • user activity • Exploit user and environment ‘context’ to select best delivery modality (text, graphics, audio, etc.), i.e. scalable/simplify-able audio-visual content
User Interactions • Explore evolving AR delivery and interaction • In-air interfaces: device, hand and face tracking • 3D audio • Pico-projection for multi-user, social AR • HMDs
Prototypes One consolidated prototype at the end of each year, to be evaluated through use-cases • Gaming - VeDi 1.0 • Blind assistant - VeDi 2.0 • Tourism - VeDi 3.0 • Constraints progressively relaxed
VeDi 1.0 Objective: Stimulate software and hardware cross-partner integration and showcase state-of-the-art indoor AR registration Scenario: A multi-player, table-top AR game resembling a miniature city. Players must accomplish a set of AR missions in the city that adhere to physical constraints. Software: Sensor-aided marker-less 3D feature tracking. The city is geometrically reconstructed offline, enabling correct occlusion handling and model registration. Hardware: Demo runs on an experimental ST-Ericsson prototype mobile platform.
“creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way” https://venturi.fbk.eu FP7-ICT-2011-1.5 Networked Media and Search Systems End-to-end Immersive and Interactive Media Technologies Co-ordinated by Paul Chippendale, Fondazione Bruno Kessler