Vision-aided Landmark Routing and Localization Aaron Ballew Aleksandar Kuzmanovic C. C. Lee Shiva Srivastava Nikolay Valtchanov Northwestern University, Evanston IL, USA Dept. of Electrical Engineering and Computer Science July 4th 2012
Indoor GPS • Many facilities publish free floor plans online, e.g. the Hyatt Regency O'Hare
RF-derived approaches • Triangulation • Delay, Angle, RSSI • RF Signatures • Beacons • Impulse response
Use logic • What happens in practice? • A person reports what they see • Take advantage of relationships among identifiable features of a room • Absolute precision is not as important as a person comprehending where they are
Important Definitions • Isovist: The visible area from a location's perspective • Vi,j = {…}, set of coordinates visible from point (i,j) • V'i,j = {…}, set of coordinates invisible from point (i,j) • Feature: An identifiable landmark, e.g. cash register, bathroom, elevator… • Feature Vector: [f1 f2 … fh], where fi ∈ {0,1} • fi == 1, fi is visible • fi == 0, fi is invisible • Region: Subset of coordinates sharing an identical feature vector
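These definitions map directly onto sets. A minimal sketch in Python, assuming each feature's isovist is already known as a set of grid cells; the grid size, feature names, and isovist sets are illustrative placeholders, not from the paper:

```python
# Minimal sketch: feature vectors and regions over a small grid, assuming each
# feature's isovist (set of cells it is visible from) is already known.
# The grid, feature names, and isovist sets below are illustrative only.

GRID = [(x, y) for x in range(6) for y in range(6)]

# Hypothetical isovists: V[f] = set of cells from which feature f is visible.
V = {
    "cash_register": {(x, y) for (x, y) in GRID if x <= 2},
    "elevator":      {(x, y) for (x, y) in GRID if y >= 3},
    "bathroom":      {(x, y) for (x, y) in GRID if x + y <= 4},
}
FEATURES = sorted(V)

def feature_vector(cell):
    """[f1 f2 ... fh]: 1 if the feature is visible from this cell, else 0."""
    return tuple(1 if cell in V[f] else 0 for f in FEATURES)

# Region: all cells sharing an identical feature vector.
regions = {}
for cell in GRID:
    regions.setdefault(feature_vector(cell), []).append(cell)

for vec, cells in sorted(regions.items()):
    print(vec, "->", len(cells), "cells")
```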
A simple example: 3 features • If feature A ∈ Vi,j, then point (i,j) ∈ VA • i.e. if you can see a feature, then that feature can see you. • In general, for all features fp reported visible, and all fq reported invisible: • Location ∈ (∩p Vfp) ∩ (∩q V'fq) • 2^h (or 2^h − 1) locatable regions, for h total features in the environment
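A hedged sketch of the localization rule itself: features reported visible constrain the location to their isovists, and features reported invisible constrain it to the complements. The grid and isovist sets are again placeholders:

```python
# Sketch of the localization rule: intersect the isovists of features reported
# visible with the complements of the isovists of features reported invisible.
# GRID and the isovist sets V are illustrative placeholders.

GRID = {(x, y) for x in range(6) for y in range(6)}
V = {
    "stairs":   {(x, y) for (x, y) in GRID if x <= 2},
    "elevator": {(x, y) for (x, y) in GRID if y >= 3},
    "fountain": {(x, y) for (x, y) in GRID if x + y <= 4},
}

def locate(visible, invisible):
    """Cells consistent with the report: seen features can see you,
    unseen features cannot."""
    location = set(GRID)
    for f in visible:
        location &= V[f]            # (i,j) must lie inside V_f
    for f in invisible:
        location &= GRID - V[f]     # (i,j) must lie inside V'_f
    return location

print(len(locate(visible={"stairs"}, invisible={"elevator", "fountain"})))
```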
4 features • In general, for all features fp reported visible, and all fq reported invisible: • Location ∈ (∩p Vfp) ∩ (∩q V'fq) • 2^h possible vectors, so locatable regions grow in O(2^h) • Corollary: Average region size decreases as h increases • In this example, |regions| = 11 < 2^h − 1 = 15 • Conjecture: For h > 3, there is no 2D arrangement of features that generates all 2^h − 1 locatable regions
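To see the O(2^h) growth, and that not every one of the 2^h vectors is realized in a 2D arrangement, one can brute-force count distinct feature vectors under a toy model. The disk-shaped isovists below are an illustrative stand-in for real room geometry:

```python
# Quick check of the 2^h bound: count distinct feature vectors produced by
# randomly placed disk-shaped isovists on a grid. The disk model is an
# illustrative stand-in, not the paper's environment model.
import random

def count_regions(h, grid_n=60, radius=0.35, seed=0):
    rng = random.Random(seed)
    centers = [(rng.random(), rng.random()) for _ in range(h)]
    vectors = set()
    for i in range(grid_n):
        for j in range(grid_n):
            x, y = (i + 0.5) / grid_n, (j + 0.5) / grid_n
            vec = tuple(int((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2)
                        for cx, cy in centers)
            vectors.add(vec)
    return len(vectors)

for h in range(2, 7):
    print(h, count_regions(h), "of", 2 ** h, "possible vectors")
```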
User error • Mistakes happen • Type I Error: Reporting an invisible feature as visible • Possible, but rare • Example: confusing "stairs" with "escalator." • Type II Error: Reporting a visible feature as invisible • Not only possible, it's probable • Our study revealed ~50% hit-rate on noticing features • Assume positive sightings are trustworthy, and negative sightings are completely untrustworthy • Sacrifice all info gained from unsighted features • All information comes from accumulation of positive sightings
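Under this policy, localization reduces to intersecting only the isovists of positively sighted features; each additional positive report shrinks the candidate area. A small sketch with placeholder isovists:

```python
# Sketch of positive-only accumulation: negative reports are discarded (about
# half of visible features go unnoticed), and the located area is the running
# intersection of the isovists of positively sighted features.
# The isovist sets are illustrative placeholders.

GRID = {(x, y) for x in range(8) for y in range(8)}
V = {
    "escalator":  {(x, y) for (x, y) in GRID if x <= 4},
    "front_desk": {(x, y) for (x, y) in GRID if y <= 5},
    "atrium":     {(x, y) for (x, y) in GRID if x + y >= 4},
}

def accumulate(positive_sightings):
    located = set(GRID)
    for f in positive_sightings:    # each positive report shrinks the area
        located &= V[f]
    return located

reports = ["escalator", "atrium", "front_desk"]
for k in range(1, len(reports) + 1):
    print(k, "sightings ->", len(accumulate(reports[:k])), "candidate cells")
```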
[Plot: Avg. located area vs. h features; curves: Accumulative 2 to 5 reports, Accumulative full range, Accumulative up to 5 reports, Exponential & Perfect] • Important result • The plausible range of operation (2:5 sightings) performs in line with the unlimited range of operation
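A rough Monte Carlo sketch of this kind of curve, under an assumed disk-isovist model and a 0.5 per-feature sighting rate; both are illustrative assumptions, not the paper's measured setup:

```python
# Monte Carlo sketch of average located area vs. number of features h,
# comparing accumulation capped at 5 positive sightings against the full range.
# Disk-shaped isovists and a 0.5 sighting rate are illustrative assumptions.
import random

def avg_area(h, cap=None, trials=200, radius=0.35, p_sight=0.5, samples=400, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        feats = [(rng.random(), rng.random()) for _ in range(h)]
        ux, uy = rng.random(), rng.random()
        visible = [(cx, cy) for cx, cy in feats
                   if (ux - cx) ** 2 + (uy - cy) ** 2 <= radius ** 2]
        sighted = [f for f in visible if rng.random() < p_sight]
        if cap is not None:
            sighted = sighted[:cap]
        # Estimate the area consistent with the sighted features by sampling.
        hits = 0
        for _ in range(samples):
            x, y = rng.random(), rng.random()
            if all((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
                   for cx, cy in sighted):
                hits += 1
        total += hits / samples
    return total / trials

for h in (5, 10, 20, 40):
    print(h, round(avg_area(h, cap=5), 3), round(avg_area(h), 3))
```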
Finding your way • Model environment as a network • nodes (features) and • links (intervisibilities) • At each hop, report what you see • app recommends a new next hop • Only the adjacent subset of features is offered at each hop • Makes the list much smaller • If you don't sight a feature, the link is down • Ineligible as a next hop
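A sketch of the next-hop computation under these rules, using BFS as a stand-in for shortest-path-first routing on an unweighted visibility graph; the graph and feature names are made up:

```python
# Sketch of hop-by-hop routing on a visibility graph: nodes are features,
# links are intervisibilities, and links out of the current node toward
# features the user did not sight are treated as down. Graph is illustrative.
from collections import deque

EDGES = {
    "lobby":     {"elevator", "cafe"},
    "elevator":  {"lobby", "ballroom"},
    "cafe":      {"lobby", "ballroom", "gift_shop"},
    "ballroom":  {"elevator", "cafe", "gift_shop"},
    "gift_shop": {"cafe", "ballroom"},
}

def next_hop(current, destination, sighted):
    """BFS shortest path from current to destination; links out of the current
    node are usable only toward features the user actually sighted."""
    def neighbors(u):
        return EDGES[u] & sighted if u == current else EDGES[u]
    prev, queue, seen = {}, deque([current]), {current}
    while queue:
        u = queue.popleft()
        if u == destination:
            break
        for v in neighbors(u):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    if destination != current and destination not in prev:
        return None                 # no route with the links currently up
    node = destination
    while node != current and prev[node] != current:
        node = prev[node]
    return node if node != current else None

print(next_hop("lobby", "ballroom", sighted={"cafe", "gift_shop", "ballroom"}))
```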
Hop-by-Hop SPF Routing • Measurements of Characteristic Distance and Infinite Paths • The more sensitive behavior is infinite paths, not hop count
Hop-by-Hop SPF Routing • Sharp transition where the graph is connected "almost surely" • The threshold corresponds to p = (ln n)/n
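A quick simulation of that threshold on G(n, p) random graphs; the parameters are illustrative:

```python
# Sketch of the connectivity threshold for a random visibility graph G(n, p):
# the fraction of connected instances rises sharply near p = ln(n)/n.
# Pure-Python simulation; parameters are illustrative.
import math, random

def is_connected(n, p, rng):
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

n, trials, rng = 40, 300, random.Random(0)
threshold = math.log(n) / n
for scale in (0.5, 1.0, 1.5, 2.0):
    p = scale * threshold
    frac = sum(is_connected(n, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.3f} ({scale} x ln(n)/n): connected in {frac:.0%} of trials")
```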
Characteristic dist. and Infinite paths • Example of agreement between simulated random graph and a real graph of a test location • The more sensitive behavior is infinite paths, not hop count • [Figures: Simulated Random Network (h = 40, p = 0.25); Real Test Network (h = 38, p = 0.283)]
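A sketch that computes both quantities for a random graph with roughly the simulated parameters above; the real test network itself is not reproduced here:

```python
# Sketch comparing characteristic distance (mean shortest-path length over
# reachable pairs) with the fraction of "infinite paths" (unreachable pairs)
# in a random graph with roughly the slide's parameters (h = 40, p = 0.25).
# BFS from every node; pure Python.
from collections import deque
import random

def metrics(n, p, seed=0):
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    total_dist = reachable = unreachable = 0
    for s in range(n):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        reachable += len(dist) - 1
        total_dist += sum(dist.values())
        unreachable += n - len(dist)
    pairs = n * (n - 1)
    return total_dist / max(reachable, 1), unreachable / pairs

char_dist, inf_frac = metrics(40, 0.25)
print(f"characteristic distance ~ {char_dist:.2f}, infinite paths ~ {inf_frac:.1%}")
```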
Field Study • Large Hotel/Convention Center (with permission) • h = 38 features • p = 0.283 edge density • 10 volunteers with no prior knowledge • Test 1 – sighting features • Each subject tested from multiple vantage points • Positive sighting rate pv = 0.496, with 90% confidence pv > 0.46 • Test 2 – usability • Wayfinding task from point A to point B • Tracked user experience • Willingness to use the tool • Ability to use the tool • Feedback & suggestions
User/App interaction • Based on your input to Part I • I know where you are • More importantly, I know what you see • Knowing what you see, I can tell you to walk over to it • Application picks the best "next hop" on the way to the destination • Repeat this in a simple way until the user is within L.O.S. of the destination
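A compact sketch of this loop; the graph, the per-location sighting reports, and the greedy next-hop rule are illustrative simplifications of the app's behavior:

```python
# Sketch of the user/app loop: at each step the user reports sighted features,
# the app recommends the sighted feature closest (in hops) to the destination,
# and the loop stops once the destination itself is sighted (within L.O.S.).
# Graph, sightings, and names are illustrative.
from collections import deque

EDGES = {
    "entrance": {"fountain", "stairs"},
    "fountain": {"entrance", "stairs", "cafe"},
    "stairs":   {"entrance", "fountain", "ballroom"},
    "cafe":     {"fountain", "ballroom"},
    "ballroom": {"stairs", "cafe"},
}

def hops_to(src, dst):
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in EDGES[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist.get(dst, float("inf"))

def guide(start, destination, sightings_at):
    current = start
    while destination not in sightings_at[current]:
        sighted = sightings_at[current]        # user reports what they see
        current = min(sighted, key=lambda f: hops_to(f, destination))
        print("walk to", current)
    print("destination is in line of sight from", current)

# Hypothetical per-location sighting reports (user notices ~half the features).
sightings_at = {
    "entrance": {"stairs"},
    "stairs":   {"fountain", "ballroom"},
}
guide("entrance", "ballroom", sightings_at)
```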
Conclusions • More features in the environment give better location precision, even with the same number of sightings • Constraining to 2:5 sightings behaves similarly to the unconstrained case, i.e. plausible tracks feasible • Number of hops is less important than whether you get there • p·pv > (ln n)/n is an indication of a high success rate