
Collaborative Observation and Localization

Explore methods for combining observation data from multiple robots to enhance accuracy and reduce uncertainty in object position estimation. Learn about sensor fusion, Gaussian representations, matrix multiplication, and calibration techniques. Discover how collaborative localization can improve efficiency and robustness in various applications.

Presentation Transcript


  1. Collaborative Observation and Localization Stuff to know for the final project

  2. Motivation • More Complete World view • Reduce Uncertainty • Increase Accuracy • More Robust • Amortize cost of expensive sensors

  3. Collaborative Observation: Distributed Sensor Fusion for Object Position Estimation by Multi-Robot Systems, by Ashley W. Stroupe, Martin C. Martin, and Tucker Balch

  4. Goal • Define a method for combining the data of two observers into a more accurate estimate • Observations are instantaneous, taken from different vantage points • Be efficient enough to work in a fast-paced environment

  5. Related Work • Little prior work on merging data • Some fill in unsensed data • Merging of sequential observations for Kalman filter tracking and occupancy grid updates • Usually based on discretizing the world • Uses historical data

  6. Representation • 2D Gaussian • Mean = center location of the object • Standard deviation = uncertainty

  7. Combining • If we assume the observations are independent, we can multiply their distributions • How to multiply?

  8. Combining • A Gaussian can be represented in matrix form (its covariance) • Use matrix multiplication

  9. Some math
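
For reference, and presumably what this slide derives, the standard product of two Gaussians from Kalman filter theory gives the merged estimate as:

    C = (C1^-1 + C2^-1)^-1
    u = C (C1^-1 u1 + C2^-1 u2)

where u1, u2 are the two observation means and C1, C2 their covariance matrices. The merged covariance is never larger than either input, so fusing observations can only reduce uncertainty.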

  10. Problem • The Gaussians are easily defined relative to each robot • To combine them, they need to be in a global coordinate system • How to convert to canonical form?

  11. Algorithm • Convert to Local Matrix Form • Rotate to Canonical Form • Multiply with other observation • Extract Angle • Rotate to local form • Extract mean center and noise
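
A minimal numpy sketch of these steps, assuming each observation arrives as major/minor-axis standard deviations plus an orientation (the function names are illustrative, not the paper's):

    import numpy as np

    def to_covariance(sigma_major, sigma_minor, theta):
        # Rotate the axis-aligned (canonical) form by theta.
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return R @ np.diag([sigma_major**2, sigma_minor**2]) @ R.T

    def merge(u1, C1, u2, C2):
        # Multiply the two Gaussians (Bayes' rule under independence).
        C = np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2))
        u = C @ (np.linalg.inv(C1) @ u1 + np.linalg.inv(C2) @ u2)
        return u, C

    def extract(C):
        # Extract the angle and axis standard deviations from the result.
        vals, vecs = np.linalg.eigh(C)              # eigenvalues ascending
        theta = np.arctan2(vecs[1, 1], vecs[0, 1])  # major-axis direction
        return np.sqrt(vals[1]), np.sqrt(vals[0]), theta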

  12. More crazy math

  13. Even More Crazy Math

  14. Basic Assumptions • Sensor error is independent and normally distributed • Robots are assumed perfectly localized • Coordinate frames are coincident

  15. Example

  16. Experiment • Three robots • 1.5 meters from the global origin • Placed on the global axes

  17. Experimental Assumptions • Observations are simultaneous • Objects are unique • Objects are on the ground plane

  18. Experimental Hardware • Cye Robots • Trailer and Drive Section • Onboard motor processor • Image Processing on Pentium 266 • Communicate by wireless Ethernet

  19. Error Calibration • Camera calibration • Fisheye – camera aberration • Experiments with known locations • Results used to find Gaussian parameters

  20. Results • Merged results are not more accurate point by point • But they are more consistent

  21. Data

  22. Data

  23. Test Application • Location and retrieval of unseen objects • One robot can see the target, but not reach it • The other robot cannot initially see the target • They must communicate to retrieve the target • The robots retrieve the target. There is much rejoicing.

  24. Test Application • Blind robot tracking • Three robots track a ball • When one is blinded by a box, it is still able to track the ball • Audiences everywhere dumbfounded

  25. Test Application • Robot Soccer • RoboCup soccer • Ignore widely conflicting observations • Ignore localization uncertainty • Result • Used as a starting point when the ball's location is lost • Robots able to find the ball faster

  26. Conclusion • Use Gaussian distributions based on Bayes' Rule and Kalman filter theory to fuse observations • Applicable to any sensor • Need to resolve coordinate frame differences • Need to resolve uncertain localization • Need to remove the ground plane limitation

  27. Collaborative Multi-Robot Localization, by Dieter Fox, Wolfram Burgard, Hannes Kruppa, and Sebastian Thrun

  28. Goal • Be able to do global localization • Have more accurate localization • Amortize the cost of sensors • More efficient localization • Work in a known environment

  29. Related Work • Many look at single-robot localization • Adjust for odometry errors; require an initial position • Use Kalman filters with the second moment of the density, or Markov localization • Lock-step multi-robot localization • Little raw sensor info used

  30. Method • Markov Localization • Detection Models that calculate noise and uncertainty • Statistical Sampling • Density Trees

  31. Some Notation • For N robots, d_n (1 <= n <= N) is a sequence of three types of data • Odometry measurements – a_n • Environment measurements – o_n • Detections – r_n

  32. Markov Localization • Each robot maintains a belief distribution over its position • Uses dead reckoning and environment sensing; detections are not considered initially • Belief at time 0 is initialized • Bel_n^t(L) is the belief of robot n about its location at time t

  33. Math Derivations • How to calculate belief given environment data
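
This is presumably the standard Markov localization measurement update: on receiving an environment measurement o, the robot reweights its belief by the sensor model and renormalizes:

    Bel_n(L) <- alpha * P(o | L) * Bel_n(L)

where alpha is a normalizing constant.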

  34. More Math Derivations • How to update the belief given odometry info
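
Likewise, the odometry step is presumably the standard convolution with the motion model: after executing action a,

    Bel_n(L) <- integral over L' of P(L | L', a) * Bel_n(L')

which sums the probability of every prior pose L' that could have led to L.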

  35. More Math • To add in the detection of other robots • Assume P(L_1, ..., L_N | d) = P(L_1 | d) * ... * P(L_N | d) • Not completely accurate, but works in practice and is much easier

  36. More Math Derivations • How to calculate the effect of another robot's belief about my location
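
In a form consistent with the factorization on slide 35: when robot m detects robot n with detection r_m, robot n folds robot m's belief into its own, roughly

    Bel_n(L) <- Bel_n(L) * integral over L_m of P(L | L_m, r_m) * Bel_m(L_m)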

  37. Limitations of the Math • Does not use negative sightings • Repeatedly using another robot's belief would count the same evidence multiple times • Only report a detection after significant movement since the last detection

  38. How to Represent • Robot positions are continuous-valued • How to model belief over all these positions? • Monte Carlo Localization • Sample representation • Sample/importance resampling to propagate beliefs

  39. Monte Carlo Localization • A set S of K samples • S = {s_i | i = 1..K} • s_i = <l_i, p_i> • l_i = <x, y, angle> • p_i = numerical weighting factor • Sum of all p_i in S = 1

  40. Monte Carlo Localization • Robot moves • Generate K new samples • Draw a sample s' randomly from S according to p_s' • l_s is generated from l_s' using P(l_s | l_s', a) • p_s = 1/K

  41. Monte Carlo Localization • For each new sample s: p_s = P(o_s | l_s) • All p are then normalized to sum to 1 • A small number of uniform random samples is added to overcome local minima
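
A compact Python sketch of one full update cycle; motion_model, sensor_model, and sample_uniform are assumed to be user-supplied functions, and the random-sample fraction is an illustrative choice:

    import numpy as np

    def mcl_step(samples, weights, a, o, motion_model, sensor_model,
                 sample_uniform, random_frac=0.02):
        # samples: (K, 3) array of poses <x, y, angle>; weights sum to 1.
        K = len(samples)

        # Robot moves: draw K samples by weight, push each through the
        # motion model P(l | l', a).
        idx = np.random.choice(K, size=K, p=weights)
        moved = np.array([motion_model(samples[i], a) for i in idx])

        # A small number of uniform random samples overcomes local minima.
        n_rand = max(1, int(random_frac * K))
        moved[:n_rand] = [sample_uniform() for _ in range(n_rand)]

        # Reweight by the observation likelihood, then normalize to sum to 1.
        w = np.array([sensor_model(o, l) for l in moved], dtype=float)
        return moved, w / w.sum()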

  42. Monte Carlo Localization • Trade-off in sample size between computational efficiency and accuracy • Focuses resources on high-likelihood regions

  43. Example

  44. Multi-Robot MCL • Both robots keep sample sets • How to relate these two sample sets when combining detection beliefs? • Density trees

  45. Density Tree • Adaptively subdivides space into rectangular regions • Start with one giant rectangle • If a rectangle fails the stopping condition, it is split along its long side • Continues recursively, as sketched below
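
A minimal recursive sketch of that subdivision; the stopping condition used here (too little probability mass, or a rectangle that is already small) is an assumed stand-in for the paper's:

    import numpy as np

    def build_tree(samples, weights, lo, hi, min_mass=0.01, min_side=0.25):
        # lo, hi: per-axis bound arrays of the current rectangle.
        size = hi - lo
        mass = weights.sum()
        if mass < min_mass or size.max() < min_side:   # stopping condition
            return {"lo": lo, "hi": hi, "density": mass / np.prod(size)}
        # Split along the long side at its midpoint, then recurse.
        axis = int(np.argmax(size))
        mid = (lo[axis] + hi[axis]) / 2.0
        left = samples[:, axis] < mid
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[axis] = mid
        lo_right[axis] = mid
        return {"axis": axis, "mid": mid,
                "left": build_tree(samples[left], weights[left],
                                   lo, hi_left, min_mass, min_side),
                "right": build_tree(samples[~left], weights[~left],
                                    lo_right, hi, min_mass, min_side)}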

  46. Density Tree • Example

  47. Density Tree • The density of each rectangle is defined as • Density = (sum of all p in the rectangle) / (volume of the rectangle) • This density is used to derive the update rule below
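
The update itself mirrors the detection equation from slide 36, with the tree standing in for the other robot's sample set: each of robot n's sample weights is scaled by the detection-propagated density of robot m's tree at that sample's pose, roughly

    p_i <- alpha * p_i * density_m(l_i)

where alpha renormalizes the weights to sum to 1.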

  48. Detection Algorithm • Use the camera and vision processing to detect the presence of a robot • Scan the laser data to find a convex structure • Use the laser scan at the center of the convex shape to find distance and angle

  49. Detection Model • Detection Function

  50. Learn Detection Error • Ran a training period to learn P(L_m | L_n, r_n) • When checking for a robot, record the reported location compared with the actual location • Actual location determined by MCL • Data used to compute a Gaussian • 6.7% chance of a missed detection • 3.5% chance of an incorrect detection
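
A short sketch of how such Gaussian parameters could be estimated from logged detections (the file name and the two-column x/y error format are assumptions):

    import numpy as np

    # Each row: (reported - actual) detection error in x and y.
    errors = np.loadtxt("detection_errors.txt")
    bias = errors.mean(axis=0)          # systematic offset (Gaussian mean)
    cov = np.cov(errors, rowvar=False)  # Gaussian covariance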
