
RoboCup: An Application Domain for Distributed Planning and Sensoring in Multi-robot Systems


Presentation Transcript


  1. Enrico Pagello, President of the International IAS-Society. RoboCup: An Application Domain for Distributed Planning and Sensoring in Multi-robot Systems. IAS-Lab, Intelligent Autonomous Systems, The University of Padua

  2. Presentation Outline • What a Cooperative Multi-Robot System should be • T. Arai, E. Pagello, L. Parker: Editorial: Advances in Multi-Robot Systems. IEEE Trans. on Robotics and Automation, Vol. 18, No. 5, pp. 655-661, October 2002 • Scientific perspective in RoboCup with respect to Cooperation • Research on RoboCup at IAS-Lab, The University of Padua • Distributed Sensoring: An Omnidirectional Distributed Vision Sensor • E. Menegatti, A. Scarpa, D. Massarin, E. Ros, E. Pagello: Omnidirectional Distributed Vision System for a Team of Heterogeneous Robots. Proc. of the IEEE Workshop on Omnidirectional Vision (Omnivis'03), Prague, June 2003 • E. Menegatti, A. Pretto, and E. Pagello: Testing Omnidirectional Vision-based Monte-Carlo Localization under Occlusion. Proc. of IROS-2004, Sendai (Japan), Sept. 29 - Oct. 2, 2004 • Cooperative Robotics: A Hybrid Architecture for an MSL Team • A. D'Angelo, E. Menegatti, and E. Pagello: How a Cooperative Behavior Can Emerge from a Robot Team. Proc. of DARS'04, Toulouse, June 2004

  3. Why have Multi-Robot Systems (MRS) been so successful? • In challenging application domains, MRS can often deal with tasks that are difficult, if not impossible, for an individual robot to accomplish. • A team of robots may provide redundancy and contribute cooperatively to solving the assigned task, or it may perform the assigned task more reliably, faster, or more cheaply than a single robot could.

  4. What is a Cooperative Multi-Robot System? • The Cooperative Robotics research field is so new that no topic can be considered mature • Early research goes back to: • Cellular Robotics by [Fukuda, IECON 1987] and Cyclic Swarm by [Beni, Intelligent Control 1988] • Multi-Robot Motion Planning by [Arai, IROS 1989] • The ACTRESS Architecture by [Asama, IROS 1989] • [Dudek, Autonomous Robots 1996] and [Cao, Autonomous Robots 1997] gave a taxonomy • In [Arai, Pagello, & Parker, IEEE Trans. 2002] we identify several primary research areas

  5. Research roots for Cooperative Multi-Robot Systems • Cooperative mobile robotics research began after the introduction of the behavior-based control paradigm • Brooks 1986, Arkin 1990 • Since the behavior-based paradigm is rooted in biological inspiration, many researchers found it instructive to examine the social characteristics of insects and animals • The most common approach applies the simple local control rules of biological societies, such as ants, bees, and birds, to obtain similar behaviors in MRS (see the sketch after this slide) • MRS can thus flock, disperse, aggregate, forage, follow trails, etc.
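To make the idea of simple local control rules concrete, here is a minimal sketch (not part of the original slides; the function name, rule set, and weights are illustrative assumptions) of how flocking-like motion can emerge when every robot reacts only to neighbors within its sensing radius.

```python
import numpy as np

def flocking_step(positions, velocities, radius=2.0, dt=0.1,
                  w_sep=1.5, w_coh=0.5, w_align=0.5, max_speed=1.0):
    """One synchronous update of a boids-like flock.

    positions, velocities: (N, 2) arrays, one row per robot.
    Each robot only looks at neighbors within `radius` (purely local sensing).
    """
    new_vel = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < radius)          # neighbors only, not itself
        if not mask.any():
            continue
        separation = -(offsets[mask] / dists[mask, None]**2).sum(axis=0)  # move away from close neighbors
        cohesion   = offsets[mask].mean(axis=0)        # move toward the local centroid
        alignment  = velocities[mask].mean(axis=0) - velocities[i]        # match neighbors' velocity
        new_vel[i] += dt * (w_sep * separation + w_coh * cohesion + w_align * alignment)
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                          # saturate the actuators
            new_vel[i] *= max_speed / speed
    return positions + dt * new_vel, new_vel
```

Changing the rule weights turns the same scheme into dispersion (separation only) or aggregation (cohesion only), which is exactly the flexibility the biologically inspired approach exploits.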

  6. New and interesting research issues • The dynamics of ecosystems has been applied to MRS to demonstrate Emergent Cooperation • Cooperation in higher animals, such as wolf packs, has generated significant study of Predator-Prey Systems • Pursuit policies relate expected capture times to the speed and intelligence of the evaders and to the sensing capabilities of the pursuers • Competition in MRS, analogous to competition among higher animals including humans, is being studied in domains such as multi-robot soccer.

  7. Inherently cooperative tasks • A particularly challenging domain for MRS is the one whose tasks are inherently cooperative, that is, tasks in which the utility of the action of one robot depends on the current actions of its teammates • Inherently cooperative tasks cannot be decomposed into independent sub-tasks to be solved by a DARS • Team success throughout task execution is measured by the combined actions of the robot team, rather than by individual actions • More recently identified biological topics of relevance are: • Imitation in higher animals to learn new behaviors • Physical interconnectivity in insects such as ants, enabling collective navigation over challenging terrain • How to maintain communication in a distributed animal society

  8. Communication versus Cooperation • The communication issue in MRS has been studied since the inception of Distributed Autonomous Robotic Systems (DARS) research. • A distinction between Implicit and Explicit Communication is usually made: • Implicit communication occurs as a side-effect of other actions, or "through the world" • Explicit communication is a specific act designed solely to convey information to other robots on the team. • Communication affects the performance of MRS in a variety of tasks • Even a small amount of information can lead to great benefit • The challenge is to maintain reliable communication even when connections between robots may change dynamically and unexpectedly • setting up and maintaining a distributed network

  9. Architecture and Task Planning • Research in DARS has focused on the development of architectures, task planning capabilities, and control addressing the issues of: • action selection • heterogeneity versus homogeneity of robots • achieving coherence amidst team actions • resolving conflicts, etc. • Each architecture focuses on providing a specific type of DARS capability: • fault tolerance • swarm control • role assignment, etc.

  10. Architecture and Task Planning, Localization and Mapping • Research in DARS has focused on the development of architectures and task planning capabilities, where each architecture focuses on providing a specific type of distributed capability • Initially, most of the research took an existing algorithm developed for single-robot mapping, localization, or exploration, and extended it to MRS • [Fox et al., Autonomous Robots 2000] took advantage of an MRS to improve positioning accuracy beyond that of a single robot and to develop collaborative multi-robot exploration • Only more recently have researchers developed new algorithms that are fundamentally distributed, to take full advantage of MRS

  11. RoboCup Soccer: The oldest RoboCup standard problem • Middle-size League • Building, maintaining, and programming a team of fully autonomous robots • High-speed motion (> 2 m/s) • Large field (12 m × 8 m) • Sensing the environment • Cooperation abilities

  12. RoboCup Soccer: From simple moves towards complex actions • Middle-size League: progress from 1997 to 2003 • USC (USA) vs. Osaka Univ. (Japan), Nagoya 1997 • Isfahan Univ. (Iran) vs. AIS (Germany), Padua 2003

  13. Middle-size League, RoboCup 2003: Vision and Localization • Vision is still a key research issue in MSL • All teams used color information • Half of the teams use shape detection • Even fewer perform edge detection • Auto-color calibration is a hot topic, especially to relax lighting conditions • Robot Self-Localization is mainly based on Visual Landmarks • Most teams detect corner posts • Half of the teams also detect field lines • Several teams use statistical methods

  14. Middle-size League, RoboCup 2003: Control Architectures • One half of the teams use reactive control architectures (behavior-based robotics) • One third of the teams use their own architectures, such as Dual Dynamics, two-level FSMs, fuzzy approaches, etc. • Several teams develop advanced robot skills using learning • Only a few teams extend reactive motion control with path planners, mainly based on potential-field methods or similar techniques

  15. Research on RoboCup topics @ IAS-Lab, Dept. of Information Engineering, The University of Padua • Soccer-robot design • ODVS (Omnidirectional Distributed Vision System) • Monte Carlo Localization using omni-vision • Coordinated behaviors

  16. Evolving the Artisti Veneti Team • The first platform for MSL was designed in 1998 on a Pioneer 1 base • The second and third platforms evolved from a Pioneer 1 to a Pioneer 2 base • The third platform is a Golem robot • We shifted from a two-wheeled robot with a directional camera towards omnidirectional-drive and omnidirectional-vision platforms • The fourth platform enhances the circular movement of the original goalkeeper

  17. Omnidirectional Sensor (figure): convex mirror, perspective camera, Perspex cylinder (support)

  18. How to design a mirror: mirror profile construction (figure relating image radii d1 ... dMax, through the pin-hole and the mirror vertex, to world distances DMin ... DMax). Made by F. Nori at IAS-Lab. A numerical sketch of this construction follows.
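The slide only shows the geometric construction; the sketch below is my own reconstruction of the usual numerical procedure, with every parameter value purely illustrative. At each surface point the mirror normal must bisect the camera ray and the ray towards the desired floor point, which fixes the local slope; integrating that slope outward yields the profile.

```python
import numpy as np

def design_mirror_profile(f=0.012, h=0.6, y0=0.10,
                          d_min=0.002, d_max=0.006,
                          D_min=0.3, D_max=4.0, steps=500):
    """Numerically construct an axially symmetric mirror profile y(x).

    Goal: a pixel at image radius d (between d_min and d_max) must see the
    floor at horizontal distance D(d) from the sensor axis.
    f  : focal length of the perspective camera [m]
    h  : height of the pin-hole above the floor [m]
    y0 : height of the mirror vertex above the pin-hole [m]
    Returns arrays (xs, ys) sampling the profile (pin-hole at the origin).
    """
    def D_of_d(d):                          # desired image-to-world mapping (here simply linear)
        t = (d - d_min) / (d_max - d_min)
        return D_min + t * (D_max - D_min)

    x = d_min * y0 / f                      # mirror radius first seen at image radius d_min
    x_end = d_max * y0 / f                  # approximate outer radius (y varies slowly)
    dx = (x_end - x) / steps
    xs, ys = [x], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        d = f * x / y                       # image radius of the current mirror point
        P = np.array([x, y])
        v_in = P / np.linalg.norm(P)        # ray: pin-hole -> mirror point
        W = np.array([D_of_d(d), -h])       # desired floor point for this pixel
        v_out = (W - P) / np.linalg.norm(W - P)
        n = v_in - v_out                    # reflection law: the normal bisects the two rays
        slope = -n[0] / n[1]                # the tangent is perpendicular to the normal
        xs.append(x + dx)
        ys.append(y + slope * dx)
    return np.array(xs), np.array(ys)
```

Choosing a different image-to-world mapping in `D_of_d` (for instance one band for measurement, one for markers, one for proximity) is what produces the multi-part mirrors described in the next slide.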

  19. Mirror Profile: the task determines the mirror profile • The mirror consists of three parts: a measurement mirror, a marker mirror, and a proximity mirror (figure: our robot mirrors)

  20. A mirror designed for AIS – Fraunhofer Institut (Germany) • Three-part mirror • Tailored to their mobile robot • Satisfying the customer's requirements

  21. In the case of Soccer Robots: requirements and profile • For the Goalie: locate the ball, identify the markers, see the defended goal • For the Attacker: locate the ball, identify the markers, see both goals, use a lighter mirror

  22. Heterogeneous Robots • Characteristics: chassis shaped for omnidirectional vision; mirror profile designed for the robot's task (figure: mirror and camera)

  23. Heterogeneous Vision Systems (figure): peripheral vision and foveal vision

  24. Heterogeneous Vision Systems (figure): the OVA's view and the PVA's view

  25. Features and Events for Omnivision • Events: • A new edge • A disappearing edge • Two edges 180° apart • Two pairs of edges 180° apart • Features: • Vertical edges

  26. Omnidirectional Vision and Mapping (figure: places P1-P5) • It simplifies data interpretation: • Discriminating between "turns" and "travels" • Simplifying "exploring around the block"

  27. Experimental Results • Correct tracking of edges • Recognition of actions • Calculation of the turn angle (figure: the path segmentation)

  28. Single Robot Mapping Strategy • Use an omnidirectional vision sensor • Detect topologically meaningful features in the environment • Use Kuipers' Spatial Semantic Hierarchy (SSH) • Build a topological map • Use the map to explore the environment • E. Menegatti, E. Pagello, M. Wright: Using Omnidirectional Vision within the Spatial Semantic Hierarchy. Proc. of IEEE ICRA 2002, Washington, May 2002

  29. Multi-robot mapping strategy • Every robot builds its own local map • When two robots can see each other, they share their local maps by matching their current views: • Identifying the objects seen by both robots • Estimating their relative distance and orientation • If the match is successful, they transmit their own local maps to the teammate • Each robot then connects the received local map to its own local map (a minimal sketch of this merging step follows)
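A minimal sketch of the map-merging step, under simplifying assumptions (a perfect mutual sighting, landmark maps as lists of 2D points, no noise handling); the function names and the threshold are my own, not the authors' implementation.

```python
import numpy as np

def relative_transform(range_ab, bearing_ab, bearing_ba):
    """Rigid transform (R, t) mapping robot B's frame into robot A's frame,
    from a mutual sighting: A sees B at (range_ab, bearing_ab) in A's frame,
    and B sees A at bearing_ba in B's frame."""
    theta = bearing_ab - bearing_ba + np.pi          # relative heading of B in A's frame
    t = range_ab * np.array([np.cos(bearing_ab), np.sin(bearing_ab)])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R, t

def merge_maps(map_a, map_b, R, t, min_dist=0.3):
    """Append B's landmarks (2D points) to A's map, expressed in A's frame,
    skipping landmarks that coincide with ones A already has."""
    merged = [np.asarray(p) for p in map_a]
    for p in map_b:
        p_in_a = R @ np.asarray(p) + t               # bring B's landmark into A's frame
        if all(np.linalg.norm(p_in_a - q) > min_dist for q in merged):
            merged.append(p_in_a)
    return merged
```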

  30. Some hints • Every robot carries out an independent exploration by using a "misanthropic robot" strategy, i.e. • Follow a direction of exploration that increases the distance from the visible teammates • Use the redundancy of observers and observations to improve the map • Exploit the heterogeneity of the robots more deeply in tasks too expensive (or not achievable) for homogeneous robots • Use the maps of robots not previously met to navigate: the bridge is the common starting location.

  31. ODVS for Navigation • We realised a network of smart, uncalibrated sensors able to learn how to navigate a blind service robot in an office-like environment • The sensors learn by observing the robot's motion. The first stage is supervised; then the knowledge is propagated autonomously, exploiting the overlapping fields of view of the sensors (figure: vision agents VA1 and VA2) • E. Menegatti, E. Pagello, T. Minato, T. Nakamura, H. Ishiguro: "Toward Knowledge Propagation in an Omnidirectional Distributed Vision System". Proc. of the 1st Int. Workshop on Advances in Service Robotics (ASER'03), Bardolino (Italy), March 2003

  32. Implicit Communication • VA1 learns its own mapping • VA1 moves the robot into the field of view of VA2 • VA2 observes the robot • VA2 receives from VA1 the motor commands sent to the robot • VA2 trains its own neural nets to build its own mapping (figure: VA1, VA2; a sketch of this knowledge-propagation loop follows)
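A compact sketch of the knowledge-propagation loop. The original system trains neural networks on image features; to keep the example short I substitute a linear least-squares fit, and every class, function, and parameter name here is an assumption rather than the authors' API.

```python
import numpy as np

class VisionAgent:
    """A camera node that learns a mapping from its own image features of the
    robot to the motor commands that drive the robot."""

    def __init__(self):
        self.W = None                                             # learned feature -> command map

    def train(self, features, commands):
        """Fit the mapping from features (N, F) to motor commands (N, C).
        (The original work uses neural nets; a linear fit stands in here.)"""
        X = np.hstack([features, np.ones((len(features), 1))])    # add a bias column
        self.W, *_ = np.linalg.lstsq(X, commands, rcond=None)

    def command(self, feature):
        """Motor command proposed by this agent for one observation."""
        return np.append(feature, 1.0) @ self.W

def propagate_knowledge(va1, va2, va1_features, va2_features):
    """While VA1 (already trained) drives the robot across VA2's field of view,
    VA2 pairs its OWN observations with the commands broadcast by VA1 and
    trains on those pairs -- the supervision is implicit, through the world."""
    commands = np.array([va1.command(f) for f in va1_features])   # what VA1 sent to the robot
    va2.train(np.array(va2_features), commands)                   # VA2 builds its own mapping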

  33. Why Monte-Carlo Localization • Monte Carlo Localisation (MCL) is a very successful approach • Applying MCL to omnidirectional vision used as a range finder • An experimentally generated sensor model • The fusion of sensor data for pose likelihood calculation • A global localization experiment in a RoboCup environment • Robustness to occlusion • An application to a non-RoboCup environment • E. Menegatti, A. Pretto, E. Pagello: A New Omnidirectional Vision Sensor for Monte-Carlo Localization. Proc. of the 8th RoboCup Int. Symposium, Lisbon (Portugal), July 2004

  34. MCL (Monte Carlo Localisation) in one page • MCL is a probabilistic technique to estimate the robot's position from odometric and sensor data • We represent the probability density over robot positions (the belief) by a set of weighted samples • The samples are localisation hypotheses • When the robot moves, every time a new image is processed, the samples are moved in accordance with the motion model • Each sample is assigned a weight proportional to the probability that the robot occupies that position • When the robot acquires new sensor data, the sample weights are updated according to the sensor model • At every step, a resampling phase eliminates the less probable positions (see the sketch below)
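A minimal sketch of one MCL update, assuming generic `motion_model` and `sensor_model` callbacks (both names and the noise parameters are placeholders, not the team's actual implementation).

```python
import numpy as np

def mcl_step(particles, weights, odometry, observation,
             motion_model, sensor_model, motion_noise=(0.02, 0.02, 0.05)):
    """One Monte Carlo Localisation update.

    particles : (N, 3) array of pose hypotheses (x, y, theta)
    weights   : (N,) importance weights
    odometry  : measured robot displacement since the last update
    motion_model(pose, odometry, noise) -> new pose sample
    sensor_model(observation, pose)     -> p(observation | pose)
    """
    n = len(particles)
    # 1. Prediction: move every sample according to the motion model plus noise
    particles = np.array([motion_model(p, odometry, motion_noise) for p in particles])
    # 2. Correction: reweight the samples by the sensor model
    weights = weights * np.array([sensor_model(observation, p) for p in particles])
    weights = weights / weights.sum()
    # 3. Resampling: draw N samples proportionally to their weights
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```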

  35. Our approach to MCL • Starting from the work of [Kröse, IVC 2001] and [Burgard, ICRA 2002], we realised an omnidirectional image-based Monte Carlo Localisation system for a large office environment [Menegatti, RAS 2004] • We decided to port a similar approach to RoboCup, but image-based localisation is not suited because of: • (i) many occlusions • (ii) a highly dynamic environment • (iii) the high computational cost of processing the whole image • Previous works in RoboCup implemented MCL using complex methods for landmark or feature detection, and needed to cope with dynamic occlusions [Utz, RoboCup-IV 2001], [Enderle, IAS 2000] • We fell back on a range-scanner-like approach [Fox, JAIR 1999], [Thrun, AI 2000]

  36. Our omnidirectional enhanced range finder • We detect the colour transitions of interest: • Green-White, Green-Yellow, Green-Blue • We detect occlusion: • Green-Black

  37. Probability distribution of the robot's pose • The scan of every colour transition of interest (here Green-White) gives a probability distribution over the whole field • Black dots = high probability, white dots = low probability • Note the symmetry in the environment

  38. Sensor Model.1 - Calculating p(o|l) • p(o|l) is the probability of obtaining the scan o at location l • oi is the measurement along the single ray i of the scan; assuming the rays are independent, p(o|l) is the product of p(oi|l) for i = 1...60 • Omni-Scan: • One scan per colour transition of interest • Every scan has 60 rays (one every 6°) • Every ray has one receptor every 4 cm, from 10 cm to 4 metres • Once a transition is found, the ray is not searched any further (see the scanning sketch below)
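A sketch of how such a scan could be built from an omnidirectional image. The radial sampling parameters come from the slide; the linear pixel-to-centimetre conversion and the `transition_test` callback are simplifying assumptions (the real sensor obtains metric distances through the mirror design).

```python
import numpy as np

def omni_scan(image, center, transition_test, n_rays=60,
              step_cm=4, r_min_cm=10, r_max_cm=400, cm_per_pixel=1.0):
    """Scan an omnidirectional image with radial rays; return, for every ray,
    the distance (in cm) of the first colour transition of interest, or None.

    transition_test(pixel_a, pixel_b) -> True when two consecutive receptors
    show the transition we are looking for (e.g. green -> white).
    """
    cx, cy = center
    distances = []
    for k in range(n_rays):                                   # one ray every 360/n_rays degrees
        angle = 2 * np.pi * k / n_rays
        found, prev = None, None
        for r in range(r_min_cm, r_max_cm + 1, step_cm):      # one receptor every step_cm
            px = int(cx + (r / cm_per_pixel) * np.cos(angle))
            py = int(cy + (r / cm_per_pixel) * np.sin(angle))
            if not (0 <= px < image.shape[1] and 0 <= py < image.shape[0]):
                break                                         # ray left the image
            cur = image[py, px]
            if prev is not None and transition_test(prev, cur):
                found = r                                     # stop at the first transition
                break
            prev = cur
        distances.append(found)
    return distances
```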

  39. Expected and Real Scans

  40. Sensor Model.2 – Estimating p(oi|l) • Taking 2000 images at different positions in the field • For every ray of the 2000 scans: • Computing the actual distance of the colour transition (here Green-White) • Estimating the distance of the colour transition with the vision software • Running Expectation Maximisation (EM) to fit the experimental data, separately for every colour transition (figure: expected distance)

  41. Sensor Model.3 – Results • The resulting probability density, calculated for every colour transition, is the sum of three components: • an Erlang distribution (accounting for image noise and imperfect colour quantization) • a Gaussian distribution centred around the expected distance • a discrete density (accounting for missing the transition) (figure: Erlang, Gaussian, and discrete components; a sketch of the mixture follows)
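A sketch of such a three-component mixture for one ray. The mixture weights, Erlang shape and scale, and Gaussian spread below are illustrative placeholders; in the real system they were fitted with EM from the 2000 labelled images, separately per colour transition.

```python
from scipy.stats import erlang, norm

def p_oi_given_l(measured, expected, w=(0.2, 0.7, 0.1),
                 erlang_shape=2, erlang_scale=0.3, sigma=0.15):
    """Mixture sensor model for a single ray: Erlang + Gaussian + discrete.

    measured : distance returned by the vision software for this ray, or None
    expected : distance of the transition predicted from the map at pose l
    w        : mixture weights (Erlang, Gaussian, discrete)
    """
    if measured is None:                                        # ray ended with no transition
        return w[2]                                             # discrete "transition missed" mass
    p_noise = erlang.pdf(measured, a=erlang_shape, scale=erlang_scale)  # early spurious transitions
    p_hit   = norm.pdf(measured, loc=expected, scale=sigma)             # correct detection
    return w[0] * p_noise + w[1] * p_hit
```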

  42. Combining the three probability distributions (figure): • Probability distribution for the Green-White ToI • Probability distribution for the Green-Blue ToI • Probability distribution for the Green-Yellow ToI • Resulting probability distribution for the robot's pose
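In compact form, and under the same independence assumption already used for the single rays, the combination amounts to multiplying the per-transition scan likelihoods (notation assumed here):

```latex
p(o \mid l) \;=\; \prod_{c \,\in\, \{\text{G-W},\,\text{G-B},\,\text{G-Y}\}} \;\; \prod_{i=1}^{60} p\!\left(o_i^{c} \mid l\right)
```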

  43. Global Localisation (figure: the sample set at steps 0, 4, 6, and 18)

  44. Sensor Occlusion.1

  45. Sensor Occlusion.2 • Our system is able to recognise occlusion by other robots as a Green-Black ToI along a ray • These rays are labelled as FAKE_RAY and discarded from the calculation of p(o|l) • We call this process ray discrimination • Our system scans with fewer rays (so less information), but it keeps the usable information and avoids expensive algorithms such as distance filters (a minimal sketch follows).
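A minimal sketch of ray discrimination when combining per-ray likelihoods; the FAKE_RAY label comes from the slide, while the function signature and data layout are my own assumptions.

```python
def ray_likelihood_with_discrimination(scan, expected, sensor_model):
    """Combine per-ray likelihoods while discarding occluded rays.

    scan     : list of per-ray results; each entry is a distance, None (no
               transition found), or the string 'FAKE_RAY' when the ray hit
               a green-black transition (another robot).
    expected : per-ray distances predicted from the map for the tested pose.
    sensor_model(measured, expected) -> p(o_i | l) for one ray.
    """
    p = 1.0
    for measured, exp in zip(scan, expected):
        if measured == 'FAKE_RAY':          # occluded ray: drop it, keep the remaining information
            continue
        p *= sensor_model(measured, exp)
    return p
```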

  46. Experimentation in a University Building • We need uniformly coloured surfaces, clear colour gaps, and uniform light • Red floor, white walls, and gray furniture • New colour transitions: • Red-White, Red-Gray • The omnidirectional image is scanned with 60 rays, one every 6 degrees

  47. Ideal scans and probabilities in real environments • The ideal scan differs from the real one because of: • The robot's shadow • Mirror deformation • Errors in colour detection near the door • In the probability map of the environment, the zones where the probability of the observation is higher appear darker: • All cornered zones are darker • The samples closer to the real pose have a higher weight

  48. Extending the limits of the sensorial horizon of the single agent • The first step: using omnidirectional vision (RoboCup is an example of this) • But RoboCup proved that omnidirectional vision is not enough for highly dynamic environments: • it cannot see occluded objects • it cannot see very distant objects • To realise a Distributed Vision System we need to share information between the agents of a team

  49. Omnidirectional Distributed Vision System (ODVS) • Tracking multiple moving objects in highly dynamic environments by sharing the information gathered by every single robot • Requirements: • The robots' only sensor is omnidirectional vision • No external computer is used • Every robot shares its measurements • Every robot fuses all the measurements received from teammates • Measurements can refer to different instants in time (a fusion sketch follows)
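A sketch of one simple way to fuse teammates' measurements that may refer to different instants in time, by down-weighting older and less certain observations; the weighting scheme and all parameters here are assumptions, not the team's published fusion method.

```python
import numpy as np

def fuse_observations(observations, now, max_age=0.5, decay=2.0):
    """Fuse teammates' observations of one object into a single estimate.

    observations : list of (position (2,), covariance_scale, timestamp) tuples,
                   each expressed in the common field frame.
    Older measurements (up to max_age seconds) are down-weighted exponentially,
    since teammates' measurements may refer to different instants in time.
    Returns the weighted-average position, or None if nothing is recent enough.
    """
    weights, points = [], []
    for pos, cov_scale, t in observations:
        age = now - t
        if age > max_age:
            continue                                        # too old to be useful
        w = np.exp(-decay * age) / max(cov_scale, 1e-6)     # fresher and more certain = heavier
        weights.append(w)
        points.append(np.asarray(pos))
    if not weights:
        return None
    weights = np.array(weights)
    return (weights[:, None] * np.array(points)).sum(axis=0) / weights.sum()
```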

  50. Enhancing the ODVS by fusing multiple observations • Fusing Multiple Observations from Single Measurements • E. Menegatti, A. Scarpa, D. Massarin, E. Ros, E. Pagello: Omnidirectional Distributed Vision System for a Team of Heterogeneous Robots. Proc. of the IEEE Workshop on Omnidirectional Vision (Omnivis'03), Prague, June 2003
