A World Model for Multi-Robot Teams With Communication
Maayan Roth, Douglas Vail, Manuela Veloso
Introduction
A multi-robot team can share information; the challenge is merging that information into a coherent picture. World models:
• Individual
• Shared
Robotic Soccer
A dynamic environment. Before 2002:
• Teams built world models without sharing information.
• No communication hardware was available.
Early attempts:
• The Agilo RoboCuppers (2000) used a Kalman filter to fuse information: a global world model was treated as local by each robot.
• CS Freiburg used an off-board server to update teammates.
Individual World Model
Data structure:
• wm_position
• wm_heading
• wm_ball
• wm_teammate *
• wm_opponent *
Elements are updated with information from:
• Vision
• Communication
• Vision + Communication
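A minimal C++ sketch of what this data structure might look like. The field names follow the slide; the concrete types (2-D estimates with confidence and timestamp, a fixed team size) are assumptions for illustration, not the paper's actual representation:

```cpp
#include <array>

// Hypothetical estimate type; the paper's actual representation may differ.
struct Estimate {
    double x = 0.0, y = 0.0;   // position on the field (assumed units: meters)
    double confidence = 0.0;   // how much the robot trusts this estimate
    double timestamp = 0.0;    // when the estimate was last updated (seconds)
};

constexpr int kTeamSize = 4;   // assumed team size

// Individual world model, one per robot (field names from the slide).
struct IndividualWorldModel {
    Estimate wm_position;                         // own pose (from localization)
    double   wm_heading = 0.0;                    // own heading (from localization)
    Estimate wm_ball;                             // ball (vision + communication)
    std::array<Estimate, kTeamSize> wm_teammate;  // teammates (communication only)
    std::array<Estimate, kTeamSize> wm_opponent;  // opponents (vision only)
};
```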
Individual World Model: Update
• Robot's pose (wm_position + wm_heading) comes from the localization module, which takes its input from vision.
• wm_opponent: vision only.
• wm_teammate: shared information (communication) only.
• wm_ball: position from both vision and shared information.
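A sketch of this per-element update policy, assuming the IndividualWorldModel type from the previous slide. The merge rule (own vision always overrides; a shared estimate is accepted only when it is more confident) is an illustrative assumption, not the paper's exact rule:

```cpp
// Update wm_ball from a fresh vision observation: own vision is the
// freshest local source, so it overrides the current estimate.
void updateBallFromVision(IndividualWorldModel& wm, const Estimate& seen) {
    wm.wm_ball = seen;
}

// Update wm_ball from a teammate's shared estimate, accepting it only
// if it is more trustworthy than what we already hold (assumed policy).
void updateBallFromTeammate(IndividualWorldModel& wm, const Estimate& shared) {
    if (shared.confidence > wm.wm_ball.confidence) {
        wm.wm_ball = shared;
    }
}
```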
Update from Shared Information
Problem:
• Updates arrive only every 0.5 s.
• Latency: 5 s.
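A common way to cope with slow, stale shared data is to discount an estimate's confidence by its age before merging. This is a generic sketch of that idea under the types assumed earlier, not necessarily the mechanism the authors used; the half-life parameter is hypothetical:

```cpp
#include <cmath>

// Exponentially decay a confidence value with the age of the estimate,
// so that stale shared information loses out to fresh local vision.
// half_life (seconds) is a tuning parameter assumed for illustration.
double agedConfidence(const Estimate& e, double now, double half_life = 1.0) {
    double age = now - e.timestamp;          // seconds since the estimate was made
    return e.confidence * std::exp2(-age / half_life);
}
```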
II. Shared World Model
Data structure (vectors):
• swm_position: teammates' positions.
• swm_ball: teammates' estimates of the ball position.
• swm_goalie: flag indicating whether the teammate is the goalkeeper.
• swm_sawball: flag indicating whether the teammate saw the ball in the last time step.
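A corresponding C++ sketch of the shared world model, reusing the Estimate type and team size assumed earlier; the one-slot-per-teammate layout follows the slide:

```cpp
// Shared world model: one slot per teammate, filled from communication.
struct SharedWorldModel {
    std::array<Estimate, kTeamSize> swm_position;  // teammates' reported positions
    std::array<Estimate, kTeamSize> swm_ball;      // teammates' ball estimates
    std::array<bool, kTeamSize>     swm_goalie{};  // is this teammate the goalkeeper?
    std::array<bool, kTeamSize>     swm_sawball{}; // did the teammate see the ball last step?
};
```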
III. Experiments
SEARCH_PIN: the behavior executed when the ball is "lost".
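A sketch of how the lost-ball case could consult the shared model before falling back to a physical search, using the types assumed above. The function names and control flow are illustrative only, not the paper's implementation:

```cpp
// Hypothetical motion primitives (stubs for illustration).
void turnToward(const Estimate& target) { /* aim toward the target estimate */ }
void searchPin() { /* execute the SEARCH_PIN ball-search behavior */ }

// Illustrative only: when the ball is lost locally, first try a teammate's
// estimate from the shared model; otherwise fall back to SEARCH_PIN.
void onBallLost(const IndividualWorldModel& wm, const SharedWorldModel& swm) {
    for (int i = 0; i < kTeamSize; ++i) {
        if (swm.swm_sawball[i]) {      // a teammate saw the ball last time step
            turnToward(swm.swm_ball[i]);
            return;
        }
    }
    searchPin();                       // nobody saw it: search physically
}
```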
Questions
• Has a similar system been applied to the NAO robots as well?
• Has the communication hardware improved in newer robot models? <> Nao specs.
• When a robot receives information from other agents and wants to update its world model, how can it be sure that the received information is more accurate than its own? <> By relying on an uncertainty threshold.
• Why is the opponent position vector determined entirely from vision, while the teammate position vector is determined entirely from information communicated by the teammates? <> A teammate's position can be extracted from vision, but that estimate is noisy; the teammate's own localization report is more reliable. Why vision for opponents, then? Opponents do not communicate, and the resulting errors are tolerable since opponent positions matter less than teammate positions.
• How does a robot distinguish its teammates using the vision module (if needed)?