Scaling Human Robot Teams Prasanna Velagapudi Paul Scerri Katia Sycara Mike Lewis Robotics Institute Carnegie Mellon University Pittsburgh, PA
Large Multiagent Teams • 1000s of robots, agents, and people • Must collaborate to complete complex tasks • Applications: Search and Rescue, Disaster Response, UAV Surveillance
Large Multiagent Teams • Network Constraints
Large Multiagent Teams • Human Information Needs
Network Constraints • Networks affect human interface design • Limited bandwidth • Significant latency • Lossy transmission • Partial/transient connectivity
Network Constraints • How can we design robust tasks? • Feasible under network constraints • Tolerant of latency • Within bandwidth constraints • Robust to changes in information
Network Constraints • Humans are a limited resource • Centralized, expensive • Limited attention and workload • Penalties for context switching • Necessary for certain tasks • Complex visual perception • Meta-knowledge
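The "limited resource" point is often quantified in the human-robot interaction literature with the fan-out model of Crandall and Goodrich; a minimal sketch (the formula comes from that literature, not from this talk, and the timings below are hypothetical):

```python
def fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
    """Estimate how many robots one operator can service: a robot runs
    unattended for neglect_time_s before needing interaction_time_s of
    operator attention, so the operator can cycle through NT/IT + 1 robots."""
    return neglect_time_s / interaction_time_s + 1.0

# Hypothetical timings: 60 s of autonomy per 20 s of attention
# saturates a single operator at about 4 robots.
print(fan_out(60, 20))  # 4.0
```

Even generous neglect times leave one operator orders of magnitude short of a 1000-robot team, which is why operator attention has to be treated as a scarce, centrally scheduled resource.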
Network Constraints • How do we maximize the effectiveness of humans in these systems with respect to network constraints?
MrCS (Multi-robot Control System) • Status Window • Map Overview • Video/Image Viewer • Waypoint Navigation • Teleoperation
Victims Found in USAR Task (chart: number of victims found) [Velagapudi et al, IROS ’08]
Task decomposition: Search vs. Navigation [Velagapudi et al, IROS ’08]
Network Constraints • How we divide tasks between agents may affect performance • What is the best way to factor tasks? • Where should we focus autonomy?
Large Multiagent Teams • Human Information Needs
Human Information Needs • Human operators need information to make good decisions • In small teams, send everyone everything • This doesn’t work in large systems
Human Information Needs • Sensor raw data rates • Proprioception: < 1 kbps • RADAR/LIDAR: 100 kbps – 20 Mbps • Video: 300 kbps – 80 Mbps
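Plugging even the lowest of these rates into a quick aggregate shows the scale problem immediately; a back-of-the-envelope sketch (the team size and sensor mix are hypothetical):

```python
# Lowest per-sensor rates from the slide above, in kbps.
SENSOR_RATES_KBPS = {"proprioception": 1, "lidar": 100, "video": 300}

def team_datarate_mbps(num_robots: int, sensors: list[str]) -> float:
    """Aggregate raw sensor data rate for a homogeneous team, in Mbps."""
    per_robot_kbps = sum(SENSOR_RATES_KBPS[s] for s in sensors)
    return num_robots * per_robot_kbps / 1000.0

# 1000 robots streaming even minimum-rate sensors need ~401 Mbps upstream.
print(team_datarate_mbps(1000, ["proprioception", "lidar", "video"]))  # 401.0
```

At the high end of the video range the same team would need tens of gigabits per second, far beyond any field network, which motivates the selective forwarding and fusion discussed next.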
Human Information Needs • Can’t transmit every bit of information • Selectively forward data • How do agents decide which pieces of information are important? • Fuse the data • What information are we losing when we fuse data?
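One simple way to forward data selectively is to rank items by utility per kilobyte and fill the available bandwidth budget greedily; a hypothetical policy sketch (the utility scores and the greedy rule are illustrative assumptions, not the talk's algorithm):

```python
def select_messages(messages, budget_kb):
    """Greedily pick messages with the best utility-per-kilobyte until the
    bandwidth budget is spent. messages: list of (utility, size_kb, payload)."""
    ranked = sorted(messages, key=lambda m: m[0] / m[1], reverse=True)
    chosen, used = [], 0.0
    for utility, size_kb, payload in ranked:
        if used + size_kb <= budget_kb:
            chosen.append(payload)
            used += size_kb
    return chosen

msgs = [(10.0, 5.0, "map_update"),
        (2.0, 50.0, "raw_video"),
        (8.0, 1.0, "victim_alert")]
print(select_messages(msgs, budget_kb=10.0))  # ['victim_alert', 'map_update']
```

The hard part the slide points at is not the selection rule but assigning the utility numbers: fusion discards exactly the detail whose utility the agents misjudged.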
Asynchronous Imagery • Inspired by planetary robotic solutions • Limited bandwidth • High latency • Multiple photographs from single location • Maximizes coverage • Can be mapped to virtual pan-tilt-zoom camera
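The virtual pan-tilt-zoom camera can be sketched as a lookup into the stored panorama; this assumes an equirectangular projection (the actual camera model used in the system may differ):

```python
def pan_tilt_to_pixel(pan_deg, tilt_deg, width, height):
    """Map virtual camera pan/tilt (degrees) to a pixel in an
    equirectangular panorama. pan in [-180, 180), tilt in [-90, 90]."""
    x = int((pan_deg + 180.0) / 360.0 * width) % width
    y = min(int((90.0 - tilt_deg) / 180.0 * height), height - 1)
    return x, y

# Looking straight ahead lands in the center of a 3600x1800 panorama.
print(pan_tilt_to_pixel(0, 0, 3600, 1800))  # (1800, 900)
```

Because the whole sphere of imagery is already on the operator's machine, "panning" is just re-indexing local pixels, so it costs no bandwidth and suffers no network latency.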
Asynchronous Imagery • Streaming Mode: streaming live video • Panorama Mode: panoramas stored for later viewing [Velagapudi et al, ACHI ’08]
Victims Found (chart): average number of victims found, Panorama vs. Streaming, at accuracy thresholds of 0.75 m, 1 m, 1.5 m, and 2 m [Velagapudi et al, ACHI ’08]
Environmental Factors • Colocated operators get extra information • Exocentric view of other agents • Ecological cues • Positional and scale cues
Conclusion • Need to consider the practicalities of large network systems when designing for humans. • Need to consider human needs when designing algorithms for large network systems.
Cognitive modeling • ACT-R models of user data • Determine: • Which pieces of information are users actually using? • Where are the bottlenecks of the system?
Utility-based information sharing • It is hard to describe user information needs • Agents often don’t know how useful information will be • Many effective algorithms use information gain or probability mass • Can we compute utility for information used by people?
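One concrete form of an information-gain utility for a human-relevant belief (e.g. "is there a victim in this cell?") scores a reading by its expected entropy reduction; the Bernoulli belief and sensor error rates below are illustrative assumptions, not the talk's actual metric:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, tpr, fpr):
    """Expected entropy reduction from one binary sensor reading with
    true-positive rate tpr and false-positive rate fpr (via Bayes' rule)."""
    p_pos = prior * tpr + (1 - prior) * fpr
    p_neg = 1.0 - p_pos
    post_pos = prior * tpr / p_pos if p_pos > 0 else 0.0
    post_neg = prior * (1 - tpr) / p_neg if p_neg > 0 else 0.0
    return entropy(prior) - (p_pos * entropy(post_pos) + p_neg * entropy(post_neg))

# A 90%-TPR / 10%-FPR victim detector on a maximally uncertain cell:
print(round(expected_info_gain(0.5, 0.9, 0.1), 3))  # 0.531
```

The open question on the slide is whether a measure like this, which works well for agent-to-agent sharing, actually tracks what a human operator finds useful.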
Asynchronous Data • One way to address the latency of networks is to transition to asynchronous methods of perception and control. • Asynchronous imagery • Decouples users from time constraints in control
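The decoupling can be illustrated with a queue between operator and robot; a minimal sketch of the asynchronous-control idea (this is an illustration, not the MrCS implementation):

```python
from collections import deque

class AsyncWaypointQueue:
    """Decouples operator input from robot execution. The operator enqueues
    waypoints at any time without waiting on a round trip; the robot pops
    them whenever its (possibly high-latency) link delivers the queue."""
    def __init__(self):
        self._pending = deque()

    def enqueue(self, waypoint):
        # Operator side: returns immediately, no acknowledgment wait.
        self._pending.append(waypoint)

    def next_waypoint(self):
        # Robot side: consumed when the network finally delivers.
        return self._pending.popleft() if self._pending else None
```

Because the operator never blocks on the link, latency degrades only how fresh the robot's plan is, not how fast the human can work.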
Tools • USARSim/MrCS • VBS2 • Procerus UAVs • LANdroids • ACT-R
USARSim • Based on UnrealEngine2 • High-fidelity physics • Realistic rendering • Camera • Laser scanner (LIDAR) [http://www.sourceforge.net/projects/usarsim]
MrCS (Multi-robot Control System) • Status Window • Map Overview • Video/Image Viewer • Waypoint Navigation • Teleoperation
VBS2 • Based on Armed Assault and Operation Flashpoint • Large scale agent simulation • “Realistic” rendering • Cameras • Unit movements [http://www.vbs2.com]
Procerus UAVs • Unicorn UAV • Developed at BYU • Foam EPP flying wing • Fixed and gimbaled cameras • Integrated with Machinetta agent middleware for full autonomy
LANdroids Prototype • Based on iRobot Create platform • Integrated 5 GHz 802.11a-based MANET • Designed for warfighter networking • Video capable
ACT-R • Cognitive modeling framework • Able to create generative models for testing