
Introduction to Robots and Multi-Robot Systems Agents in Physical and Virtual Environments



Presentation Transcript


  1. Lecture 7: Robot Teamwork Gal A. Kaminka galk@cs.biu.ac.il Introduction to Robots and Multi-Robot Systems: Agents in Physical and Virtual Environments

  2. This week, on Robots…. We’re starting to look at multiple robots Relevant Fields: Multi-agent systems, Multi-robot systems Focus: Teams, small groups Common goals Complex coordination Complex tasks

  3. Developing Teamwork • Agent teams are everywhere • A (Biased?) historical perspective • Sense-Think-Act: What's in Think? • How Neaties view Teamwork: Joint-Intentions • How Scruffies view Teamwork: ALLIANCE, STEAM • Theory-inspired Teamwork Engines

  4. Agent Teams Are Everywhere:Teamwork is Important • Nature • Formations, flocking, pack hunting, software development • Robotic nature imitations, explorations, soccer • Internet, Intranets • Routing, distributed applications, groupware • Workflow, cooperating information agents • Virtual environments for training, simulations • Human-computer interactions

  5. The Sense-Think-Act Cycle:What's in Think (for scruffies) in late 80's? • No need to Think: If sensors read X, then do Y • Reactive Camp (Brooks 1986, Schoppers 1987) • Limited thinking: Behavior-based control • Behaviors may have state, memory, procedures • Arkin, Firby (1986), Maes, ... • Deep thinking: integrated planning, monitoring • e.g., IPEM (1988) • Hybrid architectures (e.g., Gat 1992)

  6. The Sense-Think-Act Cycle:What's in Think (for neaties) in late 80's? • "The Old View" • Plans as sequences of actions for execution • Plans as mental attitudes (Pollack 1992) • Plans as recipes: Some get executed, some just known • BDI: Belief-Desire-Intention (approximately): • Belief: What the agent knows • Desire: What the agents ideally wants to see happening • Intention: What the agents actually acts towards • Commitments
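The BDI cycle above can be sketched as a minimal agent loop. This is an illustrative sketch only; the class and method names (`BDIAgent`, `deliberate`, and so on) are made up here and do not come from any particular BDI system.

```python
# Minimal BDI-style agent loop; all names are illustrative, not from
# any specific BDI implementation.

class BDIAgent:
    def __init__(self):
        self.beliefs = set()    # what the agent knows
        self.desires = set()    # (goal, precondition) pairs it ideally wants
        self.intentions = []    # desires it has committed to act on

    def perceive(self, percepts):
        # Belief revision: fold new percepts into the belief base.
        self.beliefs |= set(percepts)

    def deliberate(self):
        # Commit to any desire whose precondition is believed to hold.
        for goal, precondition in list(self.desires):
            if precondition in self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)

    def act(self):
        # Act toward the first intention; drop it once believed achieved.
        if self.intentions and self.intentions[0] in self.beliefs:
            self.intentions.pop(0)
        return self.intentions[0] if self.intentions else None

agent = BDIAgent()
agent.desires.add(("at_home", "knows_route"))
agent.perceive(["knows_route"])
agent.deliberate()
print(agent.act())  # -> at_home
```

Note how the commitment shows up: the intention persists across cycles until the agent believes the goal is achieved, rather than being re-derived from scratch each step.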

  7. An Historical Perspective on Teamwork: From a Single Agent to Multiple Agents [Timeline diagram, '86 to '96, on a Neatness/Scruffiness axis: on the scruffy side, reactive plans and architectures, then behavior-based architectures; on the neat side, mental attitudes (Belief, Desire, Intention: BDI) and plans as attitudes, then architectures integrating planning, execution, monitoring, and re-planning.]

  8. Introduction of Multi-Agent Settings:A Change in Perspectives? • Multi-agent env. become more pervasive • Late '80s, Early '90s • Philosophical influences on NLP, challenging test-beds • Everyone responds using what they know: • Social reactive/behavior based applications • Multi-agent planning, negotiations • Social intentions, commitments, beliefs, desires We’ll examine theories and behavior-based architectures

  9. An Historical Perspective on Teamwork: From a Single Agent to Multiple Agents [Timeline diagram, '86 to '96, with "The MAS Line" around '90: on the neat side, subjective mental attitudes (BDI, plans as attitude) give way to social attitudes: communication acts, commitments, teamwork, etc.]

  10. Teamwork is (Bratman 1992): • Mutual Commitment to Joint Activity • Agreement on the joint activity • Cannot abandon activity without involving teammates • Mutual support • Must be active in helping teammate activity • Mutual Responsiveness • Take over tasks from teammates if necessary

  11. Teamwork Theories • SharedPlans (Sidner&Grosz1986, Grosz&Kraus 1996) • Teammates agree on SharedPlan • Plan it together, execute it together • Specifies conditions for assistance, monitoring • Joint Intentions Framework (Cohen&Levesque) • Teammates agree on intentions • Teammates agree on selecting/deselecting goals • i.e., goal unachievable, achieved, irrelevant

  12. What’s Teamwork? The Famous Convoy Example: • Two agents, Alice and Bob • Bob does not know how to get home • Bob knows Alice knows how to get home • Bob knows Alice lives near Bob • We have two agents with matching goals • Both want to get to (approximately) the same place • If Bob follows Alice, is that teamwork?

  13. Convoy Example Cont’d • Imagine Bob following Alice • Case 1: No teamwork • Bob follows Alice without talking to her first • Case 2: Teamwork • Bob asks Alice to lead him home and she agrees

  14. Case 1: No teamwork • What happens if Alice goes home as planned? • What happens if Bob’s car breaks down? • What happens if Alice changes her mind?

  15. Case 2: Teamwork • Cars are in a convoy (a team) • If Bob stops, Alice should stop or … • Alice should use lots of signals • Alice drives slowly, looks in mirror a lot • ….

  16. Joint Intentions Key Ideas • Mutual belief (MB) in the joint intention • Mutual belief in the goal • Joint execution until MB in goal termination • Cannot abandon teamwork upon privately believing it's over • Termination: • Goal achieved • Goal unachievable • Goal irrelevant

  17. Intuition and example Team members work towards the joint goal • If they privately believe it should be terminated • achieved/unachievable/irrelevant • Then they are responsible for making that belief mutual Consider: • If Bob believes he got home, there is no need to keep following Alice • If Alice changes her mind about where to go • If Bob’s car breaks down • …
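The commitment rule above can be sketched in code: a member who privately concludes the goal is achieved, unachievable, or irrelevant may not just quit; it must first work to make that belief mutual. The class and message-passing scheme below are illustrative assumptions, not part of the Cohen & Levesque formalism.

```python
# Sketch of the joint-intentions commitment rule: a private termination
# belief obligates the member to communicate before dropping the goal.
# All names here are illustrative.

TERMINATION_REASONS = {"achieved", "unachievable", "irrelevant"}

class TeamMember:
    def __init__(self, name, team):
        self.name = name
        self.team = team          # shared list of members
        self.committed = True
        self.inbox = []

    def private_conclusion(self, reason):
        # A private belief alone does NOT release the commitment...
        assert reason in TERMINATION_REASONS
        # ...it obligates the member to broadcast it first.
        for mate in self.team:
            if mate is not self:
                mate.inbox.append((self.name, reason))
        self.committed = False    # released only after informing all

    def process_inbox(self):
        # Receiving a termination message makes the belief mutual.
        for sender, reason in self.inbox:
            self.committed = False
        self.inbox.clear()

team = []
alice, bob = TeamMember("Alice", team), TeamMember("Bob", team)
team.extend([alice, bob])
bob.private_conclusion("achieved")     # Bob believes they are home
alice.process_inbox()
print(alice.committed, bob.committed)  # -> False False
```

In the Case 1 convoy (no teamwork), Bob would simply set `committed = False` and drive off, leaving Alice leading nobody; the broadcast step is exactly what Case 2 adds.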

  18. Additional Theoretical Thoughts • Teamwork is not coordination: • The convoy looks just like traffic when everything is OK • Chess is coordinated, but not teamwork • Tracking involves one-sided coordination, for example • Teamwork is not necessarily rational • It may not be rational to “waste” cycles on informing others • Little work addresses this problem

  19. Questions?

  20. An Historical Perspective on Teamwork: From a Single Agent to Multiple Agents [Timeline diagram, '86 to '96, with "The MAS Line": on the scruffy side, subjective reactive plans and behavior-based architectures give way to social behaviors: reacting to others, behavioral roles.]

  21. Behavior-Based Teamwork • Parker: Distributed, fault-tolerant, architecture • Closest to explicit teamwork among roboticists • Tambe: STEAM teamwork engine • Explicit teamwork, virtual robots • Mataric: • Behavior combinations create different spatial group behavior • e.g., foraging, flocking, follow-the-leader • Balch: Behavior-based formation maintenance • Kuniyoshi et al.: Observation-based cooperation • Many more, inc. explosive # of ad-hoc techniques

  22. ALLIANCE (Parker 1998) Fault-tolerant robot team control: • Robots carry out team sub-tasks • Each robot uses behavior-based control • Runs ALLIANCE processes in addition • Robots communicate as part of ALLIANCE • Heterogeneous robots (but all run ALLIANCE) • Covers many kinds of failures: • Individual action, sensing • Communications

  23. What does ALLIANCE use? • Each robot has several Behavior-Sets • Behavior-Set: a collection of behaviors for a particular task • Behaviors within a set may inhibit or activate each other • Sets are (de)selected by Motivational Behaviors: • Triggers integrate perceptions and communications • Also internal-state motivations (explained below) • Only one behavior set is active at a time • Robots communicate to each other what they are doing

  24. Motivations • How do robots select what task to do? • Task == behavior-set • This is a social choice! • All tasks need to get done • If robots fail, others should step in • Motivation: A numerical internal-state variable • Value changes based on processing of sensors, comm.

  25. An Elegant Solution • Robots keep track of their own progress • Robots communicate to each other what they are doing • Each robot knows what tasks its peers are doing Fault tolerance achieved through two motivations: • Impatience with others' performance of task • Value increases when peer not making progress • Acquiescence: Impatience with own lack of progress • Value increases when I am not making progress

  26. An Elegant Solution (Cont'd) • If impatience with task T too big • If another robot takes-over T, reset impatience • If no other robot does T, try to take-over T • If acquiescence with task T too big • Then robot abandons own execution of T Assumptions: • Robots can monitor their own actions, and those of others • Robots do not lie, and are not intentionally adversarial • Taking over roles can be done smoothly
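The impatience/acquiescence dynamics above can be sketched as numeric motivation variables that grow while progress stalls and reset when it resumes. This is a sketch in the spirit of ALLIANCE (Parker 1998); the rates, threshold, and class structure are made-up illustrative constants, not Parker's actual parameters.

```python
# Sketch of ALLIANCE-style motivation dynamics: impatience grows while
# a peer shows no progress on a task, acquiescence grows while the
# robot itself stalls. All constants are illustrative assumptions.

IMPATIENCE_RATE = 1.0
ACQUIESCENCE_RATE = 1.0
THRESHOLD = 5.0

class Robot:
    def __init__(self, name):
        self.name = name
        self.task = None
        self.impatience = {}      # task -> motivation to take it over
        self.acquiescence = 0.0   # motivation to give up own task

    def step(self, peer_progress, own_progress):
        """peer_progress: {task: making_progress?}; own_progress: bool."""
        for task, progressing in peer_progress.items():
            m = self.impatience.get(task, 0.0)
            # Impatience resets when the peer makes progress, else grows.
            self.impatience[task] = 0.0 if progressing else m + IMPATIENCE_RATE
            if self.impatience[task] > THRESHOLD and self.task is None:
                self.task = task  # take over the stalled task
        if self.task is not None:
            self.acquiescence = 0.0 if own_progress else (
                self.acquiescence + ACQUIESCENCE_RATE)
            if self.acquiescence > THRESHOLD:
                self.task = None  # acquiesce: abandon own execution

r = Robot("R1")
for _ in range(6):                        # peer stalls on "forage"
    r.step({"forage": False}, own_progress=False)
print(r.task)  # -> forage (impatience has crossed the threshold)
```

Note the fault-tolerance pattern from the slide: no central dispatcher assigns tasks; take-over and abandonment both emerge from local thresholds on these two motivations.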

  27. ALLIANCE Example • A joint paper to be written by V, B, and K • 5 sections: Intro, Background, Method, Results, Discussion • Initially, V picks Intro, B picks Background, K picks Results • All have similar thresholds, update each other via email every 15 minutes • What will happen if (different types of failures): • After much work, V's dog eats his copy of the Intro. • B finishes Background ahead of schedule • K's email server stops working • B loses outgoing email to V, but not to K?

  28. What you should know now • How single-agent people (used to) view the world • Behavior-based control vs. planning/monitoring vs. BDI • How they view the problem of constructing a team • We did not discuss planning/monitoring • Because it is an open area • Some of what the theoreticians found • Teamwork involves responsibility towards others' state • Teamwork involves more than just “strong agreement” • ALLIANCE: An elegant behavior-based approach to teams Opportunity Alert!

  29. Questions?

  30. Teamwork in Multi-Agent Systems • GRATE*: Teamwork in Industry (Jennings 1995) • Uses reliable communications • “Naïve” team formation using acquaintance models • STEAM: Teamwork in VR, Internet (Tambe 1997) • Re-planning and team repair • Selective communications

  31. An Historical Perspective on Teamwork: From a Single Agent to Multiple Agents [Timeline diagram, '86 to '96, combining both tracks: on the neat side, subjective mental attitudes (BDI, plans as attitude) and integrated planning/execution/monitoring/re-planning architectures lead to social attitudes, communication acts, commitments, and teamwork; on the scruffy side, reactive plans and behavior-based architectures lead to social behaviors, reacting to others, and social planning/reasoning about roles; STEAM and GRATE* sit at the teamwork end.]

  32. GRATE* Nick Jennings, 1995. In: Artificial Intelligence

  33. GRATE* Problem Settings • Real-world industrial applications very complex • A centralized control system is infeasible • Problem complexity is practically unmanageable • Distributed control provides solutions • Divide&Conquer: Each sub-problem is reduced complexity • A natural fit to a distributed-components problem • Allows re-use of existing components

  34. There’s always a “But…” Distributed AI is appealing, BUT…. • No principled way to build distributed systems • Unclear when to communicate, about what • Lack of coherent, predictable global behavior • Brittleness in dynamic, complex domains • Explosive number of possible interactions • Cannot predict all of them -- so system fails

  35. Brittleness: Built on real-world experience • New info is available to some, but not all, agents • Agents abandon a task while others are still working • Inter-agent communication is difficult to construct • Agents wait too long; interruptions cause failures • Agents' actions cause false readings for other agents

  36. Example: Power Transportation Networks • Three agents: • AAA and BAI perform diagnosis together • CSI performs monitoring • CSI detects problems, AAA and BAI do diagnosis • Initially: AAA and BAI have own monitoring • Maint. operations would then cause false alarms • Each had different monitoring expertise • Development of CSI alleviated this problem

  37. Example Cont’d Cooperation between CSI, AAA, and BAI was brittle • CSI detected a fault, and then ruled it out • But “forgot” to let AAA and BAI know • AAA realized it couldn't diagnose, and let BAI/CSI continue working • CSI detected more information about faults • But did not send it to BAI and AAA • Interruptions caused BAI and AAA to wait for each other • Deadlock in communications

  38. Jennings’ Analysis of the Situation • Often no explicit representation of cooperation: • Agents cooperate for individual reasons • Don’t know other agents exist, or affected • Rare explicit cooperation is in surface rules • Describe “social norms”, with no deep knowledge • e.g., “If A asked for X, and I promised to deliver, then inform A if I cannot deliver” • Like subset of compiled knowledge: Insufficient by itself • From expert systems, known to be limited and brittle

  39. Joint Responsibility Model • Built on Joint-Intentions • Adds the idea of a Joint-Recipe (a Plan) • Team-members commit to execution until: • Recipe goal achieved/unachievable, OR • Recipe step failed/undoable • Agents communicate upon plan termination • Agents can form the team dynamically (later)

  40. GRATE*: A Joint-Responsibility Implementation • Each agent runs GRATE* processes • remember ALLIANCE? • We assume communications reliable, known delays • MB approximated through simple agreement • Everyone knows that everyone knows P • Also, global time is kept synchronized

  41. GRATE*: Structure • GRATE* task-dependent knowledge • What tasks are there, how do I do them • GRATE* cooperation layer • Controls task scheduling based on coordination • Feeds on information from individual control • Sends and receives communications • Uses acquaintance models, J.R. implementation

  42. Example • Suppose AAA receives diagnosis information • Task knowledge says: Do task T1, then T2 • AAA starts working on T1, • Discovers it cannot do it • Cooperation layer jumps in, says: Inform BAI, CSI • OR, AAA starts working on T1, • Finishes it successfully, starts doing T2 • Cooperation layer jumps in, says: Inform BAI/CSI

  43. GRATE* Team Formation: Phase 1 • When an agent recognizes the need to attain G • Determine the best recipe R for achieving G • Determine appropriately “skilled” agents • Use acquaintance models • Contact these agents with a CFP • Ask for their commitment proposals • Form a set of possible agents for the team

  44. GRATE* Team Formation Phase 2 Evaluate commitment proposals: • Select minimal # of agents that can execute actions in R • Determine execution time t of each action • Get commitment proposal from each agent for t • Each agent agrees or counters • Somewhat similar to contract-net bidding, BUT: • No backtracking: counter-proposals always accepted
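The two phases above can be sketched as a single pass over the recipe: pick skilled agents from the acquaintance models, request a commitment for a desired execution time, and accept agreements or counter-proposals without backtracking. The function names, data shapes, and the stand-in `propose` below are illustrative assumptions, not the actual GRATE* interface.

```python
# Sketch of GRATE*-style team formation (after Jennings 1995).
# All names and data shapes are illustrative.

def propose(agent, action, desired_time):
    # Stand-in for an agent's commitment proposal: agree to the
    # requested time, or counter with a later one (here, BAI counters).
    return desired_time if agent != "BAI" else desired_time + 10

def form_team(recipe, acquaintances, desired_time):
    """recipe: list of actions; acquaintances: {agent: set_of_actions};
    returns {action: (agent, committed_time)}, or None on failure."""
    team = {}
    for action in recipe:
        # Phase 1: find appropriately "skilled" agents via the models.
        skilled = [a for a in acquaintances if action in acquaintances[a]]
        if not skilled:
            return None            # no agent can perform this action
        # Phase 2: take a proposal; a counter-proposal (a later time
        # than asked) is always accepted -- no backtracking.
        agent = skilled[0]
        team[action] = (agent, propose(agent, action, desired_time))
    return team

team = form_team(
    recipe=["monitor", "diagnose"],
    acquaintances={"CSI": {"monitor"}, "BAI": {"diagnose"}},
    desired_time=100,
)
print(team)  # -> {'monitor': ('CSI', 100), 'diagnose': ('BAI', 110)}
```

The no-backtracking rule is what distinguishes this from full contract-net bidding: a counter-offer never reopens the allocation, which keeps formation cheap at the cost of possibly suboptimal schedules.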

  45. GRATE* Recipe Execution • Each agent starts carrying out its role in R • May wait for information from others, or work in parallel • The GRATE* cooperation layer coordinates execution • In case of contingencies, J.R. jumps in to handle execution • Likewise for normative behavior • This relies on task knowledge • to evaluate progress, • unachievability, etc.

  46. A GRATE* Team [Diagram: Recipe R spanning Agent 1/Action A, Agent 1/Action B, Agent 2/Action C, and Agent 3/Action D, connected by organizational links and dataflow/execution links. GRATE* knows the organizational structure, and therefore coordinates the dataflow/execution.]

  47. STEAM: A Shell for Teamwork Milind Tambe, JAIR, 1997

  48. STEAM at a glance: • Motivation similar to GRATE*: Robustness • Interesting: Completely different environments • Uses Joint-Intentions, but also SharedPlans • Adds significantly to the practice of teamwork

  49. STEAM Novelties • Sub-teams, individual roles clearly defined • Selective communications • Comm. not assumed reliable • Explicit mutual beliefs • Re-planning and team repair • Collaborative team-formation • Demonstrated re-use across several domains!
