SAC 2002 Tutorial: Multi-Agent Systems
Svet Brainov, University at Buffalo, 210 Bell Hall, Buffalo, NY 14260
Henry Hexmoor, University of Arkansas, Engineering Hall, Room 328, Fayetteville, AR 72701
Contents (Morning: Basics)
• I. Introduction: From DAI to Multiagency
  1. History and perspectives on multiagents (Henry)
  2. Architectural theories (Henry)
  3. Agent-Oriented Software Engineering (Henry)
  4. Mobility, reliability, and fault-tolerance (Henry)
• II. Enabling Technologies
  5. Game-Theoretic and Decision-Theoretic Agents (Svet)
  6. Communication, security (Svet)
Contents, continued (Afternoon: Issues)
• III. Enabling Technologies
  7. Social attitudes: values, norms, obligations, dependence, control, responsibility, roles (Henry)
  8. Benevolence, Preference, Power, Trust (Svet)
  9. Communication, Security (Svet)
  10. Agent Adaptation and Learning (Svet)
• IV. Closing
  11. Trends and open questions (Svet)
  12. Concluding Remarks (Svet)
Outline
1. History and perspectives on multiagents
2. Agent Architecture
3. Agent-Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming
6. Game-Theoretic and Decision-Theoretic Agents
7. Social attitudes: values, norms, obligations, dependence, control, responsibility, roles
8. Benevolence, Preference, Power, Trust
9. Communication, Security
10. Agent Adaptation and Learning
11. Trends and Open Questions
12. Concluding Remarks
Definitions
1. An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. [Yoav Shoham, 1993]
2. An entity is a software agent if and only if it communicates correctly in an agent communication language. [Genesereth and Ketchpel, 1994]
3. Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. [Hayes-Roth, 1995]
Definitions, continued
4. An agent is anything that can be viewed as (a) perceiving its environment, and (b) acting upon that environment. [Russell and Norvig, 1995]
5. A computer system that is situated in some environment and is capable of autonomous action in its environment to meet its design objectives. [Wooldridge, 1999]
Agents: A Working Definition
An agent is a computational system that interacts with one or more counterparts or real-world systems, exhibiting the following key features to varying degrees:
• Autonomy
• Reactiveness
• Pro-activeness
• Social abilities
Examples: autonomous robots, human assistants, service agents. The need is for automation and distributed use of online resources.
Test of Agenthood [Huhns and Singh, 1998] “A system of distinguished agents should substantially change semantically if a distinguished agent is added.”
Agents vs. Objects: "Objects with attitude" [Bradshaw, 1997]
• Agents resemble objects in that both are computational units that encapsulate a state and communicate via message passing.
• Agents differ from objects in that they have a strong sense of autonomy and are active rather than passive.
Agent-Oriented Programming (Yoav Shoham)
AOP principles:
1. The state of an object in OO programming has no generic structure. The state of an agent has a "mentalistic" structure: it consists of mental components such as beliefs and commitments.
2. Messages in object-oriented programming are coded in an application-specific, ad-hoc manner. A message in AOP is coded as a "speech act" according to a standard, application-independent agent communication language.
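To illustrate the second principle, a speech-act message can be sketched as follows. The dictionary layout and field names here are an assumption for illustration, not a normative encoding; KQML is one real agent communication language with performatives such as "tell" and "ask-if".

```python
# Sketch of a KQML-style speech-act message (layout is illustrative only).
def make_message(performative, sender, receiver, content):
    """Build an application-independent speech-act message."""
    return {
        "performative": performative,  # e.g. "tell", "ask-if", "achieve"
        "sender": sender,
        "receiver": receiver,
        "content": content,
    }

# The performative carries the communicative intent; the content is
# application-specific but the envelope is not.
msg = make_message("tell", "agent_a", "agent_b", "temperature(room1, 22)")
```

The key point of the sketch is that the envelope (the performative) is standardized and application-independent, while only the content varies by application.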
Agent-Oriented Programming Extends Peter Chen's ER Model (Gerd Wagner)
• Different entities may belong to different epistemic categories. There are agents, events, actions, commitments, claims, and objects.
• We distinguish between physical and communicative actions/events. Actions create events, but not all events are created by actions.
• Some of these modeling concepts are indexical, that is, they depend on the perspective chosen: in the perspective of a particular agent, actions of other agents are viewed as events, and commitments of other agents are viewed as claims against them.
Agent-Oriented Programming Extends Peter Chen's ER Model (Gerd Wagner)
• In the internal perspective of an agent, a commitment refers to a specific action to be performed in due time, while a claim refers to a specific event that is created by an action of another agent and has to occur in due time.
• Communication is viewed as asynchronous point-to-point message passing. We take the expressions receiving a message and sending a message as synonyms of perceiving a communication event and performing a communication act.
• There are six designated relationships in which specifically agents, but not objects, participate: only an agent perceives environment events, receives and sends messages, does physical actions, has commitments to perform some action in due time, and has claims that some action event will happen in due time.
Agent-Oriented Programming Extends Peter Chen's ER Model (Gerd Wagner)
• An institutional agent consists of a certain number of (institutional, artificial, and human) internal agents acting on its behalf. An institutional agent can only perceive and act through its internal agents.
• Within an institutional agent, each internal agent has certain rights and duties.
• There are three kinds of duties: an internal agent may have the duty to fulfill commitments of a certain type, the duty to monitor claims of a certain type, or the duty to react to events of a certain type on behalf of the organization.
• A right refers to an action type such that the internal agent is permitted to perform actions of that type on behalf of the organization.
Agent Typology
• Human agents: Person, Employee, Student, Nurse, or Patient
• Artificial agents: owned and run by a legal entity
• Institutional agents: a bank or a hospital
• Software agents: agents realized in software
• Information agents: databases and the Internet
• Autonomous agents: non-trivial independence
• Interactive/Interface agents: designed for interaction
• Adaptive agents: non-trivial ability for change
• Mobile agents: code and logic mobility
Agent Typology, continued
• Collaborative/Coordinative agents: non-trivial ability for coordination, autonomy, and sociability
• Reactive agents: no internal state and shallow reasoning
• Hybrid agents: a combination of deliberative and reactive components
• Heterogeneous agents: a system with various agent sub-components
• Intelligent/Smart agents: reasoning and intentional notions
• Wrapper agents: facility for interaction with non-agents
Multi-agency
A multi-agent system is a system made up of multiple agents that exhibit, to varying degrees of commonality and adaptation, the following key features:
• Social rationality
• Normative patterns
• System of values
Examples: HVAC, eCommerce, space missions, soccer, Intelligent Home, "talk" monitor. The motivation is coherence and distribution of resources.
Applications of Multiagent Systems
• Electronic commerce: B2B, InfoFlow, eCRM
• Network and system management agents: e.g., telecommunications companies
• Real-time monitoring and control of networks: ATM
• Modeling and control of transportation systems: delivery
• Information retrieval: online search
• Automatic meeting scheduling
• Electronic entertainment: eDog
Applications of Multiagent Systems, continued
• Decision and logistic support agents: military and utility companies
• Interest matching agents: commercial sites like Amazon.com
• User assistance agents: e.g., the MS Office assistant
• Organizational structure agents: supply-chain operations
• Industrial manufacturing and production: manufacturing cells
• Personal agents: email
• Investigation of complex social phenomena such as the evolution of roles, norms, and organizational structures
Summary of Business Benefits • Modeling existing organizations and dynamics • Modeling and Engineering E-societies • New tools for distributed knowledge-ware
Three Views of Multi-agency
• Constructivist: Agents are rational in the sense of Newell's principle of individual rationality. They perform only goals that bring them a positive net benefit, without regard to other agents. These are self-interested agents.
• Sociality: Agents are rational in the sense of Jennings' principle of social rationality. They perform actions whose joint benefit is greater than their joint loss. These are selfless, responsible agents.
• Reductionist: Agents accept all goals they are capable of performing. These are benevolent agents.
Multi-agency: Allied Fields
• In DAI, a problem is automatically decomposed among distributed nodes, whereas in multi-agent systems each agent chooses whether to participate.
• Distributed planning is distributed and decentralized action selection, whereas in multi-agent systems agents keep their own copies of a plan that might include others.
• MAS: (1) online social laws; (2) agents may adopt goals and adapt beyond any one problem
• DPS: offline social laws
• CPS: (1) agents are a 'team'; (2) agents 'know' the shared goal
Multi-agent Assumptions and Goals
• Agents have their own intentions, and the system has distributed intentionality
• Agents model other agents' mental states in their own decision making
• Agent internals are less central than agent interactions
• Agents deliberate over their interactions
• Emergence at the agent level and at the interaction level is desirable
• The goal is to find principles for, or principled ways to explore, interactions
Origins of Multi-agent Systems
• Carl Hewitt's Actor model, early 1970s
• Blackboard systems: Hearsay (1975), BB1, GBB
• Distributed Vehicle Monitoring Testbed (DVMT, 1983)
• Distributed AI
• Distributed OS
MAS Orientations
Allied fields: Computational Organization Theory, Databases, Sociology, Formal AI, Economics, Distributed Problem Solving, Cognitive Science, Psychology, Systems Theory, Distributed Computing
Conferences • ICMAS 96, 98, 00, 02 • Autonomous Agents 96, 97, 98, 99, 00, 02 • CIA, ATAL, CEEMAS
Multi-agents in the small versus in the large
• In the small (Distributed AI): a handful of "smart" agents, with emergence in the agents
• In the large: 100+ "simple" agents, with emergence in the group: Swarms (Bugs) http://www.swarm.org/
Henry Hexmoor's Tree of Research Issues (diagram): Agents branch into Autonomy, Purposefulness, Learning, Action Selection, Timeliness, Habituation, Commitment, Skill formation, Automaticity, Perception, Teamwork, Architecture, Cooperation, Inference, and Social attitudes (Values, Norms, Obligations).
Outline
1. History and perspectives on multiagents
2. Agent Architecture
3. Agent-Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming
6. Game-Theoretic and Decision-Theoretic Agents
7. Social attitudes: values, norms, obligations, dependence, control, responsibility, roles
8. Benevolence, Preference, Power, Trust
9. Communication, Security
10. Agent Adaptation and Learning
11. Trends and Open Questions
12. Concluding Remarks
Abstract Architecture (diagram): an agent perceives states of the Environment and performs actions upon it.
Architectures • Deduction/logic-based • Reactive • BDI • Layered (hybrid)
Abstract Architectures
• An abstract model: ⟨S, A, action⟩, an abstract view
• S = {s1, s2, …} – environment states
• A = {a1, a2, …} – set of possible actions
• This allows us to view an agent as a function action : S* → A (from finite sequences of states to actions)
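The abstract view of an agent as a function from state histories to actions can be sketched directly. This is a minimal illustration; the state names and the decision rule are chosen here, not taken from the tutorial.

```python
from typing import Callable, Sequence

State = str
Action = str

# Abstractly, an agent is a function from a finite run of environment
# states to the next action: action : S* -> A.
Agent = Callable[[Sequence[State]], Action]

def last_state_agent(history: Sequence[State]) -> Action:
    """A trivial agent that reacts only to the most recent state."""
    return "explore" if history[-1] == "unknown" else "wait"

chosen = last_state_agent(["known", "unknown"])
```

Because the domain is S* (the whole history) rather than S, this abstraction also covers agents with internal state; `last_state_agent` simply ignores everything but the last element.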
Logic-Based Architectures
• These agents have an internal state D (a database of formulae)
• see and next functions model perception and state update, and decision making uses a set of deduction rules for inference:
  see : S → P
  next : D × P → D
  action : D → A
• Use logical deduction to try to prove the next action to take
• Advantages: simple, elegant, logical semantics
• Disadvantages: computational complexity; representing the real world
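The see/next/action cycle of a logic-based agent can be sketched with a toy rule set. The percepts, facts, and rules below are assumptions for illustration; a real logic-based agent would run a theorem prover over a logical database rather than look up tuples.

```python
# Toy logic-based agent: the internal database D is a set of facts,
# and "deduction" is a lookup over rules mapping facts to actions.

def see(state):
    """see : S -> P, map an environment state to a percept."""
    return ("dirty",) if state == "dirty_floor" else ("clean",)

def next_db(db, percept):
    """next : D x P -> D, fold the new percept into the database."""
    return db | {percept}

def action(db):
    """action : D -> A, try to prove an action from the database."""
    rules = {("dirty",): "vacuum", ("clean",): "idle"}
    for fact, act in rules.items():
        if fact in db:
            return act
    return "noop"  # nothing provable: fall back to a default

db = next_db(set(), see("dirty_floor"))
```

The disadvantages noted above show up even here: encoding a realistic environment as facts, and proving the right action in time, are the hard parts that this sketch elides.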
Reactive Architectures • Reactive Architectures do not use • symbolic world model • symbolic reasoning • An example is Rod Brooks’s subsumption architecture • Advantages • Simplicity, computationally tractable, robust, elegance • Disadvantages • Modeling limitations, correctness, realism
Reflexive Architectures: the simplest type of reactive architecture
• Reflexive agents decide what to do without regard to history; purely reflexive: action : P → A
• Example: thermostat
  action(s) = off if temp = OK, on otherwise
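The thermostat example above is small enough to write out in full. The percept encoding is an assumption made here for illustration.

```python
def thermostat(percept: str) -> str:
    """Purely reflexive action : P -> A; no internal state, no history."""
    return "off" if percept == "temp_OK" else "on"
```

Note that the function's only input is the current percept, which is exactly what distinguishes a reflexive agent from the history-dependent action : S* → A of the abstract architecture.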
Reflex agent without state (Russell and Norvig, 1995)
Goal-oriented agent: a more complex reactive agent (Russell and Norvig, 1995)
Utility-based agent: a complex reactive agent (Russell and Norvig, 1995)
BDI: a Formal Method
• Belief: states, facts, knowledge, data
• Desire: wish, goal, motivation (these might conflict)
• Intention: (a) select actions, (b) perform actions, (c) explain choices of action (no conflicts)
• Commitment: persistence of intentions and trials
• Know-how: having the procedural knowledge for carrying out a task
Belief-Desire-Intention (diagram): the agent senses the Environment; belief revision updates Beliefs; Beliefs generate options (Desires); a filter over Beliefs, Desires, and Intentions yields Intentions, which determine how the agent acts on the Environment.
Why is BDI a Formal Method?
• BDI is typically specified in the language of modal logic with possible-worlds semantics.
• Possible worlds capture the various ways the world might develop. Since the formalism in [Wooldridge 2000] assumes at least a KD axiomatization for each of B, D, and I, each of the sets of possible worlds representing B, D, and I must be consistent.
• A KD45 logic with the following axioms (φ, ψ formulae, a an agent, t a time; BDI stands for any of B, D, I):
  K: BDI(a, φ → ψ, t) → (BDI(a, φ, t) → BDI(a, ψ, t))
  D: BDI(a, φ, t) → ¬BDI(a, ¬φ, t)
  4: B(a, φ, t) → B(a, B(a, φ, t), t)
  5: ¬B(a, φ, t) → B(a, ¬B(a, φ, t), t)
• K and D together give the normal modal system KD
A simplified BDI agent algorithm
1. B := B0;
2. I := I0;
3. while true do
4.   get next percept ρ;
5.   B := brf(B, ρ);           // belief revision
6.   D := options(B, D, I, O); // determination of desires
7.   I := filter(B, D, I, O);  // determination of intentions
8.   π := plan(B, I);          // plan generation
9.   execute π
10. end while
Correspondences
• Belief-Goal Compatibility: Des φ → Bel φ
• Goal-Intention Compatibility: Int φ → Des φ
• Volitional Commitment: Int(Do(a)) → Do(a)
• Awareness of Goals and Intentions: Des φ → Bel(Des φ); Int φ → Bel(Int φ)
Layered Architectures
• Layering is based on a division of behaviors into automatic and controlled.
• Layering may be horizontal (i.e., I/O at each layer) or vertical (i.e., I/O is handled by a single layer).
• Advantages: popular and fairly intuitive modeling of behavior
• Disadvantages: complex, non-uniform representations
Outline
1. History and perspectives on multiagents
2. Agent Architecture
3. Agent-Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming
6. Game-Theoretic and Decision-Theoretic Agents
7. Social attitudes: values, norms, obligations, dependence, control, responsibility, roles
8. Benevolence, Preference, Power, Trust
9. Communication, Security
10. Agent Adaptation and Learning
11. Trends and Open Questions
12. Concluding Remarks
Agent-Oriented Software Engineering
• AOSE is an approach to developing software using agent-oriented abstractions that model high-level interactions and relationships.
• Agents are used to model run-time decisions about the nature and scope of interactions that are not known ahead of time.
Designing Agents: Recommendations from H. Van Dyke Parunak's (1996) "Go to the Ant": Engineering Principles from Natural Multi-Agent Systems, Annals of Operations Research, special issue on AI and Management Science.
1. Agents should correspond to things in the problem domain rather than to abstract functions.
2. Agents should be small in mass (a small fraction of the total system), time (able to forget), and scope (avoiding global knowledge and action).
3. The agent community should be decentralized, without a single point of control or failure.
4. Agents should be neither homogeneous nor incompatible, but diverse. Randomness and repulsion are important tools for establishing and maintaining this diversity.
5. Agent communities should include a dissipative mechanism to whose flow they can orient themselves, thus leaking entropy away from the macro level at which they do useful work.
6. Agents should have ways of caching and sharing what they learn about their environment, whether at the level of the individual, the generational chain, or the overall community organization.
7. Agents should plan and execute concurrently rather than sequentially.
Organizations
• Human organizations are several agents engaged in multiple goal-directed tasks, with distinct knowledge, culture, memories, history, and capabilities, and with legal standing separate from that of the individual agents.
• Computational Organization Theory (COT) models information production and manipulation in organizations of human and computational agents.
Management of Organizational Structure
• Organizational constructs are modeled as entities in multiagent systems
• Multiagent systems have built-in mechanisms for flexibly forming, maintaining, and abandoning organizations
• Multiagent systems can provide a variety of stable intermediary forms during rapid systems development