From Multiagent Systems to Multiagent Societies Michael Berger Based on: 1) “Multiagent Systems and Societies of Agents” / Michael N. Huhns and Larry M. Stephens, in Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence (Chapter 2) / Gerhard W. Weiss 2) “Commitments and Conventions: The Foundation of Coordination in Multi-Agent Systems” / Nick R. Jennings
Overview • Agent and Environment • Communications • Interactions • Commitments and Conventions
Agent - Definition • An active object with the ability to perceive, reason and act. • Has explicitly represented knowledge and a mechanism for operating on or drawing inferences from its knowledge. • Has the ability to communicate.
Environment - Categories • Knowable (Accessible) • Predictable (Deterministic) • Controllable • Historical (non-Episodic) • Teleological • Real-time (Dynamic) • An open environment answers No to the first three categories and Yes to the last three.
Communications - Overview • Motivation • Meanings • Speech Acts • Message Types and Dialogue Roles • Communication Protocols • KQML • KIF • Ontologies
Motivation (I) • Coordination - the extent to which agents avoid extraneous activity. • Reducing resource contention • avoiding livelock / deadlock • maintaining safety conditions • Coherence - how well the system behaves as a unit. • Determining shared goals • Pooling knowledge and evidence
Motivation (II) • Coordination - “not making things worse”. • Coherence - “making things better”. • Communication enables the agents to coordinate their actions and behavior, resulting in systems that are more coherent.
Meanings (I) • Communication - consists of: • Syntax - how the symbols of communication are structured. • Semantics - what the symbols denote. • Pragmatics - how the symbols are interpreted. • Meaning = Semantics + Pragmatics
Meanings (II) • Dimensions of meaning: • Descriptive vs. Prescriptive • Speaker’s vs. Hearer’s vs. Society’s Perspective • Semantics vs. Pragmatics • Contextuality • Identity • Cardinality
Speech Acts • Speech act theory is used as a basis for analyzing human communication. • The theory views human natural-language utterances as actions. • Speech acts have three aspects: • Locution - the physical utterance by the speaker. • Illocution - the intended meaning of the utterance by the speaker. • Perlocution - the action that results from the locution. • “Performative” - a speech act with the property that “saying it makes it so” (e.g. promise, report, tell, request, demand).
Message Types and Dialogue Roles • Two basic message types: • Assertion • Query • Three dialogue roles: • Master (active) • Sends queries (questions), receives assertions (answers), sends assertions (fact determinations). • Slave (passive) • Receives queries (questions), sends assertions (answers), receives assertions (fact determinations). • Peer • Master + Slave
Communication Protocols • Communication can be: • binary (single sender, single receiver) • n-ary (single sender, many receivers) • Messages sent using communication protocols are specified by a data structure, that contains the following fields: • Sender • Receiver • Encoding / Decoding functions • Language of message • Message content
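The message data structure above can be sketched in code; a minimal illustration (Python is assumed, and the encode/decode functions are made-up placeholders, not part of any real protocol):

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of the protocol message structure from the slide; field names
# mirror the bullets, encode/decode are illustrative stand-ins.
@dataclass
class Message:
    sender: str
    receiver: str
    encode: Callable[[str], str]   # encoding function applied before sending
    decode: Callable[[str], str]   # decoding function applied on receipt
    language: str
    content: str

msg = Message(
    sender="Cowboy",
    receiver="Shadow",
    encode=lambda s: s.upper(),    # toy "wire encoding"
    decode=lambda s: s.lower(),
    language="English",
    content="a is broken",
)
print(msg.decode(msg.encode(msg.content)))  # content round-trips intact
```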
Communicating Agents (I) • [Diagram: one agent tells another: “a is broken”.]
KQML • KQML - Knowledge Query and Manipulation Language. • Basic KQML performative defined by a structure that contains the following fields: • Sender • Receiver • Language • Ontology • Content • More advanced performatives. • Language used as wrapper for other languages - Domain independent! • Forwarding and nesting possible.
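The KQML performative structure, including the forwarding/nesting property, can be sketched as nested records; a hypothetical helper, not a real KQML library:

```python
# Sketch of a KQML-style performative as a nested structure. Because KQML
# acts as a wrapper for other languages, the content field may itself
# hold another performative (nesting / forwarding).
def performative(name, sender, receiver, language, ontology, content):
    return {"performative": name, "sender": sender, "receiver": receiver,
            "language": language, "ontology": ontology, "content": content}

inner = performative("tell", "Cowboy", "Shadow", "KIF", "Computers", "(broken a)")
# Forwarding: the whole inner performative becomes the outer content.
outer = performative("forward", "Cowboy", "Router", "KQML", None, inner)
print(outer["content"]["content"])  # the nested KIF payload
```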
Communicating Agents (II) • [Diagram: Cowboy (languages: English, Spanish, Basque) sends - Sender: Cowboy, Receiver: Shadow, Language: English, Content: “a is broken”. Shadow (languages: French) replies “Je ne comprends pas” (“I do not understand”).]
KIF • KIF - Knowledge Interchange Format. • Prefix version of first-order predicate calculus. • Example: (or (and (> ?a 6) (> ?b 5)) (< ?c 7)) • Possible to encode knowledge about knowledge (second-order statements) and to describe procedures.
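The prefix expression above can be evaluated mechanically once variables are bound; a minimal sketch, assuming expressions are nested Python lists and `?`-prefixed strings are variables (real KIF is far richer than this):

```python
# Toy evaluator for prefix-style KIF-like expressions.
def kif_eval(expr, env):
    if isinstance(expr, str) and expr.startswith("?"):
        return env[expr]               # variable lookup in the bindings
    if not isinstance(expr, list):
        return expr                    # literal constant (e.g. a number)
    op, *args = expr
    vals = [kif_eval(a, env) for a in args]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    if op == ">":
        return vals[0] > vals[1]
    if op == "<":
        return vals[0] < vals[1]
    raise ValueError(f"unknown operator: {op}")

# (or (and (> ?a 6) (> ?b 5)) (< ?c 7)) -- the slide's example
expr = ["or", ["and", [">", "?a", 6], [">", "?b", 5]], ["<", "?c", 7]]
print(kif_eval(expr, {"?a": 7, "?b": 6, "?c": 9}))  # first disjunct holds
```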
Communicating Agents (III) • [Diagram: Cowboy (languages: English, Spanish, Basque, KIF; ontologies: Fashion, Politics, Weather) sends - Sender: Cowboy, Receiver: Shadow, Language: KIF, Ontology: Computers, Content: broken(a). Shadow (languages: French, KIF; ontologies: Computers, Politics, Sports) replies bad(message).]
Ontologies • Ontology - specification of objects, concepts and relationships in an area of interest (domain). • Concepts represented in first-order logic as unary predicates. Relationships represented by n-ary predicates. • Note: predicates refer to classes of objects, not instances of objects. • except “instanceof” • All agents share the same ontology - i.e. all agents use and understand the same “vocabulary”!
Communicating Agents (IV) • [Diagram: Cowboy (languages: English, Spanish, Basque, KIF; ontologies: Fashion, Politics, Weather, Computers) sends - Sender: Cowboy, Receiver: Shadow, Language: KIF, Ontology: Computers, Content: broken(a). Shadow (languages: French, KIF; ontologies: Computers, Politics, Sports) derives need_fixing(a) from the Computer Ontology: instanceof(a, disk); instanceof(X, disk) AND broken(X) ==> need_fixing(X).]
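The inference on this slide - deriving need_fixing(a) from instanceof(a, disk) and broken(a) - can be sketched as one forward-chaining step; a toy illustration with facts as tuples and the slide's single rule hard-coded:

```python
# Facts from the slide's Computer Ontology example.
facts = {("instanceof", "a", "disk"), ("broken", "a")}

def forward_chain(facts):
    # Rule: instanceof(X, disk) AND broken(X) ==> need_fixing(X)
    derived = set(facts)
    for (pred, *args) in facts:
        if pred == "instanceof" and args[1] == "disk":
            x = args[0]
            if ("broken", x) in facts:
                derived.add(("need_fixing", x))
    return derived

print(("need_fixing", "a") in forward_chain(facts))  # the derived fact
```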
Interactions - Overview • Motivation • Negotiation • Market Mechanisms • Contract Net • Truth Maintenance Systems • Blackboard Systems
Motivation • Communication is a necessary condition for coordination and coherence, but not a sufficient one. • It would help if agents could: • Determine shared goals • Avoid unnecessary conflicts • Pool knowledge and evidence
Negotiation • Negotiation - a process by which a joint decision is reached by two or more agents, each trying to reach an individual goal. • Main steps: • One of the agents communicates its initial position. • While no agreement is reached, each agent makes a proposal in its turn. These may include: • Concessions. • New alternatives. • Ends with agreement or disagreement. • Mechanisms for negotiation may be: • Environment-centered • Agent-centered
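The main steps above - initial positions, alternating proposals with concessions, ending in agreement or disagreement - can be sketched as a loop over a single price issue; the numbers and fixed concession steps are illustrative assumptions, not part of any particular mechanism:

```python
# Toy concession-based negotiation over one numeric issue (a price).
# Each round, the buyer concedes upward and the seller downward until
# the offers cross (agreement) or the round limit is hit (disagreement).
def negotiate(buyer_offer, seller_offer, buyer_step, seller_step, max_rounds=100):
    for _ in range(max_rounds):
        if buyer_offer >= seller_offer:          # offers crossed: deal
            return ("agreement", (buyer_offer + seller_offer) / 2)
        buyer_offer += buyer_step                # buyer's concession
        seller_offer -= seller_step              # seller's concession
    return ("disagreement", None)

print(negotiate(50, 100, 5, 5))                  # meets in the middle
```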
Negotiation Mechanisms: Environment-Centered • Environment designer. • “How can the rules of the environment be designed so that the agents will interact productively and fairly?” • A negotiation mechanism would ideally have the following attributes: • Efficiency • Stability • Simplicity • Distribution • Symmetry
Negotiation Mechanisms: Agent-Centered • Agent designer. • “Given an environment, what is the best strategy for my agent to follow?” • Most negotiation mechanisms assume that agents are economically rational. • For example, a negotiation protocol defined in terms of: • Deal • Utility • Negotiation set
Market Mechanisms (I) • Everything of interest to the agents is described in terms of prices. • Two types of agents: • Consumers • Producers • Markets of goods are interconnected.
Market Mechanisms (II) • Big market will usually reach a competitive equilibrium: • Consumers bid to maximize utility, subject to their budget constraints. • Producers bid to maximize profits, subject to their technological capability. • Net demand is zero for all goods.
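The competitive-equilibrium condition above (net demand zero for all goods) suggests a simple price-adjustment loop; a one-good sketch with made-up linear demand and supply curves, not taken from the slides:

```python
# Tâtonnement-style sketch: raise the price when demand exceeds supply,
# lower it otherwise, until the market (approximately) clears.
def clear_market(price, steps=1000, lr=0.01):
    for _ in range(steps):
        demand = 10 - price        # consumers buy less as price rises
        supply = price             # producers offer more as price rises
        net = demand - supply      # equilibrium: net demand is zero
        if abs(net) < 1e-6:
            break
        price += lr * net          # adjust price toward equilibrium
    return price

p = clear_market(2.0)
print(round(p, 3))                 # converges near the clearing price 5
```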
Contract Net (I) • Interaction protocol for cooperative problem solving. • Modeled on the contracting mechanism used by businesses. • For any assignment, agents are divided ad hoc into managers and contractors.
Contract Net (II) • Managers: • Announce a task that needs to be performed. • Receive and evaluate bids from potential contractors. • Award a contract to a suitable contractor. • Receive and synthesize results. • Contractors: • Receive task announcements. • Evaluate their own capability to respond. • Respond (decline / bid). • Perform the task if the bid is accepted by the manager. • Report the task’s results.
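One announce-bid-award round of the protocol above can be sketched as follows; the cost-based bidding and the specific contractor behaviors are illustrative assumptions:

```python
# Sketch of a single Contract Net round. Each contractor evaluates its
# own capability: it returns an estimated cost to bid, or None to decline.
# The manager awards the contract to the lowest bidder.
def run_contract_net(task, contractors):
    bids = []
    for name, evaluate in contractors.items():
        cost = evaluate(task)              # contractor self-evaluation
        if cost is not None:               # None means "decline"
            bids.append((cost, name))
    if not bids:
        return None                        # no contractor can do the task
    _, winner = min(bids)                  # award to the best (cheapest) bid
    return winner

contractors = {
    "A": lambda t: 10 if t == "paint" else None,
    "B": lambda t: 7 if t == "paint" else None,
    "C": lambda t: None,                   # C declines every announcement
}
print(run_contract_net("paint", contractors))
```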
Truth Maintenance System (I) • Truth Maintenance System (TMS) - ensures the integrity of an agent’s knowledge and keeps the knowledge base: • Stable • Each datum that has a valid justification is believed. • Each datum that lacks a valid justification and is not in the initial belief set is disbelieved. • Well-founded • Permits no set of its beliefs to be mutually dependent. • Logically consistent • No datum is both believed and disbelieved. • Every datum is either believed or disbelieved. • A datum and its negation are never both believed.
TMS Graph • [Graph: Agent 1 holds P(IN), Q(OUT), R(IN), S(OUT) and T(INTERNAL); Agent 2 holds T(EXTERNAL) and U(OUT); + and − justification links connect the data within and across the two agents.]
Truth Maintenance System (II) • Every datum is labeled either: • IN (in the initial belief set). • INTERNAL (“IN” because of a local justification). • EXTERNAL (“IN” because another agent asserts it). • OUT (disbelieved). • When a justification is added or removed, the TMS is invoked: • Some data are unlabeled, including the newly justified datum and its consequences in all agents. • A new labeling is introduced for all unlabeled data. • If any affected agent fails to label, backtracking occurs. • Principle of TMS changes: affect as few agents as possible and as few beliefs as possible.
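The stability condition - believe exactly the data with valid justifications - can be sketched as a fixed-point computation; a toy, single-agent, monotonic version (real TMSs also handle OUT-antecedents, retraction, and multi-agent backtracking):

```python
# Toy justification-based labeling: a datum is believed if it is a
# premise, or if some justification for it has all antecedents believed.
# Iterate to a fixed point; anything left unjustified stays disbelieved.
def label(premises, justifications):
    believed = set(premises)
    changed = True
    while changed:
        changed = False
        for datum, antecedents in justifications:
            if datum not in believed and all(a in believed for a in antecedents):
                believed.add(datum)
                changed = True
    return believed

premises = {"P", "R"}
justs = [("T", ["P"]),     # T is justified by P (which is believed)
         ("S", ["Q"])]     # S's only justification depends on Q: invalid
print(sorted(label(premises, justs)))
```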
TMS - Example (I) • [Graph: initial labeling - Agent 1: P(IN), Q(OUT), R(IN), S(OUT), T(INTERNAL); Agent 2: T(EXTERNAL), U(OUT).]
TMS - Example (II) • [Graph: a new + justification is added; the labeling is unchanged so far.]
TMS - Example (III) • [Graph: P and Q become unlabeled; the other data keep their labels.]
TMS - Example (IV) • [Graph: T (in both agents) and U are unlabeled as well; R(IN) and S(OUT) remain.]
TMS - Example (V) • [Graph: all data are now unlabeled; only the justification structure remains.]
TMS - Example (VI) • [Graph: relabeled outcome - Agent 1: P(OUT), Q(OUT), R(OUT), S(IN), T(OUT); Agent 2: T(OUT), U(IN).]
Blackboard Systems (I) • Akin to the following metaphor: • A group of specialists working together on solving a problem. • A common blackboard allows every specialist to report (“write down”) his sub-task results. • Every specialist may be assisted in his work by information reported on the blackboard. • Every specialist is called a “knowledge source” (KS).
Blackboard Systems (II) • Characteristics of blackboard systems: • Independence of expertise. • Diversity in problem-solving techniques. • Flexible representation of blackboard information. • Common interaction language. • Event-based activation. • Need for control. • Incremental solution generation.
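The characteristics above - independent knowledge sources, event-based activation, incremental solution generation - can be sketched as a control loop; a minimal illustration in which each KS is a hypothetical (trigger, action) pair:

```python
# Minimal blackboard control loop: fire any knowledge source whose
# trigger matches the current blackboard contents, and add its result;
# repeat until no KS can contribute anything new (incremental solution).
def run_blackboard(blackboard, knowledge_sources):
    fired = True
    while fired:
        fired = False
        for trigger, action in knowledge_sources:
            if trigger(blackboard):
                contribution = action(blackboard)
                if contribution not in blackboard:
                    blackboard.add(contribution)   # write result on the board
                    fired = True                   # may activate other KSs
    return blackboard

ks = [
    (lambda bb: "raw" in bb, lambda bb: "parsed"),     # specialist 1
    (lambda bb: "parsed" in bb, lambda bb: "solved"),  # specialist 2
]
print(sorted(run_blackboard({"raw"}, ks)))
```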
Distributed Goal Search Model • Goal solutions are expressed as an AND/OR graph (directed and acyclic). • High-level goals are root nodes. • Primitive goals are leaf nodes. • The graph also contains the resources needed for solving primitive goals. • Dependencies may exist between different goals or between a goal and its resource. • Strong vs. weak • Uni-directional vs. bi-directional • Note that dependencies from resources to goals may be resolved by adding more instances of the resource.
Distributed Goal Search Graph • [Diagram: goals G1 (Agent1) and G2 (Agent2) decompose into sub-goals (G11, G12, ..., G2p,2,2) down to primitive goals d11 ... d2z, which consume resources; strong and weak dependencies link goals within and across the two agents.]
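A solvability check over an AND/OR goal graph like the one above can be sketched recursively; the encoding (each node maps to a kind and its children, leaves map to a boolean "primitive goal achievable") is a hypothetical simplification that ignores resources and dependencies:

```python
# Sketch of AND/OR goal-graph evaluation. An AND node is solvable only
# if all children are; an OR node if any child is. The graph is assumed
# acyclic, matching the model in the text.
def solvable(goal, graph):
    node = graph[goal]
    if isinstance(node, bool):             # primitive (leaf) goal
        return node
    kind, children = node
    results = [solvable(c, graph) for c in children]
    return all(results) if kind == "AND" else any(results)

graph = {
    "G1": ("AND", ["G11", "G12"]),
    "G11": ("OR", ["G111", "G112"]),
    "G12": True,                           # primitive, achievable
    "G111": False,                         # primitive, not achievable
    "G112": True,
}
print(solvable("G1", graph))
```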
Interactions among Agents for Distributed Goal Search • Defining the goal graph. • Assigning particular regions of the graphs to different agents. • Controlling decisions about which areas of the graph to explore. • Traversing the graph. • Ensuring that successful traversal of the graph is reported.
Commitment - Definition A pledge from one agent to another agent (or itself) to undertake a specified course of action.
Commitments • Practical reasoning agents employ intentions for choosing a course of action - a kind of “self-commitment”. • In computational problems, different agents commit themselves to solving different sub-goals of a larger goal. • Agents may inform other agents of the sub-goals to which they are self-committed. In stronger terms, they may commit to other agents to solve these sub-goals.
Motivation for Conventions (I) • Agents do not have complete knowledge of the goals and intentions of other agents. • It is infeasible to have all agents re-evaluate the goals of other agents at every step: • Limited computation power • Limited communication bandwidth • It is infeasible to have one agent or database keep all information about all agents: • Bottleneck • Single point of failure
Motivation for Conventions (II) • If circumstances change, an agent might work sub-optimally until it learns about the change: • Another agent solves a goal • Another agent commits itself to a goal • Another agent drops its commitment to a goal • Another agent discovers that a goal is no longer attainable • We would still like to keep a distributed system of agents...