Combining agents into societies Luís Moniz Pereira Centro de Inteligência Artificial – CENTRIA Universidade Nova de Lisboa DEIS, Università di Bologna, 22 March 2004
Summary • Goal and motivation • Overview of MDLP (Multi-Dimensional LP) • Combining inter- and intra-agent societal viewpoints • An architecture for evolving Multi-Agent viewpoints • A logical framework for modelling societies • Future work and conclusion
Goal Explore the applicability of MDLP to represent multiple agents’ views of societal knowledge dynamics and evolution • The representation is the core of the MINERVA agent architecture and system. • It was designed with the aim of providing a common agent framework based on the strengths of Logic Programming.
Motivation - 1 • The notion of agency has claimed a major role in modern AI research • LP and non-monotonic reasoning are appropriate for rational agents: • Utmost efficiency is not always crucial • Clear specification and correctness are crucial • LP provides a general, encompassing, rigorous declarative and procedural framework for rational functionalities
Motivation - 2 • Until recently, LP could be seen as good for representing static, non-contradictory knowledge. • In the agency paradigm we need to consider: • Ways of integrating knowledge from different sources evolving in time • Knowledge expressing state transitions • Knowledge about environment and societal evolution, and each agent’s own behavioural evolution • LP declaratively describes states well. • But LP must describe state transitions too.
MDLP overview • DLP synopsis • MDLP motivation • MDLP semantics • Multiple representational dimensions in a multi-agent system • Representation prevalence • Overview conclusions
Dynamic LP • DLP was introduced to express LP’s linear evolution in dynamic environments, via updates • DLP gives semantics to sequences of GLPs • Each program represents a distinct state of knowledge, where states may specify: • different time points, different hierarchical instances, different viewpoints, etc. • Different states may have mutually contradictory or overlapping information, and DLP determines the semantics for each state sequence
MDLP Motivating Example • Parliament issues law L1 at time t1 • A local authority issues law L2 at time t2 > t1 • Parliamentary laws override local laws, but not vice-versa • More recent laws have precedence over older ones • How to combine these two dimensions of knowledge precedence? • DLP with Multiple Dimensions (MDLP)
MDLP • In MDLP knowledge is given by a set of programs • Each program represents a different piece of updating knowledge assigned to a state • States are organized by a DAG (Directed Acyclic Graph) representing their precedence relation • MDLP determines the composite semantics at each state, according to the DAG paths • MDLP allows for combining knowledge updates that evolve along multiple dimensions
Generalized Logic Programs • To represent negative info in LP updates, we need LPs allowing not in rule heads • Programs are sets of generalized LP rules:
    A ← B1, …, Bk, not C1, …, not Cm
    not A ← B1, …, Bk, not C1, …, not Cm
• The semantics is a generalization of stable models (SMs)
MDLP - definition • Definition: A Multi-Dimensional Dynamic Logic Program, P, is a pair (PD, D) where: • D = (V, E) is an acyclic digraph • PD = { Pv : v ∈ V } is a set of generalized logic programs indexed by the vertices of D
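For concreteness, the following Python sketch (illustrative only; none of these names come from the slides) shows one way to represent an MDLP as a DAG of states with a generalized program attached to each vertex, together with the set of states below a given state.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Rule:
        """A generalized LP rule: head is 'a' or 'not a', body is a tuple of such literals."""
        head: str
        body: tuple = ()

    @dataclass
    class MDLP:
        vertices: set                                   # the states V
        edges: set                                      # directed edges (u, v) of the acyclic digraph D
        programs: dict = field(default_factory=dict)    # P_D: state -> set of Rules

        def below(self, s):
            """All states i with i preceding or equal to s: s plus every vertex from which s is reachable."""
            seen, stack = {s}, [s]
            while stack:
                v = stack.pop()
                for (u, w) in self.edges:
                    if w == v and u not in seen:
                        seen.add(u)
                        stack.append(u)
            return seen

        def program_at(self, s):
            """P_s: the rules of every state below s, tagged with their originating state."""
            return {(i, r) for i in self.below(s) for r in self.programs.get(i, set())}

Tagging each rule with its originating state makes it straightforward to express the rejection relation of the semantics given next.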
MDLP - semantics 1 • Definition: Let P = (PD, D) be an MDLP. An interpretation Ms is a stable model of the multi-dimensional update at state s ∈ V iff
    Ms = least( [Ps – Reject(s, Ms)] ∪ Defaults(Ps, Ms) )
where Ps = ∪ { Pi : i ⪯ s }
MDLP - semantics 2
    Ms = least( [Ps – Reject(s, Ms)] ∪ Defaults(Ps, Ms) )
where:
    Reject(s, Ms) = { r ∈ Pi | ∃ r’ ∈ Pj, i ≺ j ⪯ s, head(r) = not head(r’) ∧ Ms ⊨ body(r’) }
    Defaults(Ps, Ms) = { not A | ∄ r ∈ Ps : head(r) = A ∧ Ms ⊨ body(r) }
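As a reading aid only (not part of the original work), here is a naive, self-contained Python check of the condition above: rules are (head, body) pairs, "not a" literals are treated as fresh atoms when computing the least model, and all helper names are this sketch's own.

    def ancestors(edges, v):
        """States u preceding or equal to v: v plus every vertex from which v is reachable."""
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for (a, b) in edges:
                if b == x and a not in seen:
                    seen.add(a)
                    stack.append(a)
        return seen

    def reachable_from(edges, v):
        """States w with v preceding or equal to w: v plus every vertex reachable from v."""
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for (a, b) in edges:
                if a == x and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def negate(lit):
        return lit[4:] if lit.startswith("not ") else "not " + lit

    def holds(m, lit):
        """M |= lit, where M is a set of atoms and 'not a' holds iff a is not in M."""
        return (lit[4:] not in m) if lit.startswith("not ") else (lit in m)

    def least_model(rules):
        """Least model of (head, body) rules, treating 'not a' literals as fresh atoms."""
        model, changed = set(), True
        while changed:
            changed = False
            for (h, b) in rules:
                if h not in model and all(l in model for l in b):
                    model.add(h)
                    changed = True
        return model

    def is_stable_at(edges, programs, s, m):
        """Check Ms = least([Ps - Reject(s, Ms)] U Defaults(Ps, Ms)) for the candidate Ms = m."""
        p_s = {(i, h, b) for i in ancestors(edges, s) for (h, b) in programs.get(i, set())}
        atoms = {l[4:] if l.startswith("not ") else l
                 for (_, h, b) in p_s for l in (h,) + tuple(b)}
        # Reject: rules of P_i overridden by a conflicting rule of some later P_j on a path to s
        reject = {(i, h, b) for (i, h, b) in p_s
                  if any(j != i and j in reachable_from(edges, i)
                         and h2 == negate(h) and all(holds(m, l) for l in b2)
                         for (j, h2, b2) in p_s)}
        # Defaults: 'not A' for every atom A with no applicable rule for A in P_s
        defaults = {("not " + a, ()) for a in atoms
                    if not any(h == a and all(holds(m, l) for l in b) for (_, h, b) in p_s)}
        residue = {(h, tuple(b)) for (i, h, b) in p_s if (i, h, b) not in reject} | defaults
        total_m = set(m) | {"not " + a for a in atoms if a not in m}
        return least_model(residue) == total_m

    # Tiny usage sketch: local authority "l" below parliament "p"; p's law prevails at state p.
    edges = {("l", "p")}
    programs = {"l": {("not pay", ())}, "p": {("pay", ())}}
    print(is_stable_at(edges, programs, "p", {"pay"}))   # True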
MDLP for Agents • The flexibility, modularity, and compositionality of MDLP make it suitable for representing the evolution of several agents’ combined knowledge • How to encode, in a DAG, the relationships among every agent’s evolving knowledge along multiple dimensions?
Two basic dimensions of a multi-agent system • Hierarchy of agents • Temporal evolution of one agent • How to combine these dimensions into one DAG?
Equal Role Representation • Assigns an equal role to the two dimensions
Equal Role - 2 • In legal reasoning: • Lex Superior: rules issued by a higher authority override those of a lower one • Lex Posterior: more recent rules override older ones • It is prone to contradiction: • There are many pairs of mutually unrelated programs
Time Prevailing Representation • Assigns priority to the time dimension
Time Prevailing - 2 • Useful in very dynamic situations where competence is distributed, i.e. different agents normally provide rules about different literals • Drawback: • It requires all agents to be fully trusted, since all newer rules override older ones irrespective of their mutual hierarchical position
Hierarchy Prevailing Representation • Assigns priority to the hierarchy dimension
Hierarchy Prevailing - 2 • Useful when some agents are untrustworthy • Drawback: • One has to consider the whole history of all higher-ranked agents in order to accept/reject a rule from a lower-ranked agent • However, techniques are being developed to reduce the size of an MDLP (garbage collection)
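As an illustration (not from the slides), the sketch below builds the three DAG shapes for a toy setting: a linear agent hierarchy given from lowest to highest authority, and a fixed number of time steps. An edge (u, v) is read as "v's knowledge prevails over u's along a common path"; the exact wiring is one plausible reading of the three representations.

    def equal_role(agents, steps):
        """Equal role: time edges inside each agent plus hierarchy edges inside each time step.
        Vertices of different agents at different times stay unrelated, hence the
        potential for contradiction noted above."""
        vertices = {(a, t) for a in agents for t in range(steps)}
        time_edges = {((a, t), (a, t + 1)) for a in agents for t in range(steps - 1)}
        hier_edges = {((lo, t), (hi, t)) for lo, hi in zip(agents, agents[1:]) for t in range(steps)}
        return vertices, time_edges | hier_edges

    def time_prevailing(agents, steps):
        """Time prevails: every vertex at time t sits below every vertex at time t+1,
        so newer rules override older ones irrespective of the hierarchy."""
        vertices = {(a, t) for a in agents for t in range(steps)}
        cross_time = {((a, t), (b, t + 1)) for a in agents for b in agents for t in range(steps - 1)}
        hier_edges = {((lo, t), (hi, t)) for lo, hi in zip(agents, agents[1:]) for t in range(steps)}
        return vertices, cross_time | hier_edges

    def hierarchy_prevailing(agents, steps):
        """Hierarchy prevails: time edges inside each agent, plus an edge from the last
        state of each agent to the first state of the agent just above it, so the whole
        history of higher-ranked agents sits above that of lower-ranked ones."""
        vertices = {(a, t) for a in agents for t in range(steps)}
        time_edges = {((a, t), (a, t + 1)) for a in agents for t in range(steps - 1)}
        hier_edges = {((lo, steps - 1), (hi, 0)) for lo, hi in zip(agents, agents[1:])}
        return vertices, time_edges | hier_edges

    # Example: three agents ordered local < regional < national, three time steps each.
    vs, es = hierarchy_prevailing(["local", "regional", "national"], 3)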
Inter- and Intra-Agent Relationships • The above representations refer to a community of agents • But they can be used as well for relating the several sub-agents of a single agent (a sub-agent hierarchy)
Intra- and Inter-Agent Example • Hierarchy prevailing for the inter-agent relationships • Time prevailing for the sub-agents
Current work overview • A MINERVA agent: • Is based on a modular design • Has a common internal KB (an MDLP), concurrently manipulated by its specialized sub-agents • Every agent is composed of specialized sub-agents that execute specific tasks, e.g. • reactivity • planning • scheduling • belief revision • goal management • learning • preference evaluation • strategy
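Purely to illustrate this modular design (all names and APIs are invented here, not MINERVA code), specialized sub-agents concurrently proposing updates to a common KB might be sketched as:

    import threading

    class SharedKB:
        """Common internal KB; in MINERVA this would be an MDLP rather than a plain list."""
        def __init__(self):
            self._lock = threading.Lock()
            self._proposals = []                 # pending update proposals from sub-agents

        def propose(self, who, update):
            with self._lock:
                self._proposals.append((who, update))

        def pending(self):
            with self._lock:
                return list(self._proposals)

    class SubAgent(threading.Thread):
        """A specialized sub-agent (e.g. reactivity, planning, goal management)."""
        def __init__(self, role, kb):
            super().__init__(daemon=True)
            self.role, self.kb = role, kb

        def run(self):
            # Each sub-agent performs its special task and records its conclusions
            # as update proposals on the common KB.
            self.kb.propose(self.role, f"conclusion_of_{self.role}")

    kb = SharedKB()
    workers = [SubAgent(r, kb) for r in ("reactivity", "planning", "goal_management")]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(kb.pending())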
MDLP overview conclusions • We’ve explored MDLP to combine knowledge from several agents and along multiple dimensions • Depending on the situation and the relationships among agents, we’ve envisaged several classes of DAGs for their encoding • Based on this work, and on a language (LUPS) for specifying updates by means of transitions, we’ve launched into the design of the MINERVA agent architecture
Evolving multi-agent viewpoints – one more overview • Our agents • Framework references • Mutually updating agents • MDLP synopsis • Agent language: projects and updates • Agent knowledge state and agent cycle • Example • An implemented example architecture • Future work
Our agents We propose a LP approach to agents that can: • Reason and React to other agents • Update their own knowledge, reactions, and goals • Interact by updating the theory of another agent • Decide whether to accept an update depending on the requesting agent • Capture the representation of social evolution
Updating agents • Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals • makes observations • reciprocally updates other agents with goals and rules • thinks (rational) • selects and executes an action (reactive)
Multi-Dimensional Logic Programming • In MDLP knowledge is given by a set of programs. • Each program represents a different piece of updating knowledge assigned to a state. • States are organized by a DAG (Directed Acyclic Graph) representing their precedence relation. • MDLP determines the composite semantics at each state according to the DAG paths. • MDLP allows for combining knowledge updates that evolve along multiple dimensions.
New contribution • To extend the framework of MDLP with integrity constraints and active rules. • To incorporate the framework of MDLP into a multi-agent architecture. • To make the DAG of each agent updatable.
DAG A directed acyclic graph (DAG) is a pair D = (V, E) where V is a set of vertices and E is a set of directed edges.
Agent’s language Atomic formulae: • A (objective atoms) • not A (default atoms) • i:C (projects) • i÷C (updates) Formulae: • generalized rules, where each Li is an atom, an update or a negated update:
    A ← L1 ∧ ... ∧ Ln
    not A ← L1 ∧ ... ∧ Ln
• integrity constraint, where each Zj is a project:
    false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm
• active rule, where Z is a project:
    L1 ∧ ... ∧ Ln ⇒ Z
Projects and Updates A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C. An update i÷C denotes an update proposed by agent i of the current theory of some agent j with C. Example: fred÷C and wilma:C
Agents’ knowledge states • Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates. • Given the current knowledge state Ps , its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates. • Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
Agent’s language A project i:C can take one of the forms:
    i:( A ← L1 ∧ ... ∧ Ln )
    i:( not A ← L1 ∧ ... ∧ Ln )
    i:( false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm )
    i:( L1 ∧ ... ∧ Ln ⇒ Z )
    i:( ?- L1 ∧ ... ∧ Ln )
    i:edge(u,v)
    i:not edge(u,v)
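To make the syntax concrete, here is a small, purely illustrative Python encoding of the formula kinds of the agent language (these class names are this sketch's own, not the original notation):

    from dataclasses import dataclass
    from typing import Tuple

    Literal = str   # an atom "happy", a default atom "not happy", or an update literal

    @dataclass(frozen=True)
    class Project:
        target: str         # the agent whose theory is to be updated, e.g. "mother"
        content: object     # a rule, a query ?- L1 ... Ln, or an edge(u,v) change

    @dataclass(frozen=True)
    class Update:
        source: str         # the agent proposing the update, e.g. "father"
        content: object

    @dataclass(frozen=True)
    class GeneralizedRule:
        head: Literal                     # "A" or "not A"
        body: Tuple[Literal, ...] = ()

    @dataclass(frozen=True)
    class IntegrityConstraint:            # head is implicitly false
        body: Tuple[Literal, ...]
        projects: Tuple[Project, ...] = ()

    @dataclass(frozen=True)
    class ActiveRule:
        condition: Tuple[Literal, ...]    # L1 ... Ln
        project: Project                  # the project to propose when the condition holds

    # e.g. the project "father:(?- happy)" triggered when the agent is not happy:
    ask_father = ActiveRule(condition=("not happy",), project=Project("father", "?- happy"))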
Initial theory of an agent The multi-dimensional abductive LP of an agent α is a tuple T = (D, PD, A, RD) where: • D = (V, E) is a DAG such that α´ ∈ V (the inspection point of α) • PD = {Pv | v ∈ V} is a set of generalized LPs • A is a set of atoms (abducibles) • RD = {Rv | v ∈ V} is a set of sets of active rules
The agent’s cycle • Every agent can be thought of as an abductive LP equipped with a set of inputs represented as updates. • The abducibles are (names of) actions to be executed as well as explanations of observations made. • Updates can be used to solve the goals of the agent as well as to trigger new goals.
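A schematic rendering of one pass of this cycle (Python; every callable here is a placeholder for the abductive machinery described in the slides, not an actual MINERVA interface):

    def agent_cycle(theory, updates, observe, abduce, execute):
        """One observe / update / think / act pass of an updating agent.

        theory  -- the agent's initial multi-dimensional abductive LP
        updates -- the sequence U1..Us of updating programs received so far
        observe -- returns the set of updates arriving at the current state
        abduce  -- returns abductive solutions (actions to execute and explanations
                   of observations) satisfying the current goals
        execute -- carries out the actions (projects) of one selected solution
        """
        # 1. Make observations: incoming updates only affect the successor state.
        incoming = observe()
        updates = list(updates) + [incoming]

        # 2. Think: abduce actions/explanations for the goals; updates may also
        #    trigger new goals via active rules.
        solutions = abduce(theory, updates)

        # 3. React: select one abductive solution and execute its actions.
        if solutions:
            execute(solutions[0])

        return updates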
Happy story - example • DAG of Alfredo at state 0: vertices judge, mother, father, alfredo, girlfriend, and the inspection point alfredo´ • The goal of Alfredo is to be happy
Happy story - example Alfredo’s knowledge at state 0:
    hasGirlfriend ←
    not happy ⇒ father:(?- happy)
    not happy ⇒ mother:(?- happy)
    getMarried ∧ hasGirlfriend ⇒ girlfriend:propose
    moveOut ⇒ alfredo:rentApartment
    custody(judge,mother) ⇒ alfredo:edge(father,mother)
    abducibles: {moveOut, getMarried}
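Written down as plain Python data (illustrative only; the DAG edges follow the slide’s figure and are not reproduced here), Alfredo’s initial theory is:

    alfredo_theory = {
        "dag": {
            "vertices": {"judge", "mother", "father", "girlfriend", "alfredo", "alfredo'"},
            "inspection_point": "alfredo'",   # edges as drawn in the slide's figure
        },
        "rules": {"alfredo": [("hasGirlfriend", ())]},        # hasGirlfriend <-
        "abducibles": {"moveOut", "getMarried"},
        "active_rules": [                                     # (condition, project) pairs
            (("not happy",), ("father", "?- happy")),
            (("not happy",), ("mother", "?- happy")),
            (("getMarried", "hasGirlfriend"), ("girlfriend", "propose")),
            (("moveOut",), ("alfredo", "rentApartment")),
            (("custody(judge,mother)",), ("alfredo", "edge(father,mother)")),
        ],
        "goal": "happy",
    }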
Agent theory The initial theory of an agent is a multi-dimensional abductive LP. Let an updating program be a finite set of updates, and S be a set of natural numbers. We call the elements s ∈ S states. An agent α at state s, written αs, is a pair (T, U): • T is the initial theory of α • U = {U1,…, Us} is a sequence of updating programs
Multi-agent system A multi-agent system M = {α1s,…, αns} at state s is a set of agents α1,…, αn at state s. M characterizes a fixed society of evolving agents. The declarative semantics of M characterizes the relationship among the agents in M, and how the system evolves. The declarative semantics is stable models based.
Happy story - 1st scenario Suppose that at state 1, Alfredo receives from the mother:
    mother÷(happy ← moveOut)
    mother÷(false ← moveOut ∧ not getMarried)
    mother÷(false ← not happy)
and from the father:
    father÷(happy ← moveOut)
    father÷(not happy ← getMarried)
Happy story - 1st scenario At state 1 the mother’s vertex holds: happy ← moveOut, false ← moveOut ∧ not getMarried, false ← not happy; the father’s vertex holds: happy ← moveOut, not happy ← getMarried. In this scenario, Alfredo cannot achieve his goal without producing a contradiction. Not being able to make a decision, Alfredo is not reactive at all.
Happy story - 2nd scenario Suppose that at state 1 Alfredo’s parents decide to get divorced, and the judge gives custody to the mother:
    judge÷custody(judge,mother)
Happy story - 2nd scenario At state 1 the judge’s vertex now holds custody(judge,mother), while Alfredo’s vertex keeps his initial rules and abducibles from state 0.
Happy story - 2nd scenario Note that the internal update produces a change in the DAG of Alfredo: at state 2 the active rule has added the edge edge(father,mother). Suppose that when asked by Alfredo, the parents reply in the same way as in the 1st scenario.
Happy story - 2nd scenario At state 2 the mother’s vertex holds: happy ← moveOut, false ← moveOut ∧ not getMarried, false ← not happy; the father’s vertex holds: happy ← moveOut, not happy ← getMarried. Now, with the new edge from father to mother, the advice of the mother prevails over, and rejects, that of the father.
Happy story - 2nd scenario Thus, Alfredo gets married, rents an apartment, moves out, and lives happily ever after.
Syntactical transformation The semantics of an agent α at state s, αs = (T, U), is established by a syntactical transformation that maps αs into an abductive LP ⟨P´, A, R⟩: 1. αs ↦ ⟨P´, A, R⟩, where P´ is a normal LP, A is a set of abducibles and R a set of active rules. 2. Default negation can then be removed from P´ via the abdual transformation (Alferes et al. ICLP99, TCLP04): P´ ↦ P, where P is a definite LP.