Do software agents know what they talk about? Agents and Ontology dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005
Definition revisited • Autonomy (generally accepted) • Learning (not necessarily, maybe even undesirable) • … An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives.
[Diagram: an agent coupled to its environment, receiving sensor input and producing action output]
Definition • An agent • Has impact on its environment • Has only partial control over it • Actions may have non-deterministic effects • The agent has a set of possible actions, whose applicability may depend on environment parameters
The fundamental problem • The agent must decide which of its actions are best suited to meet its objectives. • An agent architecture is a software structure for a decision system that functions in an environment.
Example: a control system • A thermostat works according to the rules: • Too cold => heating on • Temperature OK => heating off • Distinguish environment, action, impact (a minimal sketch follows)
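These two rules fit in a few lines of code. A minimal sketch in Python; the setpoint value and the names are illustrative, not from the slides:

```python
# Minimal reactive thermostat: two condition-action rules.
SETPOINT = 18.0  # illustrative threshold

def thermostat(temperature: float) -> str:
    """Map the sensed temperature directly to an action."""
    if temperature < SETPOINT:   # too cold => heating on
        return "heating on"
    return "heating off"         # temperature OK => heating off

print(thermostat(15.0))  # -> heating on
print(thermostat(21.0))  # -> heating off
```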
Example: a control system • The X Windows tool xbiff monitors email • xbiff lives in a software environment • It executes Linux commands to obtain its information (e.g. ls to check the mailbox) • It uses Linux system functions to change its environment (updating the icon on the desktop) • As an agent it is no more complicated than the thermostat.
Environments • Access • Deterministic or not • Static or dynamic • Discrete or continuous
Access • What is the temperature at the north pole of Mars? • Uncertainty, incompleteness of information • Yet the agent must decide • Better access makes for simpler agents
Deterministic or not • Sometimes the result of an action is not deterministic. • This is caused by • Limited impact of the agent • Limited capabilities of the agent • The complexity of the environment • The agent must check the consequences of its actions
Static/Dynamic • Is the agent the only actor? • E.g. software systems, large civil constructions, visitors at an exhibition • Most environments are dynamic • The agent must keep collecting data; the state may change during the action or the decision process • Synchronisation and co-ordination between processes and agents are necessary.
Discrete or continuous • Classify: • Chess, taxi driving, navigating, word processing, understanding natural language • Which is more difficult?
Interaction with environment • Originally: functional systems • E.g. compilers • Given a precondition, they realise a postcondition: f : I → O • Top-down design is possible
Interaction: reactivity • Most programs are reactive • They maintain a relationship with modules and environment, and respond to signals • They can react quickly • React first, think afterwards (or not at all) • Reactive agents take local decisions with a global impact
Intelligent agents • Intelligence combines • Responsiveness (reacting to the environment) • Proactivity (pursuing goals) • Social ability • Pure proactivity is easy: e.g. a C program computing a function, assuming a constant environment • Pure responsiveness is easy too: just react to events • The agent sits in the middle, and balancing the two is what is complicated
Agents and objects • "Objects are actors. They respond in a human-like way to messages…" • Agents are AUTONOMOUS • Objects implement methods that can be CALLED by other objects • Agents DECIDE what to do in response to messages
Objects do it for free; agents do it because they want to.
Agents and expert systems • E.g. MYCIN, … • Expert systems are consultants; they do not act • They are in general not proactive • They have no social abilities
Agents as intentional systems • Belief, Desire, Intention • First order: • Beliefs, … about objects • NOT about beliefs… • Higher order: • May model its own beliefs, … or those of other agents • BDI
A simple example • A light switch is an agent that can allow current to pass or not. It will do so if it believes that we want the current to pass, and not if it believes that we do not. We communicate our intentions by flipping the switch. • There are simpler models of a switch…
Abstract architecture • The environment is a set of states: • E = {e, e′, …} • An agent has a set of actions: • Ac = {α, α′, …} • A run is a sequence state-action-state-…: • r : e0 -α0-> e1 -α1-> e2 -α2-> … -αu-1-> eu
Abstract architecture • Symbols • R is the set of runs • R^Ac is the subset of runs ending in an action • R^E is the subset of runs ending in a state • r, r′ ∈ R
Abstract architecture • The state transformer function: • τ : R^Ac → P(E) • An action may lead to a set of states • The result depends on the run • τ(r) may be empty (the run has ended)
Abstract architecture • Environment: • Env = ⟨E, e0, τ⟩ • E a set of states, e0 an initial state, τ the state transformer • An agent is a function • Ag : R^E → Ac • which is deterministic! • R(Ag, Env) is the set of all ended runs • (these structures are sketched below)
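A sketch of these definitions as Python types. The names are hypothetical, and a run is encoded as an alternating tuple of states and actions:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

State = str
Action = str
Run = Tuple  # alternating (e0, a0, e1, a1, ...)

@dataclass
class Env:
    states: FrozenSet[State]                # E, the set of states
    e0: State                               # initial state
    tau: Callable[[Run], FrozenSet[State]]  # state transformer on runs ending in an action

# An agent deterministically maps a run ending in a state to an action.
Agent = Callable[[Run], Action]
```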
Abstract architecture • A sequence • (e0, α0, e1, α1, e2, α2, …) • is a run of agent Ag in Env = ⟨E, e0, τ⟩ iff • e0 is the initial state of Env • for u > 0: • eu ∈ τ((e0, α0, …, αu-1)) • αu = Ag((e0, α0, …, αu-1, eu)) • (run construction is sketched below)
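Continuing the sketch above, one hypothetical way to build such a run, resolving the non-determinism of τ with a random choice:

```python
import random

def generate_run(env, agent, max_steps=10):
    """Build a run (e0, a0, e1, a1, ...): alpha_u = Ag(r), e_{u+1} in tau(r + (alpha_u,))."""
    run = (env.e0,)
    for _ in range(max_steps):
        alpha = agent(run)                    # agent decides on a run ending in a state
        successors = env.tau(run + (alpha,))  # possible next states
        if not successors:                    # tau(r) may be empty: the run has ended
            return run + (alpha,)
        run = run + (alpha, random.choice(sorted(successors)))
    return run
```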
Perception • The action function can be split into • Perception • Action selection • We now call • see the function that allows the agent to observe • action the function modelling the decision process
[Diagram: inside the agent, see processes the sensor input from the environment and action produces the action output]
Perception • We have • see : E → Per • action : Per* → Ac • action works on sequences of percepts • An agent is a pair: • Ag = ⟨see, action⟩ • (sketched below)
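A sketch of this decomposition, reusing the thermostat; the percept names and state encoding are illustrative:

```python
Percept = str

def see(state: str) -> Percept:
    """see : E -> Per, abstracting a state into a percept."""
    return "too_cold" if "cold" in state else "temp_ok"

def action(percepts: tuple) -> str:
    """action : Per* -> Ac, deciding on the whole percept sequence
    (here only the most recent percept matters)."""
    return "heating on" if percepts[-1] == "too_cold" else "heating off"

# The agent is the pair <see, action>:
percepts = (see("cold room"),)
print(action(percepts))  # -> heating on
```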
Perception: an example • Beliefs • x = 'The temperature is OK' • y = 'Gerhard Schröder is Chancellor' • Environment • E = {e1 = {¬x, ¬y}, e2 = {¬x, y}, e3 = {x, ¬y}, e4 = {x, y}} • What does the thermostat perceive?
Perception • Equivalence of states: • e ~ e′ iff see(e) = see(e′) • |~| = |E| for an agent with strong perception (it distinguishes every state) • |~| = 1 for an agent with weak perception (it distinguishes nothing) • (the thermostat's classes are computed below)
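The thermostat of the previous slide sits between the two extremes: it perceives x but not y. A sketch that computes its equivalence classes; the state encoding is illustrative:

```python
# The four states of the example: x = temperature OK, y = Schröder is Chancellor.
E = ["~x,~y", "~x,y", "x,~y", "x,y"]

def see(e: str) -> str:
    """The thermostat perceives only whether x holds."""
    return "temp_ok" if e.startswith("x") else "too_cold"

classes: dict = {}
for e in E:                      # group states: e ~ e' iff see(e) == see(e')
    classes.setdefault(see(e), []).append(e)

print(classes)       # {'too_cold': ['~x,~y', '~x,y'], 'temp_ok': ['x,~y', 'x,y']}
print(len(classes))  # |~| = 2, between 1 and |E| = 4
```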
Agents with a state • The past is taken into account through an internal state of the agent: • see : E → Per • action : I → Ac • next : I × Per → I • Action selection is • action(next(i, see(e))) • The new state is • i′ = next(i, see(e)) • Environmental impact as before: • e′ ∈ τ(r) • (the loop is sketched below)
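A sketch of the see/next/action cycle; the three concrete functions are placeholders:

```python
class StatefulAgent:
    """Agent with an internal state i, updated by next on every percept."""

    def __init__(self, i0):
        self.i = i0                # internal state, initially i0 (e.g. [])

    def see(self, e):              # see : E -> Per
        return e                   # placeholder: perfect perception

    def next(self, i, per):        # next : I x Per -> I
        return i + [per]           # placeholder: remember every percept

    def action(self, i):           # action : I -> Ac
        return "noop"              # placeholder decision rule

    def step(self, e):
        """One cycle: commit i' = next(i, see(e)), then select action(i')."""
        self.i = self.next(self.i, self.see(e))
        return self.action(self.i)
```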
How to tell the agent what to do • Two approaches: • Utility • Predicates • Utility is a performance measure on states • Predicates specify the desired states.
Utility • Let it work purely on states: • u : E → ℝ • The fitness of an action is judged on • the minimum of the attainable u-values • the average of the attainable u-values • … • The approach is local; agents become myopic • (both rules are sketched below)
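A sketch of both judgement rules; the utility function here is illustrative:

```python
def judge(successor_states, u, rule="minimum"):
    """Score an action by the u-values of the states it may lead to."""
    values = [u(e) for e in successor_states]
    return min(values) if rule == "minimum" else sum(values) / len(values)

# Illustrative utility: closeness to a target temperature of 20 degrees.
u = lambda temp: -abs(temp - 20)

print(judge([17, 21], u))                  # -3: pessimistic (worst case)
print(judge([17, 21], u, rule="average"))  # -2.0: average case
```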
Utility • Let it work on runs: • u : R → ℝ • Agents can look forward • E.g. Tileworld (Pollack 1990)
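One common way to obtain such a utility, sketched here under the assumption of an additive per-state reward (as in Tileworld-style scoring):

```python
def run_utility(run, reward):
    """u : R -> IR as the sum of per-state rewards along the run."""
    states = run[0::2]   # even positions of (e0, a0, e1, a1, ...) are states
    return sum(reward(e) for e in states)

print(run_utility(("e0", "a0", "e1", "a1", "e2"), reward=lambda e: 1))  # 3 states visited
```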
Utilities • May be defined probabilistically, by adding a probability to the state transformer function. • A problem is computability within specific time limits. In most cases the optimum cannot be found; one can use heuristics here.
Predicates • Utilities are not the most natural way to specify a task. What utility value captures that the temperature is OK? • Humans think in objectives. Those are statements, i.e. predicates.
Task environments • A pair ⟨Env, Ψ⟩ is called a task environment iff • Env is an environment and Ψ : R → {0,1} • Ψ is a predicate over the runs R • The set of runs satisfying the predicate is RΨ • An agent Ag is successful iff • R(Ag, Env) ⊆ RΨ, or equivalently • ∀r ∈ R(Ag, Env) : Ψ(r) • Alternatively (optimistic): • ∃r ∈ R(Ag, Env) : Ψ(r) • (both readings are sketched below)
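A sketch of both readings of success, over an explicitly given set of runs; the example predicate is hypothetical:

```python
def successful(runs, psi, optimistic=False):
    """Pessimistic: every run satisfies psi. Optimistic: some run does."""
    return any(map(psi, runs)) if optimistic else all(map(psi, runs))

# Illustrative achievement predicate: the run reaches a state called "goal".
psi = lambda run: "goal" in run
runs = [("e0", "a0", "goal"), ("e0", "a1", "e2")]

print(successful(runs, psi))                   # False: one run misses the goal
print(successful(runs, psi, optimistic=True))  # True: one run reaches it
```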
Task environments • One distinguishes • Achievement tasks • Aim at reaching a certain condition on the environment • Maintenance tasks • Try to avoid a certain condition on the environment • (both shapes are sketched below)
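In the run encoding used above, the two task types correspond to two predicate shapes. A sketch; the sets G (good states) and B (bad states) are hypothetical:

```python
def achieve(G):
    """Achievement task: some state in G occurs somewhere along the run."""
    return lambda run: any(e in G for e in run[0::2])

def maintain(B):
    """Maintenance task: no state in B ever occurs along the run."""
    return lambda run: all(e not in B for e in run[0::2])

run = ("e0", "a0", "e1", "a1", "goal")
print(achieve({"goal"})(run))    # True: the goal state is reached
print(maintain({"crash"})(run))  # True: the bad state is avoided
```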