Intelligent Agents
Outline • Agents and environments • Rationality • PEAS (Performance measure, Environment, Actuators, Sensors) • Environment types • Agent types
Agents • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. • The agent function maps from percept histories to actions: f: P* → A
Vacuum-cleaner world • Percepts: location and contents, e.g., [A, Dirty] • Actions: Left, Right, Suck • Agent function given as a table (table look-up agent), as sketched below
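A minimal Python sketch of such a table look-up agent for the two-square world; the table entries are illustrative, not prescribed by the slides:

```python
# Table-driven vacuum agent: the table maps each percept directly to an action.
# Percept = (location, contents); the entries below are one plausible choice.
TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def table_lookup_agent(percept):
    """Return the action the table lists for this percept."""
    return TABLE[percept]

print(table_lookup_agent(("A", "Dirty")))  # -> Suck
```

Note that a fully general table-driven agent indexes on the whole percept history, so its table grows exponentially; indexing on the current percept alone suffices only in a world this simple.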
Agent Execution Cycle • Do until T (end of simulation): • percept = see(ENV); • stateₜ = next(stateₜ₋₁, percept); • action = do(stateₜ); • update(ENV, action);
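The same cycle as a runnable Python sketch; see, next_state, do, and update are stand-ins for whatever a concrete agent and environment define (next is renamed next_state to avoid shadowing Python's built-in next):

```python
def run(env, agent_state, T, see, next_state, do, update):
    """Generic sense-think-act loop: run one agent against env for T steps."""
    for _ in range(T):
        percept = see(env)                              # sense the environment
        agent_state = next_state(agent_state, percept)  # update internal state
        action = do(agent_state)                        # choose an action
        update(env, action)                             # the action changes ENV
    return agent_state
```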
Agency • Autonomy • Reactivity • Proactiveness • Social ability
Reactivity • If a program’s environment is guaranteed to be fixed, the program need never worry about its own success or failure – it can simply execute blindly • Example of a fixed environment: a compiler • The real world is not like that: things change, information is incomplete. Many (most?) interesting environments are dynamic • A reactive system is one that • maintains an ongoing interaction with its environment, and • responds to changes that occur in it
Proactiveness • Reacting to an environment is easy (e.g., stimulus-response rules) • But we generally want agents to do things for us • Hence goal-directed behavior • Proactiveness = generating and attempting to achieve goals; not driven solely by events; taking the initiative
Social Ability • The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account • Some goals can only be achieved with the cooperation of others • Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language.
Rational agents • An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful. • Performance measure: An objective criterion for success of an agent's behavior.
Rational agents • Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. • Rationality at any given time depends on: • The performance measure that defines the criterion of success. • The agent’s prior knowledge of the environment. • The actions that the agent can perform. • The agent’s percept sequence to date.
Rational agents • Rationality is distinct from omniscience (all-knowing with infinite knowledge) • Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, planning) • An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt).
Rational agents • Is a vacuum agent rational? (It depends!) • Performance measure • amount of dirt cleaned up, • amount of time taken, • amount of electricity consumed, • amount of noise generated, etc. • A priori knowledge • Geography of the environment is known (two squares, A on the left and B on the right, etc.). • Percepts are reliable (no illusions). • Sucking cleans the current square (no malfunctions). • Actions • Left, Right, Suck. • Current percept • [location, dirt] • Possible variations • A penalty for each movement. • The agent has memory (can reason over percept histories). • The geography of the environment is unknown. • A clean square can become dirty again. • The agent cannot perceive its current location. • The agent can learn from experience.
Specify the setting for intelligent agent design: PEAS Description • Performance measure • Environment • Actuators • Sensors
Specify the setting for intelligent agent design: PEAS Description • PEAS: Performance measure, Environment, Actuators, Sensors • E.g. an automated taxi driver (an open-ended research problem in Intelligent Transportation Systems): • Performance measure: Safe, fast, legal, comfortable trip, maximize profits. • Environment: Roads, other traffic, pedestrians, customers. • Actuators: Steering wheel, accelerator, brake, signal, horn. • Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.
PEAS • E.g. a medical diagnosis system • Performance measure: Healthy patient, minimize costs and lawsuits. • Environment: Patient, hospital, staff. • Actuators: Screen display (questions, tests, diagnoses, treatments, referrals). • Sensors: Keyboard (entry of symptoms, findings, patient's answers).
PEAS • E.g. an interactive English tutor (exercise) • Performance measure: Maximize student's score on test. • Environment: Set of students. • Actuators: Screen display (exercises, suggestions, corrections). • Sensors: Keyboard.
Environment Types: Fully Observable vs. Partially Observable • A fully observable environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state. • Most moderately complex environments (including, for example, the everyday physical world and the Internet) are only partially observable. • The more observable an environment is, the simpler it is to build agents to operate in it. • In a fully observable environment, the agent need not maintain internal state to keep track of the world.
Examples • Playing chess? Fully observable. • Medical diagnosis? Partially observable. • Taxi driving? Partially observable.
Environment Types: Deterministic vs. non-deterministic • A deterministic environment is one in which any action has a single guaranteed effect: there is no uncertainty about the state that will result from performing an action. • In deterministic environments, agents need not worry about uncertainty. • Non-deterministic environments present greater problems for the agent designer.
Examples • Playing chess? Deterministic. • Medical diagnosis? Stochastic. • Battle field? Stochastic. • E-shopping? Deterministic. • E-auction? Stochastic.
Environment Types: Episodic vs. sequential • In an episodic environment, the agent’s experience is divided into discrete episodes, and performance in one episode does not depend on performance in earlier episodes. • In an episodic environment, agents do not need to think ahead.
Examples • Medical diagnosis? Sequential. • Web search? Episodic. • E-shopping? Episodic.
Environment Types: Static vs. dynamic • A static environment is one that can be assumed to remain unchanged except by the agent’s own actions. • In a static environment, agents need not keep observing the environment while deliberating. • A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control.
Examples • Playing crossword puzzle? Static. • Expert systems? Static. • Playing chess? Semi-dynamic. • Playing soccer? Dynamic. • Battle field? Dynamic.
Environment Types: Discrete vs. continuous • An environment is discrete if there is a fixed, finite number of percepts and actions in it. • Continuous environments fit less naturally onto digital computers, which must approximate them with discrete representations.
Examples • Playing chess? Discrete. • Taxi driving? Continuous.
Agent types (in increasing order of generality) • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents
Simple reflex agents • [Diagram: percepts from ENV feed an inference engine holding reflex rules, which sends actions back to ENV] • Use condition-action rules to map the agent’s percepts directly to actions: decisions are made from the current input only.
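A sketch of a simple reflex agent for the vacuum world, assuming the [location, status] percept format from the earlier slides:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules: the current percept alone determines the action."""
    location, status = percept
    if status == "Dirty":      # rule: dirty square -> clean it
        return "Suck"
    if location == "A":        # rule: square A clean -> move right
        return "Right"
    return "Left"              # rule: square B clean -> move left
```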
Model-based reflex agents • [Diagram: percepts from ENV update an internal world model; decision rules over the model select actions] • Maintain an internal model (state) of the external environment.
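A sketch of a model-based reflex agent for the same world; the internal model here is just the believed status of each square (the class and its interface are illustrative):

```python
class ModelBasedReflexAgent:
    """Keeps an internal world model, updated from each percept, and applies
    decision rules to the model rather than to the raw percept alone."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # believed status of each square

    def act(self, percept):
        location, status = percept
        self.model[location] = status        # update world model from percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.model[other] != "Clean":     # rule over the model, not the percept
            return "Right" if other == "B" else "Left"
        return "NoOp"                        # both squares believed clean
```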
Goal-based agents • [Diagram: percepts update the world model; goals and tasks are triggered and prioritized; problem-solving methods select goals/tasks and the actions to pursue them]
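One way to realize "select goals/tasks and actions" is search over the agent's world model; a minimal sketch using breadth-first search (the functions passed in are hypothetical interfaces, not part of the slides):

```python
from collections import deque

def plan(start, goal_test, actions, result):
    """Breadth-first search for an action sequence whose predicted outcome
    satisfies the goal; result(state, action) is the agent's transition model."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                     # first (shortest) plan found
        for a in actions(state):
            nxt = result(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None                             # goal unreachable under the model
```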
Utility-based agents • [Diagram: as for goal-based agents, with a utility measure guiding the selection of methods and actions]
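Where a goal-based agent asks only whether a state satisfies the goal, a utility-based agent ranks states; a sketch of choosing the action with highest expected utility (outcomes and utility are assumed, hypothetical interfaces):

```python
def utility_based_choice(state, actions, outcomes, utility):
    """Pick the action maximizing expected utility.
    outcomes(state, action) yields (probability, next_state) pairs."""
    def expected_utility(action):
        return sum(p * utility(nxt) for p, nxt in outcomes(state, action))
    return max(actions(state), key=expected_utility)
```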
Task of Software Agents • Interacting with human users • Personal assistants (email processing) • Information/Product search • Sales • Chat room host • Computer generated characters in games • Interacting with other agents • Facilitators. • Brokers.
Intelligent Behavior of Agents • Learning about users • Learning about information sources • Learning about categorizing information • Learning about similarity • Constraint satisfaction algorithms • Reasoning using domain-specific knowledge • Planning
Technologies of Software Agents • Machine learning • Information retrieval • Agent communication • Agent coordination • Agent negotiation • Natural language understanding • Distributed objects
Multi-Agent Systems • What are MAS • Objections to MAS • Agents and objects • Agents and expert systems • Agent communication languages • Application areas
MultiAgent Systems: A Definition • A multiagent system is one that consists of a number of agents, which have different goals and interact with one another • To successfully interact, they will require the ability to cooperate, compete, and negotiate with each other, much as people do
MultiAgent Systems: A Definition • Two key problems: • How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them? (agent design) • How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals? (society design)
Multi-Agent Systems • It addresses questions such as: • How can cooperation emerge in societies of self-interested agents? • What kinds of communication languages can agents use? • How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement? • How can autonomous agents coordinate their activities so as to cooperatively achieve goals? • These questions are all addressed in part by other disciplines (notably economics and social sciences). • Agents are computational, information processing entities.
Objections to MAS • Isn’t it all just Distributed/Concurrent Systems? There is much to learn from this community, but: • Agents are assumed to be autonomous, capable of making independent decisions – so they need mechanisms to synchronize and coordinate their activities at run time.
Multi-Agent Execution Cycle • Do until T (end of simulation): • percept = see(ENV); • Syn; • stateₜ = next(stateₜ₋₁, percept); • action = do(stateₜ); • update(ENV, action); • Syn;
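One possible reading of the Syn steps, sketched with a thread barrier so all agents finish perceiving before any of them acts (the agent interface here is hypothetical):

```python
import threading

def run_agents(env, agents, T):
    """Each agent runs its own sense-think-act loop; barriers align the
    perception and action phases across agents (the 'Syn' steps)."""
    barrier = threading.Barrier(len(agents))

    def loop(agent):
        state = agent.initial_state
        for _ in range(T):
            percept = agent.see(env)
            barrier.wait()                   # Syn: everyone has perceived
            state = agent.next(state, percept)
            action = agent.do(state)
            agent.update(env, action)
            barrier.wait()                   # Syn: everyone has acted

    threads = [threading.Thread(target=loop, args=(a,)) for a in agents]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```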
Objections to MAS • Isn’t it all just AI? • We don’t need to solve all the problems of artificial intelligence (i.e., all the components of intelligence). • Classical AI ignored social aspects of agency. These are important parts of intelligent activity in real-world settings.
Objections to MAS • Isn’t it all just Economics/Game Theory? These fields also have a lot to teach us in multiagent systems (like rationality), but: • Insofar as game theory provides descriptive concepts, it doesn’t always tell us how to compute solutions; we’re concerned with computational, resource-bounded agents.
Objections to MAS • Isn’t it all just Social Science? • We can draw insights from the study of human societies, but again agents are computational, resource-bounded entities.
Agents and Objects • Are agents just objects by another name? • Object: • encapsulates some state • communicates via message passing • has methods, corresponding to operations that may be performed on this state
Agents and Objects • Main differences: • agents are autonomous: they decide for themselves whether or not to perform an action on request from another agent • agents are smart: capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior • agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control
Objects do it for free… • agents do it because they want to • agents do it for money
Agents and Expert Systems • Expert systems are typically disembodied ‘expertise’ about some (abstract) domain of discourse (e.g., blood diseases) • Example: MYCIN knows about blood diseases in humans • It has a wealth of knowledge about blood diseases, in the form of rules • A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries