CSE 471/598 Intelligent Agents
TIP: We're intelligent agents, aren't we?
Spring 2004
Introduction
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Let's look at Figure 2.1.
• Is that me?
• An agent function maps percept sequences to actions.
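As a concrete, purely illustrative sketch of the agent-function idea (the names Percept, Action, and agent_function are our own, not the textbook's), an agent can be modeled as a function from the percept sequence seen so far to an action:

    from typing import List

    Percept = str   # e.g., "dirty" or "clean" at the current square
    Action = str    # e.g., "Suck", "Left", "Right"

    def agent_function(percept_history: List[Percept]) -> Action:
        """Map the full percept sequence seen so far to one action."""
        latest = percept_history[-1]
        return "Suck" if latest == "dirty" else "Right"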
All about Agents
We will learn:
• How agents should act
• Environments of agents
• Types of agents
  • human, robot, and software agents
Why do we need to specify how agents should act?
How Agents Should Act
A rational agent is one that does the right thing.
• What is "right"? This is the issue of the performance measure, and it is not a simple one.
• You often get what you ask for.
• Be as objective as possible.
• Measure what one actually wants, not how the agent should behave.
• A related issue is when to measure it.
A Rational Agent Is Not Omniscient
• Rationality is concerned with expected success, given what has been perceived.
• A percept sequence contains everything the agent has perceived so far.
• An ideal rational agent should take whatever action is expected to maximize its performance measure.
• Perfection, by contrast, maximizes actual performance.
Four Key Components
What is rational depends on PEAS:
• Performance measure
• Environment
• Actuators: generating actions
• Sensors: receiving percepts
Some examples:
• A vacuum world with two locations (Fig 2.2); a minimal sketch of this world follows this slide.
• Another example? The taxi driver (Fig 2.4); let's look at its performance measure.
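The following is a minimal, illustrative sketch of the two-location vacuum world and its PEAS elements (the class and attribute names are our own, not the textbook's code):

    import random

    class VacuumWorld:
        """Toy environment: two squares, A and B, each clean or dirty."""
        def __init__(self):
            self.location = "A"
            self.dirty = {"A": random.choice([True, False]),
                          "B": random.choice([True, False])}
            self.score = 0  # performance measure: +1 per square cleaned

        def percept(self):
            # Sensors: report the current location and whether it is dirty.
            return (self.location, self.dirty[self.location])

        def execute(self, action):
            # Actuators: "Suck" cleans, "Left"/"Right" move the agent.
            if action == "Suck" and self.dirty[self.location]:
                self.dirty[self.location] = False
                self.score += 1
            elif action == "Right":
                self.location = "B"
            elif action == "Left":
                self.location = "A"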
Definition of a Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
From Percept Sequences to Actions
• The mapping may have infinitely many entries.
• An ideal mapping describes an ideal agent.
• An explicit mapping is not always necessary in order to be ideal (e.g., sqrt(x) can be computed rather than tabulated; see the sketch below).
• An agent should have some autonomy, i.e., its behavior is determined by its own experience.
• Autonomy can evolve with the agent's experience and percept sequence - this is learning.
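To make the sqrt(x) remark concrete, here is a small contrast of our own (not from the textbook's figures) between an explicit table and a computed mapping:

    import math

    # Explicit mapping: a table can only ever cover finitely many inputs.
    SQRT_TABLE = {0: 0.0, 1: 1.0, 4: 2.0, 9: 3.0}

    def sqrt_by_table(x):
        return SQRT_TABLE[x]      # fails for any x not enumerated

    # Implicit mapping: a short program covers infinitely many inputs.
    def sqrt_by_program(x):
        return math.sqrt(x)       # works for every non-negative x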
External Environments
• Without exception, actions are done by the agent on the environment, which in turn provides percepts to the agent.
• Environments affect the design of agents.
• Types of environments (next slide)
Types of Environments
• Fully vs. partially observable
• Deterministic vs. stochastic
  • If the environment is deterministic except for the actions of other agents, it is strategic.
• Episodic vs. sequential
• Static vs. dynamic
  • If the environment does not change with time but the performance score does, it is semi-dynamic.
• Discrete vs. continuous
• Single-agent vs. multi-agent
What is the most difficult environment? Let's look at some examples in Fig 2.6; a small illustrative encoding follows this slide.
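As a purely illustrative encoding of these dimensions (the class and the example entry below are our own, not Fig 2.6 itself), an environment can be summarized as a tuple of properties:

    from dataclasses import dataclass

    @dataclass
    class EnvironmentProperties:
        fully_observable: bool
        deterministic: bool
        episodic: bool
        static: bool
        discrete: bool
        single_agent: bool

    # Taxi driving sits at the hard end of every dimension:
    # partially observable, stochastic, sequential, dynamic,
    # continuous, and multi-agent.
    taxi_driving = EnvironmentProperties(
        fully_observable=False, deterministic=False, episodic=False,
        static=False, discrete=False, single_agent=False)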
Design and Implementation of Agents
• Design an agent function that maps the agent's percepts to actions,
  • i.e., specify how actions are selected/determined.
• Implement the agent function in an agent program, which runs on an agent architecture.
• Agent = Architecture + Program
• From robots to softbots
Some Examples of Agents
All agents have four elements (PEAS):
1. Performance measure
2. Environment
3. Actuators
4. Sensors
• Fig 2.5 shows some agent types and their PEAS descriptions.
• There are many ways to define these components, and it is difficult to enumerate all possibilities.
Starting from the Simplest
• A table-lookup agent (Fig 2.7)
  • Generates actions by looking up the percept sequence in a table (a minimal sketch follows).
• Why not just look things up?
  • An equivalent question: how large would the table have to be?
What else should we try?
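A minimal sketch of a table-driven agent, in the spirit of Fig 2.7 but with our own names and a toy table that only covers length-one percept sequences:

    percepts = []  # the percept sequence observed so far

    # Toy table for the two-square vacuum world; a realistic table would
    # need one entry for every possible percept sequence.
    TABLE = {
        (("A", True),):  "Suck",
        (("A", False),): "Right",
        (("B", True),):  "Suck",
        (("B", False),): "Left",
    }

    def table_driven_agent(percept):
        percepts.append(percept)
        return TABLE.get(tuple(percepts), "NoOp")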
Types of Agents
• Simple reflex agents respond based on the current percept and ignore the percept history; this cuts down the possibilities a lot.
  • An example (Fig 2.8)
  • A simple reflex agent (Figs 2.9, 2.10); see the sketch below
  • Condition-action rules
  • Innate reflexes vs. learned responses
• What if the environment is not fully observable?
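A minimal illustrative sketch of a simple reflex vacuum agent, using condition-action rules over the current percept only (the function name and percept format are ours):

    def simple_reflex_vacuum_agent(percept):
        """Decide using only the current percept (location, is_dirty)."""
        location, is_dirty = percept
        if is_dirty:
            return "Suck"        # rule: dirty square -> clean it
        elif location == "A":
            return "Right"       # rule: clean and at A -> move to B
        else:
            return "Left"        # rule: clean and at B -> move to A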
Model-Based Reflex Agents
• They can handle partial observability.
• Knowledge about how the world works is called a model of the world.
• Unlike simple reflex agents (Fig 2.9), they maintain internal state to keep track of the changing environment and of how it evolves, including parts they cannot currently perceive.
• They respond to a percept accordingly (Figs 2.11, 2.12); a minimal sketch follows.
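An illustrative sketch of a model-based reflex agent for the two-square vacuum world; it keeps internal state about squares it has already seen clean (the class name and logic are ours, not Figs 2.11-2.12):

    class ModelBasedReflexVacuumAgent:
        """Remembers which squares it has observed to be clean."""
        def __init__(self):
            self.known_clean = set()   # internal state: the agent's model

        def __call__(self, percept):
            location, is_dirty = percept
            if is_dirty:
                return "Suck"
            self.known_clean.add(location)        # update state from the percept
            if {"A", "B"} <= self.known_clean:
                return "NoOp"                     # model says everything is clean
            return "Right" if location == "A" else "Left"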
Goal-Based Agents
• They aim to achieve goals (Fig 2.13).
  • A goal describes desirable states.
  • They search for a sequence of actions that reaches a goal (a sketch follows),
  • and plan to solve sub-problems with special purposes.
• Goals alone are often not enough to generate high-quality behavior. Why?
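A highly simplified, illustrative sketch of the goal-based idea: choose an action whose predicted outcome satisfies the goal. The predict and goal_test functions are hypothetical placeholders standing in for the agent's world model and goal description:

    def goal_based_agent(state, actions, predict, goal_test):
        """Pick any action whose predicted successor state satisfies the goal.

        predict(state, action) -> next_state  (the agent's world model)
        goal_test(state)       -> bool        (is this a desirable state?)
        """
        for action in actions:
            if goal_test(predict(state, action)):
                return action
        return "NoOp"  # no single action reaches the goal; in practice, search deeper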
Utility-Based Agents
• They aim to maximize their utility (Fig 2.14).
• Utility: the quality of being useful; a function that maps a state to a single value.
• A goal only says happy or not (goal achieved or not); utility says how happy the agent is when a state is reached.
• With a utility function an agent can:
  • resolve conflicting goals (e.g., speed vs. safety),
  • evaluate multiple goals whose achievement is uncertain,
  • search for a trade-off when facing multiple goals (a sketch follows).
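An illustrative sketch of action selection by expected utility; outcomes and utility are hypothetical placeholders for the agent's probabilistic model and utility function:

    def utility_based_agent(state, actions, outcomes, utility):
        """Pick the action with the highest expected utility.

        outcomes(state, action) -> list of (probability, next_state) pairs
        utility(state)          -> float, how desirable the state is
        """
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(state, action))
        return max(actions, key=expected_utility)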
Learning Agents
• They can learn to improve (Fig 2.15).
• They can operate in initially unknown environments and become more competent over time.
• Four components: (1) problem generator, (2) performance element, (3) learning element, (4) critic; a skeleton sketch follows.
• The above types of agents can all be found in the later chapters we will discuss.
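A skeletal, purely illustrative wiring of the four components (all class and parameter names are our own, not the textbook's):

    class LearningAgent:
        def __init__(self, performance_element, learning_element,
                     critic, problem_generator):
            self.performance_element = performance_element  # chooses actions
            self.learning_element = learning_element        # improves the performance element
            self.critic = critic                            # judges how well the agent is doing
            self.problem_generator = problem_generator      # suggests exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)
            self.learning_element(self.performance_element, feedback)
            exploratory = self.problem_generator(percept)   # None if nothing to explore
            return exploratory or self.performance_element(percept)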
Summary
• There are various types of agents, and none of them can exist without an external environment.
• Different agent types trade off efficiency and flexibility.
Using ourselves as a model and our world as the environment (are we too ambitious?), you may:
• Describe options for future consideration
• Recommend a new type of agent