
Intelligent Agents Lecture # 2&3


Presentation Transcript


  1. Intelligent Agents Lecture # 2&3

  2. Objectives • Agents and environments • Rationality • PEAS (Performance measure, Environment, Actuators, Sensors) • Environment types • Agent types

  3. Agents • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators • Human agent: • eyes, ears, and other organs for sensors; • legs, mouth, and other body parts for actuators • Robotic agent: • cameras and infrared range finders for sensors; • various motors for actuators • Software agents (softbots) have some functions acting as sensors and some functions acting as actuators • Askjeeves.com is an example of a softbot.

  4. Glossary • Percept – the agent’s perceptual inputs • Percept sequence – the history of everything the agent has perceived • Agent function – describes the agent’s behaviour; maps any percept sequence to an action • Agent program – implements the agent function

  5. Agents and environments • The agent function maps from percept histories to actions: f: P* → A • The agent program runs on the physical architecture to produce f: agent = architecture + program
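
A minimal sketch, in Python, of the "agent = architecture + program" idea; the class and method names are illustrative, not from the slides:

    class Agent:
        """A tiny architecture: stores the percept sequence, runs the program."""

        def __init__(self, program):
            self.program = program   # agent program: percept sequence -> action
            self.percepts = []       # percept history, the P* seen so far

        def step(self, percept):
            """Feed one percept in, get one action out (realises f: P* -> A)."""
            self.percepts.append(percept)
            return self.program(self.percepts)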

  6. Example: Vacuum-cleaner Agent • Environment: squares A and B • Percepts: [location and contents], e.g., [A, Dirty] • Actions: Left, Right, Suck, and NoOp • A simple agent function may be: “if the current square is dirty, then suck; otherwise, move to the other square”

  7. Example: Vacuum-cleaner Agent • Agent Program:

    function REFLEX-VACUUM-AGENT([location, status]) returns an action
        if status = Dirty then return Suck
        else if location = A then return Right
        else if location = B then return Left
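
The same agent program in runnable Python (a sketch; the percept is assumed to be a (location, status) pair as on the previous slide):

    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        elif location == 'B':
            return 'Left'

    # e.g. reflex_vacuum_agent(('A', 'Dirty')) returns 'Suck'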

  8. Intelligent Agents • The fundamental faculties of intelligence are • Acting • Sensing • Understanding, reasoning, learning • An intelligent agent must sense, must act, and must be autonomous (to some extent). • It must also be rational. • AI is about building rational agents.

  9. Rational Agent • An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. • What is the right thing? – The thing that causes the agent to be most successful – Rationality is not the same as perfection – Rationality maximizes “expected performance”; perfection maximizes “actual performance” • How do we evaluate an agent’s success? • With a performance measure – a criterion for evaluating an agent’s behaviour; e.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the time taken, the electricity consumed, the noise generated, etc. • Define the performance measure according to what is wanted in the environment, rather than according to how the agent should behave.

  10. Rationality – Cont’d • What is rational depends on four things: – The performance measure – The agent’s prior knowledge of the environment – The actions the agent can perform – The agent’s percept sequence to date • Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
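
One compact way to write this definition (the notation here is ours, not from the slides): after percept sequence e_{1:t}, a rational agent with built-in knowledge K selects

    a^{*} = \operatorname*{argmax}_{a \in A} \; \mathbb{E}\left[\, \text{performance} \mid e_{1:t},\, a,\, K \,\right]

i.e., the action whose expected performance, given the evidence to date, is highest.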

  11. Rational Agents – Cont’d • Rationality is distinct from omniscience (all-knowing with infinite knowledge) • Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration – an important part of rationality) • An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt) – a rational agent should be autonomous. Rational ⇒ exploration, learning, autonomy

  12. Is our vacuum-cleaner agent rational? Keeping in view the four aspects used to measure rationality: • The performance measure awards one point for each clean square at each time step, over 10,000 time steps. • The geography of the environment is known: clean squares stay clean, and sucking cleans the current square. • The only actions are Left, Right, Suck, and NoOp. • The agent correctly perceives its location and whether that location contains dirt. What’s your answer – Yes/No?

  13. Building Rational Agents: PEAS Description to Specify Task Environments • To design a rational agent we need to specify a task environment – a problem specification for which the agent is the solution • PEAS: to specify a task environment • P: Performance measure • E: Environment • A: Actuators • S: Sensors
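
A PEAS description can be written down directly as data. A minimal sketch in Python; the TaskEnvironment name and its fields are ours, not a standard API:

    from dataclasses import dataclass

    @dataclass
    class TaskEnvironment:
        performance_measure: list
        environment: list
        actuators: list
        sensors: list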

  14. PEAS: Specifying an automated taxi driver • Performance measure: ? • Environment: ? • Actuators: ? • Sensors: ?

  15–18. PEAS: Specifying an automated taxi driver • Performance measure: safe, fast, legal, comfortable, maximize profits • Environment: roads, other traffic, pedestrians, customers • Actuators: steering, accelerator, brake, signal, horn • Sensors: cameras, sonar, speedometer, GPS
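
The completed taxi-driver description, using the TaskEnvironment record sketched above:

    taxi_driver = TaskEnvironment(
        performance_measure=['safe', 'fast', 'legal', 'comfortable', 'maximize profits'],
        environment=['roads', 'other traffic', 'pedestrians', 'customers'],
        actuators=['steering', 'accelerator', 'brake', 'signal', 'horn'],
        sensors=['cameras', 'sonar', 'speedometer', 'GPS'],
    )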

  19. PEAS: Specifying a part picking robot • Performance measure: Percentage of parts in correct bins • Environment: Conveyor belt with parts, bins • Actuators: Jointed arm and hand • Sensors: Camera, joint angle sensors

  20. PEAS: Specifying an interactive English tutor • Performance measure: Maximize student's score on test • Environment: Set of students • Actuators: Screen display (exercises, suggestions, corrections) • Sensors: Keyboard

  21. Environment types/Properties of Task Environments • Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time. • e.g., a taxi agent has no sensors to see what other drivers are doing or thinking, so its environment is only partially observable • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic) • The vacuum world is deterministic, while taxi driving is stochastic – one cannot exactly predict the behaviour of traffic

  22. Environment types • Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. • E.g., an agent sorting defective parts on an assembly line is episodic, while a taxi-driving agent or a chess-playing agent is sequential • Static (vs. dynamic): The environment is unchanged while an agent is deliberating. • (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does) • Taxi driving is dynamic; crossword-puzzle solving is static

  23. Environment types – cont’d • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. • e.g., a chess game has a finite number of distinct states • Taxi driving is a continuous-state and continuous-time problem • Single agent (vs. multiagent): An agent operating by itself in an environment. • An agent solving a crossword puzzle is in a single-agent environment • An agent playing chess is in a two-agent environment

  24. Examples • The environment type largely determines the agent design • The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multiagent
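
A sketch of these classifications recorded as data, so an agent designer could dispatch on them. The values follow the slides; the dictionary itself is illustrative. Chess is marked deterministic in the "strategic" sense: deterministic apart from the opponent's moves.

    ENVIRONMENTS = {
        'crossword puzzle': dict(observable='fully', deterministic=True,
                                 episodic=False, static=True, discrete=True, agents=1),
        'chess':            dict(observable='fully', deterministic=True,
                                 episodic=False, static=True, discrete=True, agents=2),
        'taxi driving':     dict(observable='partially', deterministic=False,
                                 episodic=False, static=False, discrete=False, agents='many'),
    }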

  25. Agent types Four basic types in order of increasing generality: • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents – All of these can be turned into learning agents

  26. Simple reflex agents • Information comes from sensors – percepts • These update the agent’s view of the current state of the world • The agent selects actions on the basis of the current percept only • It triggers actions through the effectors • Condition-action rules allow the agent to make the connection from percept to action: • if car-in-front-is-braking then brake • if light-becomes-green then move-forward • if intersection-has-stop-sign then stop • if dirty then suck

  27. Simple reflex agents • Characteristics • Such agents have limited intelligence • Efficient • No internal representation for reasoning or inference • No strategic planning or learning • Not good for multiple, opposing goals • Work only if the correct decision can be made on the basis of the current percept

  28. Simple reflex agents

    function SIMPLE-REFLEX-AGENT(percept) returns an action
        static: rules, a set of condition-action rules
        state ← INTERPRET-INPUT(percept)
        rule ← RULE-MATCH(state, rules)
        action ← RULE-ACTION[rule]
        return action

Will only work if the environment is fully observable; otherwise infinite loops may occur.
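
A runnable Python sketch of SIMPLE-REFLEX-AGENT. Representing each rule as a (condition-predicate, action) pair is our assumption for illustration:

    def make_simple_reflex_agent(rules, interpret_input):
        def program(percept):
            state = interpret_input(percept)      # INTERPRET-INPUT
            for condition, action in rules:       # RULE-MATCH
                if condition(state):
                    return action                 # RULE-ACTION
            return 'NoOp'                         # no rule fired
        return program

    # Usage, recreating the vacuum agent:
    vacuum = make_simple_reflex_agent(
        rules=[(lambda s: s['status'] == 'Dirty', 'Suck'),
               (lambda s: s['location'] == 'A', 'Right'),
               (lambda s: s['location'] == 'B', 'Left')],
        interpret_input=lambda p: {'location': p[0], 'status': p[1]},
    )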

  29. Simple reflex agents

  30. Model based reflex agents • These agents keep track of the part of the world they can’t see • Designed to tackle partially observable environments • To update its state the agent needs two kinds of knowledge: 1. how the world evolves independently of the agent; e.g., an overtaking car gets closer with time. 2. how the world is affected by the agent’s actions; e.g., if I turn left, what was to my right is now behind me. • Thus a model based agent works as follows: • information comes from sensors – percepts • based on this, the agent updates its internal state of the world • based on the state of the world and its knowledge (memory), it triggers actions through the effectors • E.g. • knowing the other car’s location in an overtaking scenario for a taxi-driving agent • knowing that when the agent turns the steering wheel clockwise, the car turns right

  31. Model based reflex agents

    function REFLEX-AGENT-WITH-STATE(percept) returns an action
        static: rules, a set of condition-action rules
                state, a description of the current world state
                action, the most recent action, initially none
        state ← UPDATE-STATE(state, action, percept)
        rule ← RULE-MATCH(state, rules)
        action ← RULE-ACTION[rule]
        return action
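
A runnable Python sketch of REFLEX-AGENT-WITH-STATE. Here update_state embodies the agent's model (how the world evolves, and what its actions do); its exact signature is an assumption:

    def make_model_based_agent(rules, update_state, initial_state):
        memory = {'state': initial_state, 'action': None}
        def program(percept):
            # UPDATE-STATE folds the last action and the new percept into memory
            memory['state'] = update_state(memory['state'], memory['action'], percept)
            for condition, action in rules:       # RULE-MATCH
                if condition(memory['state']):
                    memory['action'] = action
                    return action
            memory['action'] = 'NoOp'
            return 'NoOp'
        return program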

  32. Model-based reflex agents

  33. Goal based agents • The current state of the environment is not always enough • e.g., at a road junction, a taxi can turn left, turn right, or go straight on • The correct decision in such cases depends on where the taxi is trying to get to • Major difference: the future is taken into account • Combining goal information with the knowledge of its actions, the agent can choose those actions that will achieve the goal • Goal-based agents are much more flexible in responding to a changing environment and in accepting different goals • Such agents work as follows: • information comes from sensors – percepts • the agent updates its current state of the world • based on the state of the world, its knowledge (memory), and its goals/intentions, it chooses actions and performs them through the effectors
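
A one-step sketch of goal-based selection in Python: simulate each action with the world model and pick one whose outcome satisfies the goal. The helpers result and satisfies are hypothetical, and a real goal-based agent would search or plan over whole action sequences, not single actions:

    def goal_based_action(state, actions, result, satisfies, goal):
        for action in actions:
            # "what will happen if I do this, and does it reach my goal?"
            if satisfies(result(state, action), goal):
                return action
        return 'NoOp'   # no single action reaches the goal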

  34. Goal-based agents

  35. Utility based agents • Goals alone are not always enough to generate quality behaviours • e.g., different action sequences can take the taxi agent to its destination (thereby achieving “the goal”), but some may be quicker, safer, or more economical • A general performance measure is required to compare different world states • A utility function maps a state (or sequence of states) to a real number, allowing rational decisions and explicit trade-offs when: • goals are conflicting – like speed and safety • there are several goals, none of which can be achieved with certainty
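
A Python sketch of utility-based selection: replace the binary goal test with a real-valued utility and average over uncertain outcomes. The helpers outcomes and utility are assumptions:

    def expected_utility(state, action, outcomes, utility):
        # outcomes(state, action) yields (probability, next_state) pairs
        return sum(p * utility(s) for p, s in outcomes(state, action))

    def utility_based_action(state, actions, outcomes, utility):
        return max(actions, key=lambda a: expected_utility(state, a, outcomes, utility))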

  36. Utility-based agents

  37. Learning agents • A learning agent can be divided into four conceptual components: • Learning element • Responsible for making improvements to the performance element • uses feedback from the critic to determine how the performance element should be modified to do better • Performance element • Responsible for taking external actions • selects actions based on percepts • Critic • Tells the learning element how well the agent is doing w.r.t. a fixed performance standard • Problem generator • Responsible for suggesting actions that will lead to new and informative experiences

  38. For a taxi driver agent: • The performance element consists of the collection of knowledge and procedures for selecting driving actions • The critic observes the world and passes information to the learning element – e.g., the reaction of other drivers when the agent takes a quick left turn from the top lane • The learning element can then formulate a rule marking that a “bad action” • The problem generator identifies certain areas of behaviour for improvement and suggests experiments – trying the brakes on different road conditions, etc. • The learning element can also make changes to the “knowledge” components – observing pairs of successive states allows the agent to learn, e.g., what happens when a strong brake is applied on a wet road
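
A skeleton wiring the four components together in Python; every class and method name here is illustrative, the point is the flow of feedback:

    class LearningAgent:
        def __init__(self, performance_element, learning_element, critic, problem_generator):
            self.performance = performance_element   # selects external actions
            self.learning = learning_element         # improves the performance element
            self.critic = critic                     # scores behaviour vs. a fixed standard
            self.generator = problem_generator       # proposes informative experiments

        def step(self, percept):
            feedback = self.critic.evaluate(percept)
            self.learning.improve(self.performance, feedback)
            # Occasionally explore an action suggested by the problem generator
            suggestion = self.generator.suggest(percept)
            return suggestion if suggestion else self.performance.choose(percept)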

  39. Learning agents

  40. Summary An agent is something that perceives and acts in an environment. The agent function specifies the action taken by the agent in response to any percept sequence. • The performance measure evaluates the behaviour of the agent in the environment. A rational agent acts to maximise the expected value of the performance measure. • Task environments can be fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and single-agent or multiagent

  41. Summary • Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected “happiness”. • All agents can improve their performance through learning
