Introduction to Artificial Intelligence LECTURE 2 : Intelligent Agents


Presentation Transcript


  1. Introduction to Artificial Intelligence LECTURE 2: Intelligent Agents • What is an intelligent agent? • Structure of intelligent agents • Environments • Examples

  2. Intelligent agents: their environment and actions

  3. Ideal rational agents • For each possible percept sequence, an ideal rational agent should take the action that is expected to maximize its performance measure, based on evidence from the percept sequence and its built-in knowledge. • Key concept: mapping from perceptions to actions • Different architectures to realize the mapping

  4. Structure of intelligent agents • Agent program: a program that implements the mapping from percepts to actions • Architecture: the platform to run the program (note: not necessarily the hardware!) • Agent = architecture + program • Examples: medical diagnosis, part-picking robot, satellite image analysis, interactive tutor, refinery controller, flight simulator

  5. Illustrative example: taxi driver

  6. Table-Driven Agents
     function Table-Driven-Agent(percept) returns action
        static: percepts, a sequence, initially empty
                table, indexed by percept sequences (given)
        append percept to the end of percepts
        action := LOOKUP(percepts, table)
        return action
     • Keeps a list of all percepts seen so far
     • Table: too large, takes too long to build, might not be available
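As a concrete illustration, here is a minimal Python sketch of the table-driven scheme, a direct translation of the pseudocode above; the percept names and table entries are hypothetical toys, since a real table would be far too large to write down:

    # Table-driven agent: look up the full percept history in a given table.
    percepts = []                     # sequence of all percepts seen so far
    table = {                         # indexed by percept sequences (hypothetical)
        ("dirty",): "suck",
        ("dirty", "clean"): "right",
    }

    def table_driven_agent(percept):
        percepts.append(percept)                  # append percept to percepts
        return table.get(tuple(percepts))         # LOOKUP(percepts, table)

    print(table_driven_agent("dirty"))    # -> suck
    print(table_driven_agent("clean"))    # -> right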

  7. Simple Reflex Agent (1)

  8. Simple Reflex Agent (2)
     function Simple-Reflex-Agent(percept) returns action
        static: rules, a set of condition-action rules
        state := Interpret-Input(percept)
        rule := Rule-Match(state, rules)
        action := Rule-Action[rule]
        return action
     • No memory, no planning
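A minimal Python sketch of the same loop, assuming a vacuum-style world in which the percept already serves as the state description (the rules here are hypothetical):

    # Simple reflex agent: condition-action rules, no memory, no planning.
    rules = {"dirty": "suck", "clean": "right"}   # hypothetical rules

    def interpret_input(percept):
        # Interpret-Input: here the percept is already a usable state.
        return percept

    def simple_reflex_agent(percept):
        state = interpret_input(percept)
        rule = state if state in rules else None  # Rule-Match(state, rules)
        return rules[rule] if rule else None      # Rule-Action[rule]

    print(simple_reflex_agent("dirty"))   # -> suck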

  9. Reflex Agents with States (1)

  10. Reflex Agents with States (2)
     function Reflex-Agent-State(percept) returns action
        static: rules, a set of condition-action rules
                state, a description of the current state
        state := Update-State(state, percept)
        rule := Rule-Match(state, rules)
        action := Rule-Action[rule]
        state := Update-State(state, action)
        return action
     • still no longer-term planning
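A sketch of the stateful variant in Python; here Update-State is a stand-in that merely records the last percept and action, which is the smallest internal state that shows the pattern:

    # Reflex agent with internal state (stand-in Update-State, hypothetical rules).
    class ReflexAgentWithState:
        def __init__(self, rules):
            self.rules = rules
            self.state = {"last_percept": None, "last_action": None}

        def __call__(self, percept):
            self.state["last_percept"] = percept   # Update-State(state, percept)
            action = self.rules.get(percept)       # Rule-Match + Rule-Action
            self.state["last_action"] = action     # Update-State(state, action)
            return action

    agent = ReflexAgentWithState({"dirty": "suck", "clean": "right"})
    print(agent("dirty"), agent.state)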

  11. Goal-based Agents (1)

  12. Goal-based Agents (2)
     function Goal-Based-Agent(percept, goal) returns action
        static: rules, a set of condition-action rules
                state, a description of the current state
        state := Update-State(state, percept)
        rule := Plan-Best-Move(state, rules, goal)
        action := Rule-Action[rule]
        state := Update-State(state, action)
        return action
     • longer-term planning, but what about cost?
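One simple way to realise Plan-Best-Move is a one-step lookahead: predict each rule's successor state and pick the rule that lands closest to the goal. A toy numeric sketch under that assumption (all names hypothetical; real planners search many steps ahead):

    # Goal-based choice via one-step lookahead (a toy stand-in for planning).
    def plan_best_move(state, rules, goal, predict):
        # Pick the rule whose predicted successor is closest to the goal.
        return min(rules, key=lambda rule: abs(goal - predict(state, rule)))

    # Toy world: states are integers, each "rule" shifts the state.
    predict = lambda s, rule: s + rule
    print(plan_best_move(3, rules=[-1, +1], goal=10, predict=predict))  # -> 1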

  13. Utility-based Agents (1)

  14. Utility-based Agents (2) • Add utility evaluation: not only how close the action takes the agent to the goal, but also how useful the resulting state is for the agent • Note: both goal- and utility-based agents can plan with constructs other than rules • Other aspects to be considered: • uncertainty in perceptions and actions • incomplete knowledge • environment characteristics
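The step from goal-based to utility-based selection is small: replace the binary goal test with a real-valued utility and maximise it. A sketch continuing the toy numeric world above, with a hypothetical utility that trades goal distance against action cost:

    # Utility-based choice: maximise the utility of the predicted outcome.
    def best_action(state, actions, predict, utility):
        return max(actions, key=lambda a: utility(predict(state, a), a))

    predict = lambda s, a: s + a
    cost = {-1: 0.5, +1: 1.5}                        # hypothetical action costs
    utility = lambda s, a: -abs(10 - s) - cost[a]    # closeness to 10, minus cost
    print(best_action(3, [-1, +1], predict, utility))  # -> 1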

  15. Properties of environments What is the environment in which the agent acts? • Accessible to inaccessible: is the state of the world fully known at each step? • Deterministic to nondeterministic: how much is the next state determined by the current state? • Episodic to non-episodic: how much state memory? • Static to dynamic: how much independent change? • Discrete to continuous: how clearly are the actions and percepts differentiated?

  16. Examples of environments • Chess: accessible, deterministic, nonepisodic, static, discrete. • Poker: inaccessible, nondeterministic, nonepisodic, static, discrete. • Satellite image analysis: accessible, deterministic, episodic, semi-static, continuous. • Taxi driving: none of the above (inaccessible, nondeterministic, non-episodic, dynamic, continuous)!

  17. Environment Programs (1) • A program to run the individual agents and coordinate their actions -- like an operating system. • Control strategies: • sequential: each agent perceives and acts once • asynchronous: let the agents communicate • blackboard: post tasks and have agents pick them • Agents must not have access to the environment program state!

  18. Environment Programs (2)
     function Run-Environment(state, Update-Function, agents, Termination-Test, Performance-Function) returns scores
        repeat
           for each agent in agents do
              Percept[agent] := Get-Percept(agent, state)
           for each agent in agents do
              Action[agent] := Program[agent](Percept[agent])
           state := Update-Function(Action, agents, state)
           scores := Performance-Function(scores, agents, state)
        until Termination-Test(state)
        return scores
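Under simplifying assumptions (a single shared state object, the sequential control strategy from slide 17, and helper functions passed in as parameters), the loop can be rendered as runnable Python:

    # Sketch of Run-Environment: agents see only their percept, never `state`.
    def run_environment(state, get_percept, update_fn, agents,
                        termination_test, performance_fn):
        scores = {name: 0 for name in agents}
        while not termination_test(state):
            percepts = {name: get_percept(name, state) for name in agents}
            actions = {name: program(percepts[name])
                       for name, program in agents.items()}
            state = update_fn(actions, state)
            scores = performance_fn(scores, state)
        return scores

    # Toy usage: two agents jointly push a counter up to 5.
    scores = run_environment(
        state=0,
        get_percept=lambda name, s: s,
        update_fn=lambda actions, s: s + sum(actions.values()),
        agents={"a": lambda p: 1, "b": lambda p: 1},
        termination_test=lambda s: s >= 5,
        performance_fn=lambda sc, s: {n: v + 1 for n, v in sc.items()},
    )
    print(scores)   # -> {'a': 3, 'b': 3}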

  19. Environment Programs: Examples • Chess • two agents, take turns to move. • Electronic stock market bidding • many agents, asynchronous, blackboard-based. • Robot soccer playing • in the physical world, no environment program!

  20. Summary • Formulate problems in terms of agents, percepts, actions, states, goals, and environments • Different types of problems according to the above characteristics. • Key concepts: • generate and search the state space of problems • environment programs: control architectures • problem modelling is essential

  21. ?
