Introduction to Artificial Intelligence
LECTURE 2: Intelligent Agents
• What is an intelligent agent?
• Structure of intelligent agents
• Environments
• Examples
Ideal rational agents • For each possible percept sequence, an ideal rational agent should take the action that is expected to maximize its performance measure, based on evidence from the percept sequence and its built-in knowledge. • Key concept: mapping from perceptions to actions • Different architectures to realize the mapping
Structure of intelligent agents
• Agent program: a program that implements the mapping from percepts to actions
• Architecture: the platform that runs the program (note: not necessarily the hardware!)
• Agent = architecture + program
• Examples:
• medical diagnosis
• satellite image analysis
• refinery controller
• part-picking robot
• interactive tutor
• flight simulator
Table-Driven Agents
function Table-Driven-Agent(percept) returns action
  static: percepts, a sequence, initially empty
          table, indexed by percept sequences (given)
  append percept to the end of percepts
  action := LOOKUP(percepts, table)
  return action
• Keeps a list of all percepts seen so far
• Problems: the table is too large, takes too long to build, and might not even be available
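The table-driven scheme above can be sketched in a few lines of Python. The table maps complete percept sequences (as tuples) to actions; the table contents and percept values here are illustrative, not part of the original slides.

```python
# Minimal sketch of a table-driven agent: the table is keyed by the
# ENTIRE percept sequence seen so far, which is why it blows up in size.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # dict: percept-sequence tuple -> action
        self.percepts = []      # every percept seen so far

    def __call__(self, percept):
        self.percepts.append(percept)
        # Look up the full history; None if the table has no entry for it.
        return self.table.get(tuple(self.percepts))

# Toy two-step table for a vacuum-like world
table = {("dirty",): "clean", ("dirty", "clean"): "stop"}
agent = TableDrivenAgent(table)
print(agent("dirty"))   # -> clean
print(agent("clean"))   # -> stop
```

Note that even this toy table needs one entry per possible history, which is the scaling problem the slide points out.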
Simple Reflex Agent (2)
function Simple-Reflex-Agent(percept) returns action
  static: rules, a set of condition-action rules
  state := Interpret-Input(percept)
  rule := Rule-Match(state, rules)
  action := Rule-Action[rule]
  return action
• No memory, no planning
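A minimal Python rendering of the simple reflex agent above. The rules, the Interpret-Input step, and the percept format are illustrative stand-ins.

```python
# Simple reflex agent: react to the current percept only, no memory.

def interpret_input(percept):
    # Trivial state abstraction: the percept itself is the state.
    return percept

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    for condition, action in rules:   # Rule-Match: first rule whose condition holds
        if condition(state):
            return action

rules = [(lambda s: s == "dirty", "clean"),
         (lambda s: True, "move")]    # catch-all default rule
print(simple_reflex_agent("dirty", rules))   # -> clean
print(simple_reflex_agent("clean", rules))   # -> move
```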
Reflex Agents with States (2)
function Reflex-Agent-State(percept) returns action
  static: rules, a set of condition-action rules
          state, a description of the current state
  state := Update-State(state, percept)
  rule := Rule-Match(state, rules)
  action := Rule-Action[rule]
  state := Update-State(state, action)
  return action
• Still no longer-term planning
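The same pattern with internal state can be sketched as below; the state-update logic and rules are illustrative placeholders.

```python
# Reflex agent with internal state: percepts accumulate into a state
# description, and rules match against that state rather than the raw percept.

class ReflexAgentWithState:
    def __init__(self, rules):
        self.rules = rules
        self.state = {"seen": set()}

    def update_state(self, percept=None, action=None):
        if percept is not None:
            self.state["seen"].add(percept)
        # The chosen action could also update the state (predicted effects).

    def __call__(self, percept):
        self.update_state(percept=percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.update_state(action=action)
                return action

rules = [(lambda s: "dirty" in s["seen"], "clean"),
         (lambda s: True, "move")]
agent = ReflexAgentWithState(rules)
print(agent("clean"))   # -> move (no dirt seen yet)
print(agent("dirty"))   # -> clean
```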
Goal-based Agents (2)
function Goal-Based-Agent(percept, goal) returns action
  static: rules, a set of condition-action rules
          state, a description of the current state
  state := Update-State(state, percept)
  rule := Plan-Best-Move(state, rules, goal)
  action := Rule-Action[rule]
  state := Update-State(state, action)
  return action
• Longer-term planning, but what about cost?
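The Plan-Best-Move step can be illustrated with a one-step lookahead in Python: score each candidate action by how close its predicted next state is to the goal. The 1-D world and transition model are hypothetical examples, not from the slides.

```python
# Goal-based choice via one-step lookahead on a toy 1-D world.

def predict(state, action):
    # Hypothetical transition model: move one step left or right.
    x = state["pos"]
    return {"pos": x + 1} if action == "right" else {"pos": x - 1}

def plan_best_move(state, actions, goal):
    # Pick the action whose predicted state is closest to the goal position.
    return min(actions, key=lambda a: abs(predict(state, a)["pos"] - goal))

print(plan_best_move({"pos": 2}, ["left", "right"], goal=5))   # -> right
```

A real goal-based agent would search several steps ahead rather than one; that search is exactly where the cost question on the slide comes in.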
Utility-based Agents (2) • Add utility evaluation: not only how close does the action take me to the goal, but also how useful it is for the agent • Note: both goal and utility-based agents can plan with constructs other than rules • Other aspects to be considered: • uncertainty in perceptions and actions • incomplete knowledge • environment characteristics
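The difference from a goal-based agent can be shown by replacing the goal test with a real-valued utility that trades progress against cost. The states, transition model, and utility weights below are all illustrative assumptions.

```python
# Utility-based choice: maximize a scalar utility over predicted states,
# trading distance-to-goal against fuel cost (numbers are arbitrary).

def predict(state, action):
    return {"pos": state["pos"] + (1 if action == "right" else -1),
            "fuel": state["fuel"] - (2 if action == "right" else 1)}

def utility(state, goal=5):
    # Closer to the goal is better; remaining fuel is also worth something.
    return -abs(state["pos"] - goal) + 0.5 * state["fuel"]

def utility_based_agent(state, actions):
    return max(actions, key=lambda a: utility(predict(state, a)))

print(utility_based_agent({"pos": 2, "fuel": 10}, ["left", "right"]))   # -> right
```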
Properties of environments
What kind of environment does the agent act in?
• Accessible vs. inaccessible: is the state of the world fully known at each step?
• Deterministic vs. nondeterministic: how much is the next state determined by the current state?
• Episodic vs. non-episodic: how much state memory is needed?
• Static vs. dynamic: how much does the environment change on its own?
• Discrete vs. continuous: how clearly are the actions and percepts differentiated?
Examples of environments
• Chess: accessible, deterministic, nonepisodic, static, discrete.
• Poker: inaccessible, nondeterministic, nonepisodic, static, discrete.
• Satellite image analysis: accessible, deterministic, nonepisodic, semi-static, continuous.
• Taxi driving: hard on every dimension — inaccessible, nondeterministic, nonepisodic, dynamic, continuous!
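The classification above can be recorded as a small lookup table; the boolean encoding below is my own illustrative convention (True marks the "easy" side of each dimension).

```python
# Environment classifications from the slide, one tuple per environment:
# (accessible, deterministic, episodic, static, discrete).

ENVIRONMENTS = {
    "chess":           (True,  True,  False, True,  True),
    "poker":           (False, False, False, True,  True),
    "satellite image": (True,  True,  False, True,  False),  # near-static
    "taxi driving":    (False, False, False, False, False),
}

def hardest(envs):
    # The environment lacking the most "easy" properties.
    return max(envs, key=lambda e: sum(not p for p in envs[e]))

print(hardest(ENVIRONMENTS))   # -> taxi driving
```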
Environment Programs (1) • A program to run the individual agents and coordinate their actions -- like an operating system. • Control strategies: • sequential: each agent perceives and acts once • asynchronous: let the agents communicate • blackboard: post tasks and have agents pick them • Agents must not have access to the environment program state!
Environment Programs (2)
function Run-Environment(state, Update-Function, agents,
                         Termination-Test, Performance-Function) returns scores
  repeat
    for each agent in agents do
      Percept[agent] := Get-Percept(agent, state)
    for each agent in agents do
      Action[agent] := Program[agent](Percept[agent])
    state := Update-Function(Action, agents, state)
    scores := Performance-Function(scores, agents, state)
  until Termination-Test(state)
  return scores
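The simulation loop above translates directly into Python. Agents are callables from percept to action; the percept function, update function, and scoring function below are illustrative stand-ins, and the toy run at the end is a made-up example.

```python
# Runnable version of the Run-Environment loop: gather percepts,
# collect actions, update the world, update the scores, repeat.

def run_environment(state, update_fn, agents, terminated, performance_fn,
                    get_percept):
    scores = {id(a): 0 for a in agents}
    while not terminated(state):
        percepts = {id(a): get_percept(a, state) for a in agents}
        actions = {id(a): a(percepts[id(a)]) for a in agents}
        state = update_fn(actions, agents, state)
        scores = performance_fn(scores, agents, state)
    return scores

# Toy run: one agent increments a counter until the state reaches 3.
agent = lambda percept: 1
final = run_environment(
    state=0,
    update_fn=lambda actions, agents, s: s + sum(actions.values()),
    agents=[agent],
    terminated=lambda s: s >= 3,
    performance_fn=lambda scores, agents, s: {k: s for k in scores},
    get_percept=lambda a, s: s,
)
print(list(final.values()))   # -> [3]
```

Note that the agents receive only their percepts, never the environment state itself, which enforces the "no access to the environment program state" rule from the previous slide.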
Environment Programs: Examples • Chess • two agents, take turns to move. • Electronic stock market bidding • many agents, asynchronous, blackboard-based. • Robot soccer playing • in the physical world, no environment program!
Summary • Formulate problems in terms of agents, percepts, actions, states, goals, and environments • Different types of problems according to the above characteristics. • Key concepts: • generate and search the state space of problems • environment programs: control architectures • problem modelling is essential