Agents: the AI metaphor
The agent model
• agents include all aspects of AI in one object-oriented organizing model
[diagram: an AGENT with a purpose, perceiving and acting on its environment]
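A minimal sketch of this model in Java (the class and method names, and the toy "dirty cell" environment, are illustrative assumptions, not from the slides): the agent maps percepts to actions, and its actions change the state of the environment.

/** A minimal sketch of the agent model: perceive, decide, act. Names are illustrative. */
public class AgentModelSketch {
    interface Agent { String act(String percept); }

    /** Toy environment: a single cell that is either "dirty" or "clean". */
    static class Cell {
        String state = "dirty";
        String percept() { return state; }
        void apply(String action) { if (action.equals("clean")) state = "clean"; }
    }

    public static void main(String[] args) {
        Agent vacuum = percept -> percept.equals("dirty") ? "clean" : "noop"; // purpose: remove dirt
        Cell env = new Cell();
        String action = vacuum.act(env.percept());  // perceive, then act
        env.apply(action);                          // the action changes the environment
        System.out.println("action=" + action + ", environment now " + env.state);
    }
}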
Agents and decentralization
[diagram: Mars Rover – direct control vs. agent]
Purpose
• Why do agents act? goals:
  • internal (state of the agent's structure, e.g., survive)
  • external (state of the environment, e.g., clean up dirt)
• How to measure success?
  • compare actual results to goals
  • R&N 'performance measure'
Performance measure
• external to the agent (like a javadoc specification)
• an ideal that cannot always be achieved completely (unlike javadoc specs)
Agent success ('rationality') is evaluated based on the performance measure AND the agent's percepts, possible actions, and experience (like an athlete).
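As a rough illustration of "external to the agent" (the cleaning scenario and the scoring rule are assumptions), a performance measure can be written as a function of the environment's state history, applied from outside the agent:

import java.util.List;

/** Sketch: the performance measure scores the environment's history, not the agent's opinion of itself. */
public class PerformanceMeasureSketch {
    /** Illustrative measure for a cleaning agent: one point per time step the cell was clean. */
    static int performance(List<String> stateHistory) {
        return (int) stateHistory.stream().filter(s -> s.equals("clean")).count();
    }

    public static void main(String[] args) {
        // History recorded by the environment while some agent was running.
        List<String> history = List.of("dirty", "clean", "clean", "clean");
        System.out.println("score = " + performance(history));  // 3
    }
}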
Factors in rationality
• performance measure – goals may be in conflict; they can't all be achieved
• perceptions – the agent may not have all the facts
• actions available
• experience – the agent may not yet have accumulated all available relevant data
Agents are not just methods
• actual outcomes of actions are not known 100%
• algorithms are not complete solutions – agents should be partly autonomous and learn from experience:
  • gather data about the environment
  • respond better to the same perceptions
The agent model
• agents include all aspects of AI in one organizing model
[diagram: an AGENT with a purpose, perceiving and acting on its environment]
Example: CASH REGISTER AS AGENT
• Goals: get payment for items, update inventory, accumulate payments
• Perceive bar code
• Know price lists
• Understand finding prices and names from bar codes
• Understand accumulating the bill
• Act to send price and code to accounting, send inventory change to the db
• Act to display item name, price, and running total
• Perceive signal for no-more-items
• Act to request payment
• Perceive payment
• …
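A possible sketch of part of this example in Java, with made-up bar codes and prices; it shows only the perceive/act loop for scanning items and requesting payment, not the accounting or inventory actions.

import java.util.List;
import java.util.Map;

/** Sketch of the cash-register-as-agent loop; item codes and prices are invented for illustration. */
public class CashRegisterAgent {
    private final Map<String, Double> priceList = Map.of("123", 2.50, "456", 4.00); // knowledge
    private double total = 0.0;                                                     // internal state

    /** Perceive one bar code, act by displaying the price and updating the running total. */
    void scan(String barCode) {
        double price = priceList.getOrDefault(barCode, 0.0);
        total += price;
        System.out.println("item " + barCode + " price " + price + " total " + total);
        // a real register would also act on accounting and inventory here
    }

    void noMoreItems() { System.out.println("please pay " + total); }  // act: request payment

    public static void main(String[] args) {
        CashRegisterAgent register = new CashRegisterAgent();
        for (String code : List.of("123", "456")) register.scan(code);  // perceive bar codes
        register.noMoreItems();                                         // perceive end-of-items signal
    }
}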
Environments
• real or virtual
• may contain other agents
• the factors relevant to the agent are called the state of the environment
• perceptions give the agent information about the state
• actions of the agent change the state
Categorizing Environments (R&N p. 40-44)
1. fully or partly observable – perception of the state of the environment
game examples: chess, bridge, Myst
Categorizing Environments (R&N p. 40-44)
2. actions are predictable – deterministic vs. stochastic vs. strategic
game examples: chess, Monopoly, solitaire yogo peg game, solitaire card game
Categorizing Environments (R&N p. 40-44)
3. episodic vs. sequential – actions are based on how many previous perceptions and actions?
game examples: chess, paper-scissors-rock, bridge trick, bridge hand
Categorizing Environments (R&N p. 40-44)
4. real-time vs. event-driven (static vs. dynamic) – the agent and environment are sequential or co-routines
game examples: chess, tetris
Categorizing Environments (R&N p. 40-44)
5. discrete vs. continuous – environment, perception, action
game examples: chess, tetris, driving simulator
Categorizing Environments (R&N p. 40-44)
6. number of agents – 1 or more: competitive, cooperative, codependent, interfering, communicating (info separate from perceptions)
game examples: solitaires, chess, bridge, futures, tetris, driving simulator(s), role-playing games
Categorizing King's Court:
• fully / partly observable
• deterministic / stochastic
• sequential / episodic
• static / dynamic
• discrete / continuous
• single- / multi-agent
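One way to record these six dimensions in code (a sketch; the type and field names, and the chess categorization shown, are my assumptions about how the slides' examples would be filled in):

/** Sketch: the six R&N environment dimensions as a simple record, with chess as an example. */
public class EnvironmentCategories {
    enum Observability { FULLY, PARTLY }
    enum Determinism { DETERMINISTIC, STOCHASTIC, STRATEGIC }

    record EnvironmentType(Observability obs, Determinism det, boolean sequential,
                           boolean dynamic, boolean discrete, int agents) {}

    public static void main(String[] args) {
        // Chess: fully observable, strategic, sequential, static, discrete, two agents.
        EnvironmentType chess = new EnvironmentType(
                Observability.FULLY, Determinism.STRATEGIC, true, false, true, 2);
        System.out.println(chess);
    }
}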
Agent Structure
• the agent program is 'episodic' – it receives percepts and produces actions (parameters and return values)
• BUT the internal state of the agent can evolve sequentially – the agent may be in a different state after an episode than before
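A sketch of this point, assuming String percepts and actions: each call to the agent program looks like an isolated episode, but an instance variable persists between calls, so the agent is in a different state afterwards.

/** Sketch: the agent program looks episodic (percept in, action out) but its internal state persists. */
public class StatefulAgentProgram {
    private int perceptsSeen = 0;  // internal state carried between calls

    String program(String percept) {
        perceptsSeen++;  // the agent is in a different state after each episode
        return "action-for-" + percept;
    }

    public static void main(String[] args) {
        StatefulAgentProgram agent = new StatefulAgentProgram();
        System.out.println(agent.program("p1") + " (seen " + agent.perceptsSeen + ")");
        System.out.println(agent.program("p2") + " (seen " + agent.perceptsSeen + ")");
    }
}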
Agent Structure
• Table-Driven (p. 45)
  • single-perception look-up (HUGE table)
  • perception-sequence look-up (HUGER table)
  • example game: tic-tac-toe
  • perfect solution but intractable
Table-driven agents (revised from R&N)
KNOWLEDGE: look-up table
  key       | value
  percept 1 | action 1
  percept 2 | action 2
  …
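A sketch of the table-driven idea, keyed on the whole percept sequence (the percepts and actions are invented for illustration; a real table would need an entry for every possible sequence, which is why the approach is intractable):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of a table-driven agent: the whole percept sequence is the key into a (huge) table. */
public class TableDrivenAgent {
    // Toy table for illustration only.
    private final Map<List<String>, String> table = Map.of(
            List.of("dirty"), "clean",
            List.of("dirty", "clean"), "noop");
    private final List<String> percepts = new ArrayList<>();

    String act(String percept) {
        percepts.add(percept);                        // remember the full percept sequence
        return table.getOrDefault(percepts, "noop");  // look the sequence up
    }

    public static void main(String[] args) {
        TableDrivenAgent agent = new TableDrivenAgent();
        System.out.println(agent.act("dirty"));   // clean
        System.out.println(agent.act("clean"));   // noop
    }
}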
Agent Structure
• Simple reflex (p. 46)
  • based on the current perception only
  • i.e., no instance variables in the agent object; no state
  • 'condition-action' rules (if-then-else algorithm)
Simple reflex agents – R&N [diagram]
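A sketch of a simple reflex agent as condition-action rules over the current percept only; the vacuum-style percepts and actions are assumptions for illustration, and there is deliberately no stored state.

/** Sketch of a simple reflex agent: condition-action rules on the current percept, no stored state. */
public class SimpleReflexAgent {
    static String act(String percept) {
        if (percept.equals("dirty")) return "suck";      // condition-action rule
        if (percept.equals("at-left")) return "right";   // illustrative rules only
        if (percept.equals("at-right")) return "left";
        return "noop";
    }

    public static void main(String[] args) {
        System.out.println(act("dirty"));    // suck
        System.out.println(act("at-left"));  // right
    }
}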
Agent Structure
• Model-based reflex (p. 48)
  • uses percepts to build an internal model of the environment
  • internal state is 'memory' of the environment
  • algorithm based on percepts and internal state
Model-based reflex agents – R&N [diagram]
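A sketch of the model-based idea, assuming simple location/status percepts: each percept first updates an internal model, and the choice of action can then rely on remembered state as well as the current percept.

import java.util.HashMap;
import java.util.Map;

/** Sketch of a model-based reflex agent: percepts update an internal model ("memory" of the environment). */
public class ModelBasedReflexAgent {
    private final Map<String, String> model = new HashMap<>();  // what the agent believes about each location

    String act(String location, String status) {
        model.put(location, status);                            // update the model from the percept
        if (status.equals("dirty")) return "suck";
        // use the model, not a new percept, to pick the next move
        return model.getOrDefault("B", "unknown").equals("dirty") ? "go-B" : "noop";
    }

    public static void main(String[] args) {
        ModelBasedReflexAgent agent = new ModelBasedReflexAgent();
        System.out.println(agent.act("B", "dirty"));  // suck
        System.out.println(agent.act("A", "clean"));  // go-B, remembered from the model
    }
}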
Agent Structure
• Goal-based (p. 49)
  • internal state representing the environment PLUS
  • goals expressed in terms of environment and/or agent states
  • NOT REFLEX: 'tries' actions internally and tests the results against the goals
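A sketch of the goal-based idea in a made-up one-dimensional world: each candidate action is 'tried' on an internal model and the predicted result is tested against the goal.

import java.util.List;

/** Sketch of a goal-based agent: simulate each action internally and keep one that moves toward the goal. */
public class GoalBasedAgent {
    // Illustrative world: the state is the agent's position on a line; the goal is a target position.
    static int simulate(int position, String action) {
        return action.equals("right") ? position + 1 : position - 1;
    }

    static String choose(int position, int goal) {
        for (String action : List.of("left", "right")) {
            if (Math.abs(simulate(position, action) - goal) < Math.abs(position - goal)) {
                return action;  // this action's predicted state is closer to the goal
            }
        }
        return "noop";
    }

    public static void main(String[] args) {
        System.out.println(choose(1, 3));  // right
    }
}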
Agent Structure
• Utility-based (p. 50)
  • internal state representing the environment PLUS
  • goals expressed in terms of environment and/or agent states PLUS
  • performance measure → rationality
  • 'tries' actions internally and tests the results against the goals AND the performance measure
Utility-based agents – R&N [diagram]
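A sketch of the utility-based idea in the same made-up one-dimensional world: predicted results are ranked by a numeric utility rather than only tested against a goal.

import java.util.Comparator;
import java.util.List;

/** Sketch of a utility-based agent: rank the predicted results of actions by a numeric utility. */
public class UtilityBasedAgent {
    // Illustrative utility: prefer states close to position 3.
    static double utility(int position) { return -Math.abs(position - 3); }

    static int simulate(int position, String action) {
        return action.equals("right") ? position + 1 : action.equals("left") ? position - 1 : position;
    }

    static String choose(int position) {
        return List.of("left", "right", "noop").stream()
                .max(Comparator.comparingDouble(a -> utility(simulate(position, a))))
                .orElse("noop");
    }

    public static void main(String[] args) {
        System.out.println(choose(1));  // right: its predicted state has the highest utility
    }
}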
Agent Structure
• Learning (p. 50)
  • an extra component evaluates performance and changes the program (if necessary) to act differently in the same state
  • many kinds of learning agents
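A sketch of the learning idea (the threshold parameter and the critic's rule are invented for illustration): a critic looks at a performance score and changes the agent program, so the same percept can produce a different action next time.

/** Sketch of a learning agent: a critic scores recent behaviour and adjusts a parameter of the program. */
public class LearningAgentSketch {
    private double threshold = 0.5;  // parameter of the action-selection program

    String act(double perceptValue) { return perceptValue > threshold ? "act" : "wait"; }

    /** Critic: if the performance measure reports poor results, lower the threshold. */
    void learn(double performanceScore) {
        if (performanceScore < 0) threshold -= 0.1;  // change the program for next time
    }

    public static void main(String[] args) {
        LearningAgentSketch agent = new LearningAgentSketch();
        System.out.println(agent.act(0.45));  // wait
        agent.learn(-1.0);                    // critic reports poor performance
        System.out.println(agent.act(0.45));  // act, after the program was adjusted
    }
}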
Agent Structures
• Table-driven
• Simple reflex
• Model-based reflex
• Goal-based
• Utility-based
• Learning
Example: CASH REGISTER AS AGENT
• Goals: get payment for items, update inventory, accumulate payments
• Perceive bar code
• Know price lists
• Understand finding prices and names from bar codes
• Understand accumulating the bill
• Act to send price and code to accounting, send inventory change to the db
• Act to display item name, price, and running total
• Act to request payment
• Perceive payment
• …