
Artificial Intelligence (AI)

University of Science and Technology, Faculty of Computer Science and Information Technology. Artificial Intelligence (AI). 4th Year B.Sc: Information Technology. Academic Year: 2017-2018. Instructor: Diaa Eldin Mustafa Ahmed. Intelligent Agents (IA) - (2/2).


Presentation Transcript


  1. University of Science and Technology Faculty of Computer Science and Information Technology Artificial Intelligence (AI) 4th Year B.Sc: Information Technology Academic Year: 2017-2018 Instructor: Diaa Eldin Mustafa Ahmed Intelligent Agents (IA) - (2/2) AI - (2017-2018) - Diaa Eldein Mustafa - Lecture (2) - Intelligent Agent (2/2)

  2. Types of agent programs • Russell & Norvig (2003) group agents into six classes based on their degree of perceived intelligence and capability: • Table-driven agents • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents • Learning agents • These classes increase in complexity, efficiency, adaptability, and level of sophistication.

  3. Table-Driven Agents • A trivial agent program: it keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do. • The designers must construct a table that contains the appropriate action for every possible percept sequence.

  4. Table-Driven Agents • Problems: • The table can become very large. • It usually takes the designer a very long time to specify it (or the agent to learn it). • No autonomy. • Practically impossible to build for any non-trivial environment.
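The table-lookup idea above can be sketched in a few lines of Python. This is a minimal sketch, assuming a toy vacuum-world percept format (location, status); the table entries are illustrative, not from the lecture:

```python
# A minimal sketch of a table-driven agent: the entire percept history
# is the key into a designer-built action table.
def make_table_driven_agent(table):
    percepts = []  # the full percept sequence seen so far

    def agent(percept):
        percepts.append(percept)
        # Index into the table with the entire percept history.
        return table.get(tuple(percepts), "NoOp")

    return agent

# The designer must enumerate an action for every possible percept
# sequence; already unwieldy for this tiny two-square world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
```

Note how the table grows with the length of the percept history, which is exactly why the slide calls this design practically impossible.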

  5. Simple Reflex Agents • Example rule: if (you see the car in front's brake lights) then apply the brakes. • [Figure: the simple reflex agent bases its decision on the current state only, whereas the table-driven agent looks up the entire percept history.]

  6. Simple Reflex Agents • Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. • They suit environments that are fully observable, deterministic, static, episodic, discrete, and single-agent. • The agent function is based on the condition-action rule: if condition then action, e.g.: • If (you see the car in front's brake lights) then apply the brakes. • The agent simply takes in a percept, determines which action applies, and performs that action.
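The brake-lights rule above can be written as a one-rule reflex agent. This is an illustrative sketch; the percept and action names are assumptions:

```python
def simple_reflex_agent(percept):
    # Acts on the current percept only; no percept history is kept.
    if percept == "car-in-front-is-braking":
        return "apply-brakes"
    return "drive-on"  # illustrative default action
```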

  7. Simple Reflex Agents: Advantages / Disadvantages • Advantages: • Easy to implement. • Uses much less memory than the table-driven agent. • Useful when a quick automated response (i.e. a reflex action) is needed. • Disadvantages: • Simple reflex agents can only react reliably in a fully observable environment. • Partially observable environments get simple reflex agents into trouble. • The set of condition-action rules may be too big to generate and store (chess, for example, has an astronomically large number of states). • A vacuum-cleaner robot with a defective location sensor can fall into infinite loops.

  8. Simple Reflex Agents: Advantages / Disadvantages • Problems: • The rule table is too big to generate and store (e.g. for a taxi driver). • It takes a long time to build the table. • Looping: the agent can't base actions on multi-level conditions. • Limited intelligence. • Having a model of the world does not always determine what to do (rationally).

  9. A Simple Reflex Agent in Nature • Percepts: (size, motion) • Rules: • (1) If small moving object, then activate SNAP. • (2) If large moving object, then activate AVOID and inhibit SNAP. • ELSE (not moving) then NOOP. • Actions: SNAP, AVOID, or NOOP (needed for completeness).
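The three rules above map directly onto a small reflex function; this sketch assumes string-valued percepts for illustration:

```python
def frog_reflex_agent(size, moving):
    # Rules from the slide: small moving object -> SNAP,
    # large moving object -> AVOID (which inhibits SNAP),
    # not moving -> NOOP (needed for completeness).
    if moving and size == "small":
        return "SNAP"
    if moving and size == "large":
        return "AVOID"
    return "NOOP"
```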

  10. Simple Reflex Agents: Vacuum-Cleaner World Agent • Environment: squares A and B. • Percepts: location and state of the environment, e.g., [A, Dirty], [B, Clean]. • Actions: Left, Right, Suck, NoOp. • Agent function: a mapping from percepts to actions. • Agent Program: • procedure Reflex-Vacuum-Agent [location, status] returns an action • if status = Dirty then return Suck • else if location = A then return Right • else if location = B then return Left
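The slide's pseudocode translates directly into runnable Python (the NoOp fallback is added so the function is total):

```python
def reflex_vacuum_agent(location, status):
    # Direct translation of the slide's procedure.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"
```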

  11. Model-based Reflex Agents • The agent maintains a description of the current world state by modeling how the world changes and how its actions change the world. • This can work even with partial information. • It is unclear what to do without a clear goal.

  12. Model-based Reflex Agents • The knowledge about "how the world works" is called a model of the world. • An agent that uses such a model is called a model-based agent. • It encodes the "internal state" of the world to remember the past as contained in earlier percepts (a memory of the past). • This is needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. • "State" is used to encode different world states that generate the same immediate percept. • This requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.

  13. Model-based Reflex Agents • Upon getting a percept: • Update the state (given the current state, the action you just did, and the observations). • Choose a rule to apply (one whose conditions match the state). • Schedule the action associated with the chosen rule. • This agent can work even with partial information. • It is unclear what to do without a clear goal. • Table lookup of condition-action pairs defines all possible condition-action rules, e.g.: if (car-in-front-is-braking) then (initiate braking). • It adapts poorly to changes in the environment; the model information must be updated whenever changes occur.
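The update-state / choose-rule / act cycle above might be sketched as follows. The state representation and rule format are assumptions for illustration:

```python
def make_model_based_agent(update_state, rules, initial_state):
    state = {"world": initial_state, "last_action": None}

    def agent(percept):
        # 1. Update the internal model from the last action and the new percept.
        state["world"] = update_state(state["world"], state["last_action"], percept)
        # 2. Choose the first rule whose condition matches the modeled state.
        for condition, action in rules:
            if condition(state["world"]):
                state["last_action"] = action
                return action
        state["last_action"] = "NoOp"
        return "NoOp"

    return agent

# Illustrative vacuum-world instantiation: the model remembers the
# last-seen status of each square, not just the current percept.
def update_state(world, last_action, percept):
    location, status = percept
    return {**world, "loc": location, location: status}

rules = [
    (lambda w: w.get(w["loc"]) == "Dirty", "Suck"),
    (lambda w: w["loc"] == "A", "Right"),
    (lambda w: True, "Left"),
]
```

Keeping the model in a closure rather than a global makes the "internal state" of the slide explicit: it persists across percepts, which is exactly what distinguishes this agent from the simple reflex one.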

  14. Model-based Reflex Agents: Algorithm

  15. Goal-based Agents

  16. Goal-based Agents • With reflex agents, the goal is implicit in the condition-action rules. • With goal-based agents, we make the goal explicit. • This allows us to change the goal easily, and to use search and planning in deciding the best action at any given time. • The heart of the goal-based agent is the search function. • It returns a path from the current state to the goal: a plan. • The agent then returns the first action in the plan. • A simple enhancement would allow the agent to have multiple goals. • For the reflex agent, by contrast, we would have to rewrite many condition-action rules. • The goal-based agent's behavior can easily be changed.
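The plan-then-return-first-action idea can be sketched with breadth-first search standing in for the search function; the slide does not prescribe a particular search algorithm, and the toy state space used in the example is an assumption:

```python
from collections import deque

def search(start, goal, successors):
    # Breadth-first search; returns a plan (list of actions) to the goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # goal unreachable

def goal_based_agent(state, goal, successors):
    # Plan first, then return the first action of the plan.
    plan = search(state, goal, successors)
    return plan[0] if plan else "NoOp"
```

Changing the goal here means changing one argument, whereas a reflex agent would need many rules rewritten, which is the flexibility the slide emphasizes.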

  17. Goal-based Agents • Knowing about the current state of the environment is not always enough to decide what to do (e.g. a decision at a road junction). • The agent needs some sort of goal information that describes situations that are desirable. • The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal. • This usually requires search and planning. • Although the goal-based agent appears less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified. • The reflex agent's rules must be changed for each new situation.

  18. Utility-based Agents

  19. Utility-based Agents • Goals alone are not really enough to generate high-quality behavior in most environments; they just provide a binary distinction between happy and unhappy states. • A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved. • Happy: utility (the quality of being useful). • A utility function maps a state onto a real number that describes the associated degree of happiness.
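A utility function mapping states to real numbers leads directly to a one-line action-selection sketch; the transition model and utility used in the example are illustrative assumptions:

```python
def utility_based_agent(state, actions, result, utility):
    # Choose the action whose predicted resulting state maximizes utility,
    # rather than merely checking whether a goal is reached.
    return max(actions, key=lambda a: utility(result(state, a)))
```

Unlike the binary goal test, the real-valued utility lets the agent trade off between states that are all "good" but not equally so.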

  20. Learning Agents • [Figure: the critic evaluates the current world state; the learning element changes the action rules; the performance element (the "old agent") models the world and decides which action to take; the problem generator suggests explorations.]

  21. Learning Agents • Learning Agent Components: • 1. Learning Element: responsible for making improvements (on whatever aspect is being learned). • 2. Performance Element: responsible for selecting external actions. In previous parts, this was the entire agent! • 3. Critic: gives feedback on how the agent is doing and determines how the performance element should be modified to do better in the future. • 4. Problem Generator: suggests actions that lead to new and informative experiences.
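The four components can be sketched as a small class; the reward convention and the percept-to-action rule format are illustrative assumptions, not part of the lecture:

```python
class LearningAgent:
    def __init__(self, rules):
        self.rules = rules  # condition-action rules used by the performance element

    def performance_element(self, percept):
        # Selects external actions (in earlier slides, this was the entire agent).
        return self.rules.get(percept, "NoOp")

    def critic(self, reward):
        # Feedback: here a negative reward signals the rule should be improved.
        return reward < 0

    def learning_element(self, percept, better_action):
        # Makes improvements by revising the performance element's rule.
        self.rules[percept] = better_action

    def problem_generator(self):
        # Suggests an action that leads to a new, informative experience.
        return "explore"
```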

  22. Summary • Agents interact with environments through actuators and sensors. • The agent function describes what the agent does in all circumstances. • The performance measure evaluates the environment sequence. • A perfectly rational agent maximizes expected performance. • Agent programs implement (some) agent functions. • PEAS descriptions define task environments. • Environments are categorized along several dimensions: observable? deterministic? episodic? static? discrete? single-agent? • Several basic agent architectures exist: reflex, reflex with state, goal-based, utility-based.

  23. Thank You
