LOGICAL AGENTS Tuğçe ÜSTÜNER Artificial Intelligence IES 503 ‘In which we introduce a logic that is sufficient for building knowledge-based agents!’
Contents • Introduction • Knowledge-Based Agents • Syntax and Semantics • Entailment • Logical Agents for the Wumpus World • Inference • Propositional Logic • Wumpus World Sentences • Logical Equivalence • Important Equivalences • Validity and Satisfiability • Resolution • Normal Forms • Forward and Backward Chaining • Conclusion
INTRODUCTION • The topic of this chapter is the representation of knowledge and the reasoning process that brings knowledge to life. • Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would otherwise be very hard to achieve. • The knowledge of problem-solving agents is very specific and inflexible. • Logic will be the primary vehicle for representing knowledge. The knowledge of logical agents is always definite: each proposition is either true or false in the world.
Knowledge-Based Agents • Humans can know “things” and “reason” • Representation: How are the things stored? • Reasoning: How is the knowledge used? • To solve a problem… • To generate more knowledge… • Knowledge and reasoning are important to artificial agents because they enable successful behaviors difficult to achieve otherwise • Useful in partially observable environments • Can benefit from knowledge in very general forms, combining and recombining information
Knowledge-Based Agents • Central component of a Knowledge-Based Agent is a Knowledge-Base • A set of sentences in a formal language • Sentences are expressed using a knowledge representation language • Two generic functions: • TELL - add new sentences (facts) to the KB • “Tell it what it needs to know” • ASK - query what is known from the KB • “Ask what to do next”
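The TELL/ASK interface described above can be sketched in a few lines of Python. This is only an illustrative skeleton (the class and method names are not from the slides), with ASK reduced to naive membership where a real agent would run inference:

```python
# A minimal sketch of the TELL/ASK knowledge-base interface.
class KnowledgeBase:
    def __init__(self):
        self.sentences = []  # the set of sentences in the formal language

    def tell(self, sentence):
        """TELL: add a new sentence (fact) to the KB."""
        self.sentences.append(sentence)

    def ask(self, query):
        """ASK: here just membership; a real agent would run inference."""
        return query in self.sentences


kb = KnowledgeBase()
kb.tell("the shirt is green")
print(kb.ask("the shirt is green"))  # True
print(kb.ask("the shirt is blue"))   # False
```

In a full agent, ASK would invoke an inference procedure over the stored sentences rather than a lookup.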
Knowledge-Based Agents • The agent must be able to: • Represent states and actions • Incorporate new percepts • Update internal representations of the world • Deduce hidden properties of the world • Deduce appropriate actions • Inference Engine: domain-independent algorithms • Knowledge-Base: domain-specific content
Knowledge-Based Agents • Declarative • You can build a knowledge-based agent simply by “TELLing” it what it needs to know • Procedural • Encode desired behaviors directly as program code • Minimizing the role of explicit representation and reasoning can result in a much more efficient system
Syntax and Semantics • Logics are formal languages for representing information such that conclusions can be drawn • Syntax defines the sentences in the language • Semantics define the “meaning” of sentences • A term is a logical expression that refers to an object • An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms.
Syntax and Semantics Example: - Syntax: • x+2 ≥ y is a sentence; • x2+y > is not a sentence - Semantics: • x+2 ≥ y is true iff the number x+2 is no less than the number y • x+2 ≥ y is true in a world where x = 7, y = 1 • x+2 ≥ y is false in a world where x = 0, y = 6
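The semantics example above can be checked directly: a "world" fixes values for x and y, and the sentence is evaluated in that world. A small illustrative sketch:

```python
# Evaluate the slide's sentence x + 2 >= y in a given world (x, y).
def holds(x, y):
    return x + 2 >= y

print(holds(7, 1))  # True:  7 + 2 = 9 >= 1
print(holds(0, 6))  # False: 0 + 2 = 2 <  6
```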
Entailment • Definition: Knowledge base (KB) entails sentence α (alpha) if and only if α is true in all worlds where KB is true • Notation: KB ╞ α ‘Entailment is a relationship between sentences that is based on semantics’
Entailment • Example: The KB containing ‘the shirt is green’ and ‘the shirt is striped’ entails ‘the shirt is green or the shirt is striped’. • Example: x+y=4 entails 4=x+y • Models: Models are formally structured worlds, with respect to which truth can be evaluated. • m is a model of a sentence α if α is true in m • M(α) is the set of all models of α • KB ╞ α if and only if M(KB) ⊆ M(α) • Example: KB = The shirt is green and striped, α = The shirt is green
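The shirt example can be verified by enumerating models, exactly as the definition M(KB) ⊆ M(α) suggests. An illustrative sketch (sentences are represented as Python predicates over a model dictionary; these names are not from the slides):

```python
from itertools import product

symbols = ["green", "striped"]

def kb(m):      # KB: the shirt is green AND the shirt is striped
    return m["green"] and m["striped"]

def alpha(m):   # query: the shirt is green
    return m["green"]

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha holds in every model of KB."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):  # a model of KB where alpha fails
            return False
    return True

print(entails(kb, alpha, symbols))  # True
```

Swapping in a query that KB does not force, such as ‘the shirt is not striped’, makes the check return False.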
Wumpus World • Performance Measure • Gold +1000, Death -1000 • Step -1, Use arrow -10 • Environment • Squares adjacent to the Wumpus are smelly • Squares adjacent to a pit are breezy • Glitter iff gold is in the same square • Shooting kills the Wumpus if you are facing it • Shooting uses up the only arrow • Grabbing picks up the gold if in the same square • Releasing drops the gold in the same square • Actuators • Left turn, right turn, forward, grab, release, shoot • Sensors • Breeze, glitter, and smell
Wumpus World • Characterization of the Wumpus World • Observable • Partial, only local perception • Deterministic • Yes, outcomes are specified • Episodic • No, sequential at the level of actions • Static • Yes, the Wumpus and pits do not move • Discrete • Yes • Single Agent • Yes
Wumpus World • KB = wumpus-world rules + observations (Three slides stepped through the agent's exploration of the grid with board diagrams; the figures are not reproduced here.)
Inference • KB ├i α = sentence α can be derived from KB by procedure i (i is an algorithm that derives α from KB) • Soundness: i is sound if whenever KB ├i α, it is also true that KB ╞ α • Completeness: i is complete if whenever KB ╞ α, it is also true that KB ├i α
Propositional Logic • Propositional symbols: A, B, P1, P2, ShirtIsGreen are atomic sentences. • If S, S1, S2 are sentences, then ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, and S1 ⇔ S2 are also sentences. • Propositional models: each model specifies true/false for each proposition symbol.
Wumpus World Sentences • Propositional symbols: Pi,j means ‘there is a pit in [i,j]’; Bi,j means ‘there is a breeze in [i,j]’ Sentences: ‘Pits cause breezes in adjacent squares’; a square is breezy if and only if there is an adjacent pit, e.g. B1,1 ⇔ (P1,2 ∨ P2,1)
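These sentences, combined with a percept, already support useful deduction. An illustrative model-enumeration sketch (symbol names P12, P21, B11 are abbreviations for the slide's Pi,j and Bi,j): given the breeze rule and the percept "no breeze in [1,1]", the KB entails that there is no pit in [1,2].

```python
from itertools import product

symbols = ["P12", "P21", "B11"]

def kb(m):
    # Rule: breeze in [1,1] iff a pit in an adjacent square,
    # plus the percept: no breeze in [1,1].
    rule = m["B11"] == (m["P12"] or m["P21"])
    percept = not m["B11"]
    return rule and percept

def alpha(m):          # query: there is no pit in [1,2]
    return not m["P12"]

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha holds in every model of KB."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False
    return True

print(entails(kb, alpha, symbols))  # True: [1,2] is provably pit-free
```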
Logical Equivalence • Two sentences α and β are logically equivalent, denoted α ≡ β, if they are true in the same models: α ≡ β if and only if α ╞ β and β ╞ α
Validity and Satisfiability • A sentence is valid if it is true in all models, e.g. A ∨ ¬A • A sentence is satisfiable if it is true in some model, e.g. A ∨ B • A sentence is unsatisfiable if it is true in no models, e.g. A ∧ ¬A
Validity and Satisfiability • Connects validity and unsatisfiability: α is valid if and only if ¬α is unsatisfiable • Connects inference and unsatisfiability: KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
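The first connection above gives a direct recipe: to test validity, test the negation for satisfiability. A brute-force illustrative sketch (sentences as Python predicates over a model dictionary):

```python
from itertools import product

def satisfiable(s, symbols):
    """True if sentence s holds in at least one model."""
    return any(s(dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

def valid(s, symbols):
    """s is valid iff its negation is unsatisfiable."""
    return not satisfiable(lambda m: not s(m), symbols)

syms = ["A"]
print(valid(lambda m: m["A"] or not m["A"], syms))         # True:  A or not A
print(satisfiable(lambda m: m["A"] and not m["A"], syms))  # False: A and not A
```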
Resolution • There are two kinds of proof methods: application of inference rules and model checking. • Application of inference rules: legitimate (sound) generation of new sentences from old. A proof is a sequence of inference rule applications; the rules can be used as operators in a standard search algorithm. Typically, these algorithms require transformation of sentences into a normal form. • Model checking: • truth table enumeration (always exponential in n) • backtracking and improved backtracking • heuristic search in model space (sound but incomplete)
Normal Forms • Literal: an atomic sentence (propositional symbol) or the negation of an atomic sentence • Clause: a disjunction of literals • Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals
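As a worked example (not from the slides), the biconditional B ⇔ (P ∨ Q) converts to CNF in four standard steps: eliminate ⇔, eliminate ⇒, push ¬ inward with De Morgan's laws, and distribute ∨ over ∧, giving (¬B ∨ P ∨ Q) ∧ (¬P ∨ B) ∧ (¬Q ∨ B). A brute-force check over all eight models confirms the two forms are equivalent:

```python
from itertools import product

def biconditional(b, p, q):
    # Original sentence: B <=> (P or Q)
    return b == (p or q)

def cnf(b, p, q):
    # CNF: (not B or P or Q) and (not P or B) and (not Q or B)
    return (not b or p or q) and (not p or b) and (not q or b)

# The two forms agree in every model, so the conversion preserves meaning.
print(all(biconditional(*m) == cnf(*m)
          for m in product([True, False], repeat=3)))  # True
```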
Resolution Algorithm • In mathematical logic, resolution is a rule of inference that leads to a refutation theorem-proving technique for sentences in propositional logic. • In other words, iteratively applying the resolution rule in a suitable way allows one to tell whether a propositional formula is satisfiable and to prove that a first-order formula is unsatisfiable. • The method may prove satisfiability, but not always; this is the case for all methods for first-order logic.
Resolution • Example: • Wetness is high and the weather is cloudy. • If the weather is cloudy, it will rain. • If wetness is high, the weather is hot. • The weather is not hot. In CNF (with W = wetness is high, C = cloudy, R = rain, H = hot): W ∧ C ∧ (¬C ∨ R) ∧ (¬W ∨ H) ∧ ¬H
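A minimal resolution refuter (an illustrative sketch, not an algorithm from the slides) shows that the four statements above are jointly contradictory: resolving W with ¬W ∨ H yields H, which clashes with ¬H to produce the empty clause.

```python
from itertools import combinations

# Literals are strings like "C" or "-C"; clauses are frozensets of literals.
def resolve(c1, c2):
    """All resolvents of two clauses (one per complementary pair)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def resolution_refutes(clauses):
    """True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:         # no new clauses: the set is satisfiable
            return False
        clauses |= new

# The slide's KB in clause form: W, C, (-C or R), (-W or H), -H
kb = [frozenset(s) for s in ({"W"}, {"C"}, {"-C", "R"}, {"-W", "H"}, {"-H"})]
print(resolution_refutes(kb))  # True: the KB is contradictory
```

To prove KB ╞ α rather than refute the KB itself, one would add the clauses of ¬α and run the same loop, per the connection between inference and unsatisfiability.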
Forward and Backward Chaining • Forward chaining is one of the two main methods of reasoning when using inference rules in artificial intelligence, and it can be described logically. Forward chaining is a popular implementation strategy for expert systems and for business and production rule systems. The opposite of forward chaining is backward chaining. • Forward chaining starts with the available data and uses inference rules to extract more data until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose ‘if’ clause is known to be true. When such a rule is found, the engine can conclude, or infer, the ‘then’ clause, resulting in the addition of new information to its data. • The inference engine iterates through this process until a goal is reached.
Forward Chaining • Suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules: • If X croaks and eats flies - Then X is a frog • If X chirps and sings - Then X is a canary • If X is a frog - Then X is green • If X is a canary - Then X is yellow
Forward Chaining • Let us illustrate forward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts: • Fritz croaks • Fritz eats flies • Tweety eats flies • Tweety chirps • Tweety is yellow
Forward Chaining With forward reasoning, the computer can derive that Fritz is green in four steps: 1. Fritz croaks and Fritz eats flies Based on logic, the computer can derive: 2. Fritz croaks and eats flies Based on rule 1, the computer can derive: 3. Fritz is a frog Based on rule 3, the computer can derive: 4. Fritz is green.
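The Fritz derivation above can be sketched as a small forward chainer. This is an illustrative simplification (the data structures are not from the slides): rules pair a set of premise predicates with a conclusion predicate, facts are (subject, predicate) pairs, and the loop fires every applicable rule until nothing new is added.

```python
# The slide's rule base: (premises, conclusion) over predicates.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]

# The slide's facts as (subject, predicate) pairs.
facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def forward_chain(rules, facts):
    """Fire every rule whose premises all hold until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        subjects = {s for s, _ in facts}
        for premises, conclusion in rules:
            for x in subjects:
                if (all((x, p) in facts for p in premises)
                        and (x, conclusion) not in facts):
                    facts.add((x, conclusion))
                    changed = True
    return facts

derived = forward_chain(rules, facts)
print(("Fritz", "is green") in derived)  # True
```

Note that Tweety never becomes a canary: she chirps but is not known to sing, so rule 2 never fires, just as in a real data-driven run.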
Backward Chaining • The name "forward chaining" comes from the fact that the computer starts with the data and reasons its way to the answer; backward chaining works the other way around. • In a forward-chaining derivation, the rules are used in the reverse order compared to backward chaining. • Because the data determines which rules are selected and used, forward chaining is called data-driven, in contrast to goal-driven backward-chaining inference. • One advantage of forward chaining over backward chaining is that the reception of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.
Backward Chaining • Example: Suppose that the goal is to conclude whether Tweety or Fritz is a frog, given information about each of them, and that the rule base contains the following four rules: • If X croaks and eats flies - Then X is a frog • If X chirps and sings - Then X is a canary • If X is a frog - Then X is green • If X is a canary - Then X is yellow • Let us illustrate backward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts: • Fritz croaks • Fritz eats flies • Tweety eats flies • Tweety chirps • Tweety is yellow
Backward Chaining • With backward reasoning, the computer can answer the question "Who is a frog?" in four steps. In its reasoning, the computer uses a placeholder: 1. ? is a frog Based on rule 1, the computer can derive: 2. ? croaks and eats flies Based on logic, the computer can derive: 3. ? croaks and ? eats flies Based on the facts, the computer can derive: 4. Fritz croaks and Fritz eats flies • This derivation causes the computer to produce Fritz as the answer to the question "Who is a frog?". • The computer has not used any knowledge about Tweety to conclude that Fritz is a frog.
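A recursive sketch of backward chaining over the same rule base (restated so the example is self-contained). The placeholder "?" is handled here by simply trying each known individual as the goal's subject, an illustrative simplification of real unification:

```python
# The slide's rules and facts, restated.
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]
facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def backward_chain(pred, x, rules, facts):
    """Does (x, pred) hold? Check the facts, then any rule concluding pred,
    recursively proving each of that rule's premises."""
    if (x, pred) in facts:
        return True
    return any(conclusion == pred and
               all(backward_chain(p, x, rules, facts) for p in premises)
               for premises, conclusion in rules)

# "Who is a frog?" -- try each known individual as the placeholder.
subjects = {s for s, _ in facts}
frogs = [x for x in subjects if backward_chain("is a frog", x, rules, facts)]
print(frogs)  # ['Fritz']
```

As on the slide, only Fritz's facts are ever consulted when proving that Fritz is a frog; Tweety fails at the very first premise ("croaks") and contributes nothing.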
Forward & Backward Chaining • FC is data-driven: automatic, unconscious processing • May do lots of work that is irrelevant to the goal • BC is goal-driven, appropriate for problem solving • Complexity of BC can be much less than linear in the size of the KB
CONCLUSION • Logical agents apply inference to a knowledge base to derive new information and make decisions • Basic concepts of logic are syntax, semantics, entailment, inference, soundness and completeness. • The Wumpus world requires the ability to represent partial and negated information and to reason by cases. • Resolution is sound and complete for propositional logic. • Propositional logic lacks expressive power.