Course: Engineering Artificial Intelligence, Dr. Radu Marinescu, Lecture 2
Fundamental issues for most AI problems • Representation • Search • Inference • Planning • Learning
Representation • Facts about the world have to be represented in some way, e.g., mathematical logic • Deals with: • What to represent and how to represent it? • How to structure knowledge? • What is explicit and what must be inferred? • How to encode “rules”? • How to deal with incomplete, inconsistent and probabilistic knowledge? • What kinds of knowledge are required to solve problems?
Search • Many tasks can be viewed as searching a very large problem space for a solution • For example, Tic-Tac-Toe has 765 states, Chess has about 2^50 states, while Go has about 2^100 states
Inference • From some facts others can be inferred (related to search) • For example, knowing “All elephants have trunks” and “Clyde is an elephant,” can we answer the question “Does Clyde have a trunk?” • What about “Peanuts has a trunk, is it an elephant?” Or “Peanuts lives in a tree and has a trunk, is it an elephant?”
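To make the inference step concrete, here is a minimal forward-chaining sketch in Python (an illustration only; the predicate names and the single rule are assumptions, not part of the lecture):

```python
# Tiny fact base: Clyde is an elephant, Peanuts has a trunk.
facts = {("elephant", "Clyde"), ("has_trunk", "Peanuts")}

def forward_chain(facts):
    """Apply the rule 'all elephants have trunks' to derive new facts."""
    derived = set(facts)
    for pred, subj in facts:
        if pred == "elephant":
            derived.add(("has_trunk", subj))
    return derived

facts = forward_chain(facts)
print(("has_trunk", "Clyde") in facts)    # True: Clyde has a trunk
print(("elephant", "Peanuts") in facts)   # False: having a trunk does not
                                          # license the converse inference
```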
Learning and planning • Learning • Learn new facts about the world: e.g., machine learning • Planning • Starting with general facts about the world, facts about the effects of basic actions, facts about a particular situation, and a statement of a goal, generate a strategy for achieving that goal in terms of a sequence of primitive steps or actions
Fundamental issues for most AI problems • Representation • Search • Inference • Planning • Learning
Search • Main idea: Search allows exploring alternatives • Background • State space representation • Uninformed vs. informed • Any path vs. optimal path • Implementation and performance
Trees and graphs [Figure: a tree with a root, links (edges), and terminal (leaf) nodes, illustrating the relationships B is parent of C, C is child of B, A is ancestor of C, C is descendant of A. Companion figures show a directed graph (one-way streets) and an undirected graph (two-way streets).]
Examples of graphs [Figure: airline routes connecting Boston, San Fran, Wash DC, Dallas, and LA; and a graph of possible states of the world for planning block-stacking actions such as Put B on C, Put C on A, Put A on C, and Put C on B.]
Problem solving paradigm • What are the states? (All relevant aspects of the problem) • Arrangement of parts (to plan an assembly) • Positions of trucks (to plan package distribution) • Cities (to plan a trip) • Set of facts (e.g., to prove a mathematical theorem) • What are the actions (operators)? (deterministic, discrete) • Assemble two parts • Move a truck to a new position • Fly to a new city • Apply a theorem to derive a new fact • What is the goal test? (Condition for success) • All parts in place • All packages delivered • Reached destination city • Derived goal fact
Example: holiday in Romania • On vacation in Romania, currently in Arad • Flight home leaves tomorrow from Bucharest
Example: holiday in Romania • Goal • Be in Bucharest • State-space • States: various cities • Actions: drive between cities • Solution • Sequence of actions to destination
Solution to the holiday problem Solution: go(Sibiu), go(Fagaras), go(Bucharest) Cost: 140 + 99 + 211 = 450
State-space problem formulation • A problem is defined by 4 items: • Initial state: e.g., in(Arad) • Actions or successor function: S(X) = set of action-state pairs • e.g., S(Arad) = {<go(Sibiu), in(Sibiu)>, <go(Zerind), in(Zerind)>, <go(Timisoara), in(Timisoara)>} • Goal test: e.g., in(Bucharest) • Path cost (additive) • e.g., sum of distances to drive • A solution is a sequence of actions leading from the initial state to a goal state
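As a rough sketch (not the lecture's code), the four items for the Romania problem can be written down directly in Python; only a handful of cities from the map are included, and the helper names are illustrative:

```python
# Illustrative sketch of the Romania state-space formulation (partial map only).
road_map = {
    "Arad":    {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu":   {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}

initial_state = "Arad"

def successors(city):
    """Successor function S(X): set of <action, resulting state> pairs."""
    return {(f"go({nxt})", nxt) for nxt in road_map.get(city, {})}

def goal_test(city):
    return city == "Bucharest"

def path_cost(path):
    """Additive path cost: sum of distances driven along the path."""
    return sum(road_map[a][b] for a, b in zip(path, path[1:]))

print(successors("Arad"))
print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140 + 99 + 211 = 450
```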
Vacuum cleaner state space • States: location of dirt and robot • Initial state: any • Actions: move robot left, right, and suck • Goal state: no dirt at all locations • Path cost: 1 per action
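As one possible encoding (an assumption, not given on the slide), a two-cell vacuum world state can be written as a pair of robot location and the set of dirty cells:

```python
# One possible encoding of the two-cell vacuum world (illustrative only):
# a state is (robot_location, frozenset of dirty locations).
def successors(state):
    loc, dirt = state
    yield "left",  ("A", dirt)          # move robot to the left cell
    yield "right", ("B", dirt)          # move robot to the right cell
    yield "suck",  (loc, dirt - {loc})  # remove dirt from the current cell

def goal_test(state):
    return not state[1]                 # no dirt at any location

start = ("A", frozenset({"A", "B"}))
print(goal_test(start))                 # False: both cells are still dirty
```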
8-queen puzzle state space • States: arrangements of n ≤ 8 queens in the leftmost n columns, 1 per column, such that no queen attacks another • Initial state: no queens on the board • Actions: add a queen to the leftmost empty column such that it is not attacked by any other queen • Goal state: 8 queens on the board, none attacked • Path cost: 1 per action
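A small sketch of this incremental formulation (illustrative code, not from the lecture): a state is a tuple of row positions, one per filled column, leftmost first:

```python
# Sketch of the 8-queens incremental formulation.
N = 8

def attacks(r1, c1, r2, c2):
    """True if queens at the two (row, column) positions attack each other."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Rows where a queen may be added to the leftmost empty column."""
    col = len(state)
    return [row for row in range(N)
            if not any(attacks(row, col, r, c) for c, r in enumerate(state))]

def goal_test(state):
    return len(state) == N   # 8 non-attacking queens placed

print(actions(()))        # empty board: any of the 8 rows
print(actions((0, 2)))    # queens already placed in columns 0 and 1
```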
Sliding tile puzzle state space • States • Initial state • Actions • Goal state • Path cost Try it yourselves
Sliding tile puzzle state space • States: locations of tiles • Initial state: given (left) • Actions: move blank left, right, up, down • Goal state: given (right) • Path cost: 1 per action
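For instance, the successor function for the 8-puzzle might be sketched as follows (an illustration; the 3x3 board is stored row by row as a tuple, with 0 for the blank):

```python
# Sketch of the sliding-tile (8-puzzle) successor function.
MOVES = {"left": -1, "right": +1, "up": -3, "down": +3}

def successors(state):
    """Yield (action, new_state) pairs obtained by moving the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Skip moves that would push the blank off the board.
        if action == "left" and col == 0: continue
        if action == "right" and col == 2: continue
        if action == "up" and row == 0: continue
        if action == "down" and row == 2: continue
        new = list(state)
        new[blank], new[blank + delta] = new[blank + delta], new[blank]
        yield action, tuple(new)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
for action, nxt in successors(start):
    print(action, nxt)   # each move costs 1
```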
Search algorithms • Basic idea • Exploration of the state space graph by generating successors of already-explored states (a.k.a. expanding states) • Every state is evaluated: is it a goal state?
Terminology • State – Used to refer to the vertices in the underlying graph that is being searched, that is, states in the problem domain, for example, a city, an arrangement of blocks, or the arrangement of parts in a puzzle • Search node – Refers to the vertices in the search tree that is being generated by the search algorithm. Each node refers to a state of the world; many nodes may refer to the same state. • Importantly, a node implicitly represents a path (from the start state of the search tree to the state associated with the node). Because search nodes are part of the search tree, each has a unique parent node (except for the root node)
Terminology: more details • A state is (a representation of) a physical configuration • A node is a data structure constituting part of a search tree; it contains info such as: state, parent node, action, path cost g(x), depth
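A minimal sketch of such a node data structure, assuming exactly the fields listed above (field names are illustrative):

```python
# Sketch of a search-node data structure with the fields listed on the slide.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                    # the problem state this node refers to
    parent: Optional["Node"] = None  # unique parent node (None for the root)
    action: Optional[str] = None     # action that produced this state
    path_cost: float = 0.0           # g(x): cost of the path from the root
    depth: int = 0

    def path(self):
        """Recover the path implicitly represented by this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```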
Search strategies • A search strategy is defined by picking the order of node expansion • Search strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the search tree
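For a rough sense of why b and d matter (a worked note, not from the slide): a tree with branching factor b and depth d contains at most 1 + b + b^2 + … + b^d = (b^(d+1) − 1)/(b − 1) nodes, which is O(b^d); this is the form in which time and space bounds for the basic strategies are usually quoted.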
Classes of search • Any path search • Uninformed search • Informed search • Optimal path search • Uninformed search • Informed search
Simple search algorithm • A search node is a path from some state X to the start state, e.g., (X B A S) • The state of a search node is the most recent state of the path, e.g., X • Let Q be a list of search nodes, e.g., ((X B A S) (C B A S) …) • Let S be the start state
1. Initialize Q with the search node (S) as the only entry; set Visited = (S)
2. If Q is empty, fail. Else, pick some node N from Q
3. If state(N) is a goal, return N (we've reached the goal)
4. (Otherwise) Remove N from Q
5. Find all descendants of state(N) not in Visited and create all the one-step extensions of N to each descendant
6. Add the extended paths to Q; add children of state(N) to Visited
7. Go to step 2
Critical decisions: Step 2: picking N from Q; Step 6: adding extensions of N to Q
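A minimal Python sketch of this generic algorithm (illustrative, not the lecture's own code): the depth_first flag stands in for the critical decision in step 6, while step 2 here always picks the first element of Q:

```python
from collections import deque

def simple_search(start, successors, is_goal, depth_first=True):
    """Generic search; a search node is stored as a path (list of states).

    successors(state) -> iterable of neighbouring states
    is_goal(state)    -> True if state is a goal
    depth_first       -> step 6: add extensions to the front (True) or end (False) of Q
    """
    Q = deque([[start]])              # Q holds search nodes, i.e. paths from the start
    visited = {start}
    while Q:
        node = Q.popleft()            # step 2: pick (and step 4: remove) the first node N
        state = node[-1]              # state(N): most recent state on the path
        if is_goal(state):            # step 3
            return node
        extensions = []
        for nxt in successors(state): # step 5: one-step extensions to unvisited states
            if nxt not in visited:
                visited.add(nxt)      # step 6: mark children of state(N) as visited
                extensions.append(node + [nxt])
        if depth_first:
            Q.extendleft(reversed(extensions))   # front of Q -> depth-first
        else:
            Q.extend(extensions)                 # end of Q  -> breadth-first
    return None                       # Q is empty: fail

# Example on a small graph (node names are illustrative):
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["D", "G"],
         "C": [], "D": ["C", "G"], "G": []}
print(simple_search("S", lambda s: graph[s], lambda s: s == "G", depth_first=True))
```

With depth_first=True new paths go to the front of Q (Q behaves like a stack); with depth_first=False they go to the end (a queue), which matches the two strategies described on the next slide.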
Implementing the search strategies • Depth-first • Pick first element of Q • Add path extensions to front of Q • Breadth-first • Pick first element of Q • Add path extensions to end of Q
Terminology • Visited – a state M is first visited when a path to M first gets added to Q. In general, a state is said to have been visited if it has ever shown up in a search node in Q. The intuition is that we have briefly “visited” them to place them in Q, but we have not yet examined them carefully. • Expanded – a state M is expanded when it is the state of a search node that is pulled off of Q. At that point, the descendants of M are visited and the path that led to M is extended to the eligible descendants. We sometimes refer to the search node that led to M (instead of M itself) as being expanded. However, once a node is expanded we are done with it; we will not need to expand it again. In fact, we discard it from Q
Depth-First Pick first element of Q; Add path extensions to front of Q [Figure sequence: step-by-step depth-first expansion of the example graph with states S, A, B, C, D, G; numbers indicate the order in which paths are generated; added paths are shown in blue. The paths are shown in reversed order; the node's state is the first entry.]
Depth-First: another (easier?) way to see it [Figure sequence: the same depth-first search drawn as a search tree rooted at S; numbers indicate the order in which nodes are pulled off of Q (expanded); blue fill = Visited & Expanded, gray fill = Visited. Note that C is not visited again once it has already been visited.]
Implementing the search strategies • Depth-first • Pick first element of Q • Add path extensions to front of Q • Breadth-first • Pick first element of Q • Add path extensions to end of Q
Breadth-First Pick first element of Q; Add path extensions to end of Q [Figure sequence: step-by-step breadth-first expansion of the same graph (S, A, B, C, D, G); added paths are shown in blue; the paths are shown in reversed order, with the node's state as the first entry. We could have stopped as soon as the first path to the goal was generated.]