Problem Solving Russell and Norvig: Chapter 3 CSMSC 421 – Fall 2006
Problem-Solving Agent
[Figure: an agent connected to its environment through sensors (percepts in) and actuators (actions out)]
Problem-Solving Agent
[Figure: the same agent–environment diagram]
• Formulate Goal
• Formulate Problem
  • States
  • Actions
• Find Solution
Holiday Planning • On holiday in Romania; Currently in Arad. Flight leaves tomorrow from Bucharest. • Formulate Goal: Be in Bucharest • Formulate Problem: States: various cities Actions: drive between cities • Find solution: Sequence of cities: Arad, Sibiu, Fagaras, Bucharest
Problem Solving
[Figure: from a Start state, through States and Actions, to a Goal; the path found is the Solution]
Problem-solving agent
• Four general steps in problem solving:
• Goal formulation
  • What are the successful world states?
• Problem formulation
  • What actions and states to consider, given the goal?
• Search
  • Determine possible sequences of actions that lead to states of known value, then choose the best sequence
• Execute
  • Given the solution, perform the actions
Problem-solving agent
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation
  state ← UPDATE-STATE(state, percept)
  if seq is empty then
    goal ← FORMULATE-GOAL(state)
    problem ← FORMULATE-PROBLEM(state, goal)
    seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
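The control flow above can be sketched in Python. This is a minimal rendering, not the book's implementation: the four subroutines are passed in as plain functions whose names mirror the slide's uppercase procedures but are otherwise hypothetical stand-ins.

```python
def make_agent(update_state, formulate_goal, formulate_problem, search):
    """Build a simple problem-solving agent from four user-supplied functions."""
    state, seq = None, []          # the 'static' variables of the pseudocode

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                              # no plan left: plan from scratch
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = list(search(problem) or [])
        return seq.pop(0) if seq else None       # FIRST(seq); REST(seq) stays queued

    return agent
```

A trivial instantiation (constant goal, canned two-step plan) exercises the key behaviour: the agent executes its plan one action per percept and reformulates only when the plan runs out.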
Assumptions Made (for now) • The environment is static • The environment is discretizable • The environment is observable • The actions are deterministic
Problem formulation
• A problem is defined by:
• An initial state, e.g. Arad
• A successor function S(x) = set of action–state pairs
  • e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
  • initial state + successor function = state space
• A goal test, which can be
  • Explicit, e.g. x = 'at Bucharest'
  • Implicit, e.g. checkmate(x)
• A path cost (additive)
  • e.g. sum of distances, number of actions executed, …
  • c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions from the initial state to a goal state. An optimal solution has the lowest path cost.
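One way to carry this definition into code is a small container holding the four components. A sketch, using a fragment of the Romania road map (Arad–Sibiu 140 km, Sibiu–Fagaras 99 km, Fagaras–Bucharest 211 km); the action encoding `("drive", city)` is an illustrative choice, not prescribed by the slide:

```python
class Problem:
    """Initial state, successor function, goal test, step cost c(x, a, y)."""
    def __init__(self, initial, successors, goal_test, step_cost):
        self.initial = initial
        self.successors = successors      # state -> iterable of (action, state)
        self.goal_test = goal_test
        self.step_cost = step_cost        # assumed >= 0

    def path_cost(self, states):
        """Additive cost along a state sequence."""
        return sum(self.step_cost(x, ("drive", y), y)
                   for x, y in zip(states, states[1:]))

roads = {("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
         ("Fagaras", "Bucharest"): 211}

romania = Problem(
    initial="Arad",
    successors=lambda s: [(("drive", b), b) for (a, b) in roads if a == s],
    goal_test=lambda s: s == "Bucharest",
    step_cost=lambda x, a, y: roads[(x, y)],
)

print(romania.path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```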
Selecting a state space
• The real world is absurdly complex; the state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
  • e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.
• The abstraction is valid if any abstract path corresponds to a path in the real world
• (Abstract) solution = set of real paths that are solutions in the real world
• Each abstract action should be "easier" than the original problem
Example: vacuum world • States?? • Initial state?? • Actions?? • Goal test?? • Path cost??
Example: vacuum world
• States?? Two locations, each with or without dirt: 2 × 2² = 8 states
• Initial state?? Any state can be initial
• Actions?? {Left, Right, Suck}
• Goal test?? Check whether both squares are clean
• Path cost?? Number of actions to reach the goal
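These answers translate directly into code. A sketch, encoding a state as (robot location, dirt in A, dirt in B), which yields the 2 × 2 × 2 = 8 states counted above:

```python
from itertools import product

# state = (location, dirt_in_A, dirt_in_B)
STATES = list(product("AB", (True, False), (True, False)))   # all 8 states

def successors(state):
    loc, dirt_a, dirt_b = state
    results = [("Left",  ("A", dirt_a, dirt_b)),             # move (or stay at A)
               ("Right", ("B", dirt_a, dirt_b))]             # move (or stay at B)
    if loc == "A":
        results.append(("Suck", ("A", False, dirt_b)))       # clean square A
    else:
        results.append(("Suck", ("B", dirt_a, False)))       # clean square B
    return results

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b                         # both squares clean

print(len(STATES))  # 8
```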
Example: 8-puzzle • States?? • Initial state?? • Actions?? • Goal test?? • Path cost??
Example: 8-puzzle
• States?? Integer locations of the tiles (and the blank)
• Initial state?? Any state can be initial
• Actions?? {Left, Right, Up, Down}: movements of the blank square
• Goal test?? Check whether the goal configuration is reached
• Path cost?? Number of actions to reach the goal
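A sketch of this formulation, with the state stored as a 9-tuple in row-major order and 0 standing for the blank; reading Left/Right/Up/Down as moves of the blank square is one common convention, assumed here:

```python
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}   # index offsets, 3x3 board

def successors(state):
    i = state.index(0)                      # position of the blank
    result = []
    for action, delta in MOVES.items():
        j = i + delta
        if not 0 <= j < 9:
            continue                        # off the board vertically
        if delta in (-1, +1) and i // 3 != j // 3:
            continue                        # no wrapping across row boundaries
        s = list(state)
        s[i], s[j] = s[j], s[i]             # slide the neighbouring tile
        result.append((action, tuple(s)))
    return result
```

With the blank in the centre all four actions apply; in a corner only two do.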
Example: 8-puzzle
Initial state:    Goal state:
8 2 _             1 2 3
3 4 7             4 5 6
5 1 6             7 8 _
Example: 8-puzzle
[Figure: the state (8 2 _ / 3 4 7 / 5 1 6) and the states reachable from it by one move of the blank]
Example: 8-puzzle
Size of the state space = 9!/2 = 181,440
Search time at 10 million states/sec:
• 8-puzzle: 181,440 states → 0.18 sec
• 15-puzzle: 0.65 × 10^12 states → 6 days
• 24-puzzle: 0.5 × 10^25 states → 12 billion years
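The 8-puzzle figure is easy to verify: exactly half of the 9! tile permutations are reachable from any given state.

```python
import math

size = math.factorial(9) // 2   # reachable 8-puzzle states
print(size)                     # 181440
```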
Example: 8-queens
Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.
[Figure: two boards, a solution and a non-solution]
Example: 8-queens problem Incremental formulation vs. complete-state formulation • States?? • Initial state?? • Actions?? • Goal test?? • Path cost??
Example: 8-queens
• Formulation #1:
• States: any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Actions: add a queen to any square
• Goal test: 8 queens on the board, none attacked
• Path cost: none
64^8 ≈ 2.8 × 10^14 states with 8 queens
Example: 8-queens
• Formulation #2:
• States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
• Goal test: 8 queens on the board
2,057 states
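This state count can be checked by enumerating the formulation directly. The sketch below builds every non-attacking arrangement column by column and tallies states by the number of queens placed; the well-known 92 complete solutions fall out as a by-product.

```python
def count_states(n=8):
    """States of the incremental formulation: k = 0..n non-attacking
    queens in the k leftmost columns of an n x n board."""
    counts = [0] * (n + 1)

    def extend(rows):                    # rows[c] = row of the queen in column c
        k = len(rows)
        counts[k] += 1                   # this partial arrangement is one state
        if k == n:
            return
        for r in range(n):
            # legal iff no shared row and no shared diagonal with earlier queens
            if all(r != rr and abs(r - rr) != k - c
                   for c, rr in enumerate(rows)):
                extend(rows + [r])

    extend([])
    return counts

counts = count_states()
print(sum(counts))    # 2057 states in total
print(counts[8])      # 92 complete solutions
```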
Real-world Problems • Route finding • Touring problems • VLSI layout • Robot Navigation • Automatic assembly sequencing • Drug design • Internet searching • …
Example: robot assembly • States?? • Initial state?? • Actions?? • Goal test?? • Path cost??
Example: robot assembly • States?? Real-valued coordinates of robot joint angles; parts of the object to be assembled. • Initial state?? Any arm position and object configuration. • Actions?? Continuous motion of robot joints • Goal test?? Complete assembly (without robot) • Path cost?? Time to execute
Basic search algorithms
• How do we find the solutions to the previous problems?
• Search the state space (remember: the size of the space depends on the state representation)
• Here: search through explicit tree generation
  • ROOT = initial state
  • Nodes and leaves are generated through the successor function
• In general, search generates a graph (the same state can be reached through multiple paths)
Simple Tree Search Algorithm
function TREE-SEARCH(problem, strategy) returns a solution, or failure
  initialize the search tree using the initial state of problem
  do
    if there are no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the solution
    else expand the node and add the resulting nodes to the search tree
  enddo
Search of State Space
[Figure: a state space with the corresponding search tree overlaid]
Take home points
• Difference between state space and search tree
• Blind search
• Learn names
State space vs. search tree
• A state is (a representation of) a physical configuration
• A node is a data structure belonging to a search tree
  • A node has a parent and children, and includes path cost, depth, …
  • Here node = <state, parent-node, action, path-cost, depth>
• FRINGE = the set of generated nodes that are not yet expanded
Tree search algorithm
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
Tree search algorithm (2)
function EXPAND(node, problem) returns a set of nodes
  successors ← the empty set
  for each <action, result> in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result
    PARENT-NODE[s] ← node
    ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors
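TREE-SEARCH and EXPAND together translate into a short Python sketch. The node follows the five-field layout of the earlier slide; the fringe is a plain list whose removal index stands in for the strategy (popping the front gives breadth-first order, popping the back gives depth-first).

```python
from collections import namedtuple

Node = namedtuple("Node", "state parent action path_cost depth")

def expand(node, successor_fn, step_cost):
    """Build child nodes from the <action, result> pairs of the successor fn."""
    return [Node(result, node, action,
                 node.path_cost + step_cost(node.state, action, result),
                 node.depth + 1)
            for action, result in successor_fn(node.state)]

def tree_search(initial, successor_fn, goal_test, step_cost, pop_index=0):
    fringe = [Node(initial, None, None, 0, 0)]
    while fringe:                                  # not EMPTY?(fringe)
        node = fringe.pop(pop_index)               # REMOVE-FIRST(fringe)
        if goal_test(node.state):
            return node
        fringe.extend(expand(node, successor_fn, step_cost))
    return None                                    # failure

def solution(node):
    """Walk the parent links back to the root to recover the action sequence."""
    actions = []
    while node is not None and node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]
```

On a finite, tree-shaped successor function this terminates; on a graph with cycles, plain tree search can loop forever, which is why the earlier slide notes that search in general generates a graph.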
Search Strategies
• A strategy is defined by picking the order of node expansion
• Performance measures:
  • Completeness – does it always find a solution if one exists?
  • Time complexity – number of nodes generated/expanded
  • Space complexity – maximum number of nodes in memory
  • Optimality – does it always find a least-cost solution?
• Time and space complexity are measured in terms of
  • b – maximum branching factor of the search tree
  • d – depth of the least-cost solution
  • m – maximum depth of the state space (may be ∞)
Uninformed search strategies
• (a.k.a. blind search) = use only the information available in the problem definition
• When a strategy can determine whether one non-goal state is more promising than another → informed search
• Categories, defined by the expansion order:
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening search
  • Bidirectional search
Breadth-First Strategy
• Expand the shallowest unexpanded node
• Implementation: the fringe is a FIFO queue; new nodes are inserted at the end of the queue
[Figure: a search tree whose nodes are numbered 1–9 in the order generated]
The fringe over successive expansions:
FRINGE = (1) → (2, 3) → (3, 4, 5) → (4, 5, 6, 7) → (5, 6, 7, 8) → (6, 7, 8) → (7, 8, 9) → (8, 9)
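The fringe sequence in this trace can be reproduced mechanically with a FIFO queue. The child table below is inferred from the fringe contents shown on the slides (1 → 2, 3; 2 → 4, 5; 3 → 6, 7; 4 → 8; 6 → 9), since the tree itself is only given as a figure:

```python
from collections import deque

children = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [8], 6: [9]}

fringe = deque([1])
trace = [tuple(fringe)]
while fringe:
    node = fringe.popleft()                  # expand the shallowest node
    fringe.extend(children.get(node, []))    # enqueue its children at the back
    if fringe:
        trace.append(tuple(fringe))

for step in trace:
    print(step)
```

The printed steps match the FRINGE snapshots above, confirming that a FIFO queue visits nodes in increasing depth order.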