
1/12: Problem Solving (Search)








  1. 1/12: Problem Solving (Search) • Administrative • Assignment 1 due Friday • Current reading: • Chapters 1-3, Rich & Knight • Chapters 1-3, Touretzky • Chapter 12, Rich & Knight • Today • Search from Chapter #2

  2. Problem-Solving in AI • Stages of problem solving • formulate the problem (subjective) • goal formulation • problem formulation • solve the formulated problem (objective)

  3. The Classical AI Problem • The agent • no observational power • The environment • no exogenous events or other agents • actions are deterministic transitions • start state is known with certainty • The reward • reach a goal state (sometimes at minimum cost) • for a single trial

  4. To describe a problem • Problem definition • Define the state space • Define the actions and their transition functions • Define action/path costs • Define the initial state and goal region • Some examples • Eight puzzle • Missionaries and cannibals • Tower of Hanoi (see with emacs “Esc-x hanoi”)

  5. Two example problems • [figure: the eight puzzle, showing tiles 1-8 scrambled in a 3x3 grid beside the goal arrangement, and the missionaries-and-cannibals river crossing with three missionaries (M), three cannibals (C), and a boat] • (a Lisp sketch of one encoding of missionaries and cannibals follows)
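
To make slide 4's recipe concrete, here is a minimal Lisp sketch of one possible encoding of missionaries and cannibals. It is not the course's own code: the state layout (missionaries on the left bank, cannibals on the left bank, boat position) and the names mc-goal-p and mc-legal-p are assumptions for illustration.

    ;; Hypothetical state encoding: (m c boat), where m and c count the
    ;; missionaries and cannibals on the left bank and boat is LEFT or RIGHT.
    ;; The initial state is (3 3 LEFT).
    (defun mc-goal-p (state)
      "Goal region: everyone has crossed to the right bank."
      (equal state '(0 0 RIGHT)))

    (defun mc-legal-p (state)
      "A bank is safe if it has no missionaries or at least as many M as C."
      (let ((m (first state))            ; missionaries on the left bank
            (c (second state)))          ; cannibals on the left bank
        (and (<= 0 m 3) (<= 0 c 3)
             (or (= m 0) (>= m c))                   ; left bank is safe
             (or (= m 3) (>= (- 3 m) (- 3 c))))))    ; right bank is safe

The actions would then be the legal boat trips (one or two people crossing), each of unit cost.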

  6. Problem solving as graph search • “Every problem is a graph search problem, as long as you can figure out the right graph to search” • Graph vertices = states • Graph arcs = action transitions • Initial vertex and goal vertices identified • Objective: find any/cheapest path from the initial state to any goal state • Note: this is taking us far afield from reactive/observable problems!

  7. Graph search in AI • The size of the state space is such that the graph is infinite or at least too big to hold in memory • incremental construction of the graph is the name of the game

  8. Graph Search in Theory and Practice

  9. Incremental Graph Search: Nondeterministic Version • This is an abstract way to look at the search problem. It highlights where the crucial choice is, and thus where we want to apply “intelligence” • Set N ← initial state • Loop • if N is a goal node, terminate with success • choose an action A to apply in N • if there are no such actions, fail • set N ← A(N) • (a literal Lisp rendering of this loop is sketched below)
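
The loop above translates almost line for line into Lisp. In the sketch below, goal-node-p, applicable-actions, and apply-action are assumed problem-specific functions, and the "choose" step is filled in naively with the first applicable action; that choice is exactly where a real searcher would apply intelligence.

    ;; A literal rendering of the nondeterministic loop; goal-node-p,
    ;; applicable-actions, and apply-action are assumed, not course code.
    (defun nondeterministic-search (initial-state)
      (let ((n initial-state))
        (loop
          (when (goal-node-p n)
            (return n))                        ; terminate with success
          (let ((actions (applicable-actions n)))
            (when (null actions)
              (return NIL))                    ; fail: no applicable actions
            ;; the "choose" step: naively take the first action
            (setf n (apply-action (first actions) n))))))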

  10. Nondeterministic Search • All the smarts are in the choose step • A variety of ways to implement the “algorithm” • most common is to maintain a list of the “frontier” nodes • a heuristic function ranks them according to how “promising” each looks • promising means something like “how long is the shortest path from here to a goal” • at each iteration the highest-ranked node is removed from the frontier, checked for goal-hood, and its successors are generated

  11. Search: Lisp Implementation

    (defun basic-search (initial-state final-state-checker
                         state-generator state-comparator)
      (really-search (list initial-state)
                     final-state-checker state-generator state-comparator))

      Function               Inputs          Output
      initial-state          --              state
      final-state-checker    state           Boolean
      state-generator        state           (list-of state)
      state-comparator       state, state    Boolean

  12. Lisp Implementation (cont.)

    (defun really-search (frontier final-state-checker
                          state-generator state-comparator)
      (cond ((null frontier) NIL)            ; frontier exhausted: no solution
            (T (let ((next-state (car frontier)))
                 (cond ((funcall final-state-checker next-state) next-state)
                       (T (let ((new-states
                                  (funcall state-generator next-state)))
                            ;; expand, merge into the frontier, and re-rank
                            (really-search
                              (sort (append new-states (cdr frontier))
                                    state-comparator)
                              final-state-checker
                              state-generator
                              state-comparator))))))))
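
As a quick smoke test (not from the slides), the framework can be run on a hypothetical toy problem: states are integers, the successors of n are 2n and 2n+1, the goal is 11, and the comparator #'< always expands the smallest state first.

    ;; Hypothetical toy problem: search the infinite binary tree of positive
    ;; integers rooted at 1 for the state 11, expanding smallest states first.
    (basic-search 1
                  (lambda (s) (= s 11))                      ; final-state-checker
                  (lambda (s) (list (* 2 s) (+ (* 2 s) 1)))  ; state-generator
                  #'<)                                       ; state-comparator
    ;; => 11

Note that really-search returns the goal state itself rather than the path to it; recovering a path would require each state to carry its history.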

  13. Search Strategies • Uninformed • breadth first • depth first • iterative deepening • bi-directional • Informed • greedy • A* • Memory bounded • IDA* • SMA*

  14. Breadth-First Search • Strategy: always prefer the shortest path • consider all solutions of length k before considering any solution of length k+1 • Advantages • complete: will always find a solution if there is one • Disadvantages • memory intensive: the frontier grows roughly as b^d for branching factor b and depth d • always takes exponential time to find long solutions • (a comparator that yields this behavior in the Lisp framework is sketched below)
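
One way to get this behavior from the generic Lisp searcher, under the assumption (not in the slides) that each search node is a cons of (depth . state), is a comparator that prefers shallower nodes:

    ;; Sketch: with nodes of the form (depth . state), ranking shallower
    ;; nodes first makes really-search expand the graph level by level.
    (defun shallower-p (node1 node2)
      (< (car node1) (car node2)))

    ;; The state-generator must then increment the depth as it expands, e.g.
    ;;   (lambda (node)
    ;;     (mapcar (lambda (s) (cons (1+ (car node)) s))
    ;;             (successors (cdr node))))   ; successors is hypothetical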

  15. Depth-First Search • Strategy: always prefer the longest path • exhaust a single node on the frontier before considering any of its siblings • Advantages • space efficient: the frontier grows linearly with the depth d (times the branching factor b) rather than exponentially • can find a long solution very quickly (if one chooses well) • Disadvantages • can work on an arbitrarily bad path for arbitrarily long • prone to looping (exploring cycles in the graph) • as a practical matter, incomplete • Possible solutions: • loop detection • depth limit (sketched below)
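
The depth-limit fix can be sketched as a small recursive procedure. This is an illustration rather than the course code; goal-p and successors stand in for problem-specific functions.

    (defun depth-limited-search (state goal-p successors limit)
      "Depth-first search that gives up on any branch deeper than LIMIT."
      (cond ((funcall goal-p state) state)
            ((zerop limit) NIL)                   ; cutoff: abandon this branch
            (T (dolist (s (funcall successors state) NIL)
                 (let ((result (depth-limited-search s goal-p successors
                                                     (1- limit))))
                   (when result (return result)))))))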

  16. Backward and Bi-Directional Search • Search so far has started at the initial state and generated successors. • Suppose instead we started at the goal and generated predecessors. • or suppose we started both at the initial state and at the goal, and searched in both directions • The argument: fan-out versus fan-in • Difficulties • computing predecessors • bookkeeping to determine when a bi-directional search succeeds

  17. Iterative-Deepening Search • Strategy • use a bounded depth-first approach • but incrementally increase the depth bound if no solution is found • Example (branching factor 5, solution at depth 9); nodes expanded per iteration:

        depth 1:         6
        depth 2:        31
        depth 3:       156
        depth 4:       781
        depth 5:      3906
        depth 6:     19531
        depth 7:     97656
        depth 8:    488281
        depth 9:   2441406

      • the first eight iterations re-expand 610348 nodes out of 3051754 total expansions: roughly 20% wasted effort compared with searching directly at the proper depth, had it been known • (a Lisp driver for this strategy is sketched below)
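
A driver for the depth-limited sketch given after the depth-first slide; again an illustration, not the course code.

    ;; Re-run the (assumed) depth-limited search with bounds 0, 1, ...
    ;; until a solution appears or max-depth is exceeded.
    (defun iterative-deepening-search (state goal-p successors max-depth)
      (loop for limit from 0 to max-depth
            for result = (depth-limited-search state goal-p successors limit)
            when result return result))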

  18. Informed Search Methods • The “informed” part is exactly the ranking function in the search code • (the Lisp version uses a pairwise comparison rather than a numeric ranking) • Usual interpretation: • h’(n) = estimated cost of the cheapest path from n to any goal • Need to balance the cost so far against the expected cost to the goal: • g(n) = actual minimum cost of getting to n • h’(n) = estimated minimum cost of getting from n to a goal

  19. A* Search • An informed method for finding the minimum-cost path from the initial state to a goal • The ranking function is simply • f’(n) = g(n) + h’(n), the estimated minimum cost of a solution through n • how does this limit the agent’s reward structure? • What are the implications of getting h’ wrong? • if h’(n) = h(n) for all n • if h’(n) ≤ h(n) for all n, and strictly less for some n • if h’(n) > h(n) for some n • (a comparator implementing f’ is sketched below)
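
One way to plug f’ into the generic searcher is a comparator over nodes that carry their path cost; the (g . state) node layout and the heuristic function h-estimate below are assumptions for illustration, not course code.

    ;; Sketch: nodes are (g . state) pairs, where g is the cost of the path
    ;; found to state; h-estimate plays the role of h'.
    (defun a-star-before-p (node1 node2)
      (< (+ (car node1) (h-estimate (cdr node1)))    ; f'(n1) = g(n1) + h'(n1)
         (+ (car node2) (h-estimate (cdr node2)))))  ; f'(n2) = g(n2) + h'(n2)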

  20. A*: Properties of the Cost Estimate • If h’(n) is exactly h(n) for all n, then the search converges immediately to the optimal solution. • If h’(n) = 0 for all n, the search orders nodes by g alone (uniform-cost search, which behaves breadth-first when all step costs are equal). • If h’ never overestimates h, and goal states are “correctly identified”, then the first goal node found will be optimal. • (All provided that goal states are correctly identified.) • If h’ sometimes overestimates h, then the goal node found may be sub-optimal.

  21. A* Search: Concluded • An h’ that never overestimates h is called an admissible search heuristic. • A* search is defined to be best-first search with an admissible h’ • Graceful degradation: suppose h’ is not admissible, but doesn’t miss by much? • If the probability that h’(s) > h(s) is small, then the probability that A* will return an answer that is sub-optimal by more than a small factor is also small.

  22. IDA* • IDA* combines Iterative Deepening and A* • recall that ID did depth-first search for increasing depth bounds d = 0, 1, ... until a solution was found • IDA* does depth-first search using f-bounds instead of depth bounds • depth-first(fb) considers all nodes n such that f(n) <= fb • when a node’s f value exceeds fb it is pruned, but • the f-bound for the next iteration is the minimum over the f values of all the pruned states • Problems arise when f values are closely spaced • an alternative is to increment the f-bound by some fixed or adaptive amount • (a sketch of the f-bound loop follows)
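
A sketch of the f-bound loop, assuming nodes carry their accumulated cost so that f-cost can be computed per node; f-cost, goal-p, and successors are assumed problem-specific functions, and none of this is the course's own code.

    (defun ida-star (start goal-p successors f-cost)
      "IDA* sketch: repeated cost-bounded depth-first search on f = g + h'."
      (labels ((bounded-dfs (node bound)
                 ;; Returns (values solution NIL) on success, or
                 ;; (values NIL min-pruned-f) when the subtree is cut off.
                 (let ((f (funcall f-cost node)))
                   (cond ((> f bound) (values NIL f))             ; prune here
                         ((funcall goal-p node) (values node NIL))
                         (T (let ((min-pruned NIL))
                              (dolist (child (funcall successors node))
                                (multiple-value-bind (sol pruned)
                                    (bounded-dfs child bound)
                                  (when sol
                                    (return-from bounded-dfs (values sol NIL)))
                                  (when (and pruned
                                             (or (null min-pruned)
                                                 (< pruned min-pruned)))
                                    (setf min-pruned pruned))))
                              (values NIL min-pruned)))))))
        (loop with bound = (funcall f-cost start)
              do (multiple-value-bind (solution next-bound)
                     (bounded-dfs start bound)
                   (when solution (return solution))
                   (when (null next-bound) (return NIL))          ; exhausted
                   (setf bound next-bound)))))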
