
Evaluating Search Strategies: Completeness, Optimality, and Complexity

Explore different search strategies like Heuristic Search, Greedy Best-First Search, and A* Search to find solutions efficiently by considering completeness, optimality, time complexity, and space complexity. Understand heuristic functions and their impact on search algorithms.



  1. Search (continued)
     CPSC 386 Artificial Intelligence
     Ellen Walker, Hiram College

  2. Evaluating Search Strategies
     • Completeness: Will a solution always be found if one exists?
     • Optimality: Will the optimal (least cost) solution be found?
     • Time complexity: How long does it take to find the solution? Often represented by the number of nodes expanded.
     • Space complexity: How much memory is needed to perform the search? Represented by the maximum number of nodes stored at once.

  3. Comparison of Strategies (Adapted from Figure 3.17, p. 81)

  4. Avoiding Repeated States
     • All visited nodes must be saved to avoid looping
     • Closed list: all expanded nodes
     • Open list: the fringe of unexpanded nodes
     • If the current node matches a node on the closed list, it is discarded
     • In some algorithms, the new node might be better, and the old one is discarded instead
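The open/closed bookkeeping above can be sketched as a simple graph search. This is a minimal illustration, not the slides' exact algorithm; the `successors` callback and the breadth-first fringe are assumptions for the example.

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Breadth-first graph search that avoids repeated states.
    `successors(state)` is assumed to return the neighboring states."""
    open_list = deque([start])   # open list: fringe of unexpanded nodes
    closed = set()               # closed list: all expanded nodes
    while open_list:
        node = open_list.popleft()
        if goal_test(node):
            return node
        if node in closed:       # matches a node on the closed list: discard
            continue
        closed.add(node)
        for succ in successors(node):
            if succ not in closed:
                open_list.append(succ)
    return None                  # open list empty: no solution exists
```

Without the `closed` set, a graph with a cycle (e.g. two states pointing at each other) would make this loop forever.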

  5. Partial Information
     • Sensorless problems: multiple initial states and multiple next states; consider all of them for a solution, e.g. table tipping
     • Contingency problems: new information arrives after each action, e.g. adversarial (multiplayer) games

  6. Informed Search Strategies
     • Also called heuristic search
     • All are variations of best-first search: the next node to expand is the one “most likely” to lead to a solution
     • Uses a priority queue, like uniform cost search, but the priority is based on additional knowledge of the problem
     • The priority function for the priority queue is usually called f(n)

  7. Heuristic Function
     • “Heuristic” comes from the Greek heuriskein, “to find” or “to discover”
     • Heuristic function h(n) = estimated cost from the current state to the goal
     • Recall that g(n) is the cost from the initial state to the current state
     • Therefore, our best estimate of total path cost is g(n) + h(n)
     • If we treat uniform cost search as a heuristic search, f(n) = g(n) and h(n) = 0

  8. Algorithms
     • Greedy best-first search: always expand the node that appears closest to the goal; f(n) = h(n)
     • A* search: expand the node with the best estimated total cost; f(n) = g(n) + h(n)
     • The fun part is dealing with loops, when a node being expanded matches a node on the open or closed list

  9. Greedy Best-First Search
     • Like depth-first search, it tends to stay on a path once one is chosen
     • Dislikes solutions that require moving away from the goal to reach it (example: the 8-puzzle)
     • Non-optimal
     • Worst-case exponential time, but a good heuristic function helps!
     • Also called “hill climbing” (though traditional hill climbing doesn’t backtrack)
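For the 8-puzzle mentioned above, a standard heuristic is Manhattan distance: the sum, over all tiles, of how far each tile is from its goal position. The tuple encoding here is an assumption for the sketch, not something the slides specify.

```python
def manhattan(state, goal):
    """Manhattan-distance heuristic h(n) for the 8-puzzle.
    States are 9-tuples in row-major order; 0 marks the blank."""
    total = 0
    for tile in range(1, 9):                 # the blank is not counted
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Each tile must move at least its Manhattan distance, so this h never overestimates, which matters for the admissibility discussion on the later slides.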

  10. A* Search Algorithm
      put initial state (only) on OPEN
      until a goal node is found:
          if OPEN is empty, fail
          BESTNODE := best node on OPEN (lowest g+h value)
          if BESTNODE is a goal node, succeed
          move BESTNODE from OPEN to CLOSED
          generate successors of BESTNODE

  11. A* (cont.)
      for each SUCCESSOR in the successor list:
          set SUCCESSOR's parent to BESTNODE (back link for finding the path later)
          compute g(SUCCESSOR) = g(BESTNODE) + cost of link from BESTNODE to SUCCESSOR
          if SUCCESSOR is the same as a node on OPEN:
              OLD := node on OPEN that is the same as SUCCESSOR
              add OLD to list of BESTNODE's successors
              if g(SUCCESSOR) < g(OLD)   (newly found path is cheaper):
                  reset OLD's parent to BESTNODE
                  g(OLD) = g(SUCCESSOR)

  12. A* (cont.)
          else if SUCCESSOR is the same as a node on CLOSED:
              OLD := node on CLOSED that is the same as SUCCESSOR
              add OLD to list of BESTNODE's successors
              if g(SUCCESSOR) < g(OLD)   (newly found path is cheaper):
                  reset OLD's parent to BESTNODE
                  g(OLD) = g(SUCCESSOR)
                  update g() of all successors of OLD
          else:
              add SUCCESSOR to list of BESTNODE's successors

  13. Updating Successors of a Closed Node
      update_successors(NODE):
          if NODE is empty, return
          for each successor of NODE:
              if the parent of the successor is NODE:
                  g(successor) = g(NODE) + path cost
                  update_successors(successor)
              else if g(NODE) + path cost < g(successor):
                  g(successor) = g(NODE) + path cost
                  update_successors(successor)
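The algorithm on slides 10 through 13 can be sketched compactly in Python. This version makes one simplification, noted on slide 17: with a consistent h(n), a node never needs re-expansion after it is closed, so instead of rewiring OLD nodes and recursively updating their successors, it simply re-pushes a node whenever a cheaper path is found. Function and parameter names are illustrative; `neighbors(n)` is assumed to yield (successor, step cost) pairs.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search sketch: expand the node with the lowest f(n) = g(n) + h(n).
    Returns (path, cost), or (None, inf) when OPEN empties without a goal."""
    g = {start: 0}
    parent = {start: None}         # back links for finding the path later
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:           # succeed: walk the back links
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        if node in closed:         # stale queue entry: discard
            continue
        closed.add(node)
        for succ, cost in neighbors(node):
            new_g = g[node] + cost
            if new_g < g.get(succ, float('inf')):
                g[succ] = new_g            # newly found path is cheaper
                parent[succ] = node        # reset the parent back link
                heapq.heappush(open_heap, (new_g + h(succ), succ))
    return None, float('inf')      # OPEN is empty: fail
```

With an inconsistent (but still admissible) h, this simplification can still re-open a closed node correctly because the `new_g < g` test fires again, at the cost of extra expansions.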

  14. A* Example (h = true cost)
      [Graph figure: nodes A through I with edge costs and exact heuristic values; not recoverable from the transcript.]

  15. A* Search Example (h underestimates true cost)
      [Same graph figure as slide 14, with underestimated heuristic values; not recoverable from the transcript.]

  16. Better h Means Better Search
     • When h = true cost to the goal: only nodes on the correct path are expanded, and the optimal solution is found
     • When h < true cost to the goal: additional nodes are expanded, but the optimal solution is still found
     • When h > true cost to the goal: the optimal solution can be overlooked

  17. Comments on A* Search
     • A* is optimal if h(n) is admissible, i.e., it never overestimates the distance to the goal
     • We don’t need all the bookkeeping if h(n) is consistent, i.e., h(n) ≤ c(n, n′) + h(n′) for every successor n′: a node’s estimate never exceeds the step cost plus its successor’s estimate
     • If g(n) = 0, A* = greedy best-first search
     • If g(n) = k per step and h(n) = 0, A* = breadth-first search
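The consistency condition on this slide is easy to check mechanically on a small problem. This helper is an illustration with assumed names: `h` maps states to estimates and `edges` is an iterable of (node, successor, step cost) triples.

```python
def is_consistent(h, edges):
    """Check consistency (monotonicity): h(n) <= c(n, n') + h(n')
    must hold along every edge of the state graph."""
    return all(h[n] <= cost + h[succ] for n, succ, cost in edges)
```

Consistency implies admissibility (by induction along any path to the goal), but not the other way around, which is why an admissible-only heuristic still needs the closed-node bookkeeping from slides 12 and 13.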
