CSC3203: AI for Games, Informed Search (1). Patrick Olivier, p.l.olivier@ncl.ac.uk
Uninformed search: summary • Depth-first • Breadth-first • Iterative deepening
Informed search strategies • Best-first search • Greedy best-first search • A* search • Local beam search • Simulated annealing search • Genetic algorithms
Best-first search • Search strategy defined by: • order of node expansion • Uniform-cost search uses the cost so far: g(n) • Best-first search uses: • an evaluation function: f(n) • expand the most desirable unexpanded node • implementation: order the fringe by increasing f(n), so the node with the lowest f(n) is expanded first • For example: • greedy best-first search • A* search
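As an illustrative sketch (not from the slides), the generic best-first scheme can be written with a priority queue ordered by f(n); the graph format and node names below are hypothetical:

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Generic best-first search: always expand the unexpanded node with
    the lowest f(n). Plugging in f = h gives greedy best-first search;
    f = g + h (with g threaded through the path) gives A*."""
    fringe = [(f(start), start, [start])]          # priority queue keyed on f(n)
    expanded = set()
    while fringe:
        _, node, path = heapq.heappop(fringe)      # lowest f(n) first
        if node == goal:
            return path
        if node in expanded:                       # repeated-state check
            continue
        expanded.add(node)
        for succ in neighbors(node):
            if succ not in expanded:
                heapq.heappush(fringe, (f(succ), succ, path + [succ]))
    return None                                    # no path found

# Hypothetical 4-node example: h estimates the distance to goal G
h = {"A": 2, "B": 1, "C": 1, "G": 0}
adj = {"A": ["B", "C"], "B": ["G"], "C": ["G"], "G": []}
path = best_first_search("A", "G", adj.__getitem__, h.__getitem__)
```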
Greedy best-first search • Evaluation function: f(n) = h(n), the estimated cost from n to the goal • Heuristics are rules of thumb that are likely (but not guaranteed) to help in problem solving • For example: • hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that appears to be closest to the goal
Greedy search example: Arad to Bucharest, using the table of straight-line distances to Bucharest as h(n)
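A runnable sketch of this example, using the road distances and straight-line-distance table from the standard Russell & Norvig Romania map (only the fragment relevant to the Arad-to-Bucharest run is included):

```python
import heapq

# Fragment of the Romania road map (km) and straight-line distances to
# Bucharest, from the standard Russell & Norvig example.
roads = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
}
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def greedy_best_first(start, goal):
    """Expand the node that looks closest to the goal: f(n) = h(n)."""
    fringe = [(h_sld[start], start, [start], 0)]   # (h, node, path, cost so far)
    expanded = set()
    while fringe:
        _, node, path, cost = heapq.heappop(fringe)
        if node == goal:
            return path, cost
        if node in expanded:
            continue
        expanded.add(node)
        for succ, d in roads.get(node, {}).items():
            if succ not in expanded:
                heapq.heappush(fringe, (h_sld[succ], succ, path + [succ], cost + d))

path, cost = greedy_best_first("Arad", "Bucharest")
# Greedy follows Arad -> Sibiu -> Fagaras -> Bucharest (450 km),
# missing the cheaper 418 km route through Rimnicu Vilcea and Pitesti.
```

Note that greedy search commits to Fagaras because hSLD(Fagaras) = 176 < hSLD(Rimnicu Vilcea) = 193, even though the road through Rimnicu Vilcea and Pitesti is shorter overall.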
Properties of greedy search • Complete? • Only in finite spaces, and only if modified to check for repeated states • (Consider getting from Iasi to Fagaras: hSLD points first to Neamt, a dead end, rather than Vaslui; without repeated-state checking the search oscillates between Iasi and Neamt forever.) • Time? • O(b^m), but a good heuristic can give dramatic improvement • Space? • O(b^m): keeps all nodes in memory • Optimal? • No
A* search • Idea: use not just the estimated cost to the goal, but also the cost of the path so far • Evaluation function: f(n) = g(n) + h(n) • g(n) = cost so far to reach n • h(n) = estimated cost from n to the goal • f(n) = estimated total cost of the path through n to the goal
Class exercise: Arad to Bucharest using A*, with the straight-line distance to Bucharest as hSLD(n)
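A sketch of one answer to the exercise, on the same Russell & Norvig Romania map fragment:

```python
import heapq

# Fragment of the Romania road map (km) and straight-line distances to
# Bucharest, from the standard Russell & Norvig example.
roads = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
}
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def a_star(start, goal):
    """A*: f(n) = g(n) + h(n), cost so far plus estimated cost to goal."""
    fringe = [(h_sld[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:    # already reached more cheaply
            continue
        best_g[node] = g
        for succ, d in roads.get(node, {}).items():
            heapq.heappush(fringe, (g + d + h_sld[succ], g + d, succ, path + [succ]))

path, cost = a_star("Arad", "Bucharest")
# A* finds the optimal 418 km route:
# Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest
```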
Admissible heuristics • A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. • An admissible heuristic never overestimates the cost to reach the goal, i.e. it is optimistic • Example: hSLD(n) (never overestimates the actual road distance) • Theorem: If h(n) is admissible, A* using tree-search is optimal
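Admissibility can be checked numerically: compute the true cost-to-goal h*(n) for every node (e.g. with Dijkstra's algorithm run from the goal) and verify h(n) ≤ h*(n). A sketch on a small hypothetical map:

```python
import heapq

def true_costs_to_goal(goal, roads):
    """Dijkstra from the goal over an undirected road map: returns h*(n),
    the true cheapest cost from each node to the goal."""
    adj = {}
    for a, nbrs in roads.items():
        for b, d in nbrs.items():
            adj.setdefault(a, {})[b] = d
            adj.setdefault(b, {})[a] = d           # roads are two-way
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in adj.get(node, {}).items():
            if d + w < dist.get(nbr, float("inf")):
                dist[nbr] = d + w
                heapq.heappush(pq, (d + w, nbr))
    return dist

def is_admissible(h, goal, roads):
    """True iff h never overestimates the true cost to the goal."""
    h_star = true_costs_to_goal(goal, roads)
    return all(h[n] <= h_star.get(n, float("inf")) for n in h)

# Hypothetical 4-node map: A-B-C-G in a line, plus a longer shortcut B-G.
toy = {"A": {"B": 1}, "B": {"C": 1, "G": 3}, "C": {"G": 1}}
good_h = {"A": 3, "B": 2, "C": 1, "G": 0}   # never overestimates: admissible
bad_h  = {"A": 3, "B": 2, "C": 2, "G": 0}   # overestimates at C (h* = 1)
```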
Proof of A* optimality • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. • f(G2) = g(G2) …since h(G2) = 0 • g(G2) > g(G) …since G2 is suboptimal • f(G) = g(G) …since h(G) = 0 • f(G2) > f(G) …from above • f(n) = g(n) + h(n) …by definition
Proof of A* optimality (continued) • f(G2) > f(G) …from above • h(n) ≤ h*(n) …since h is admissible • g(n) + h(n) ≤ g(n) + h*(n) = f(G) …since n lies on an optimal path to G • f(n) ≤ f(G) • Hence f(G2) > f(n), and A* will never select G2 for expansion
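The proof above can be condensed into a single chain of (in)equalities:

```latex
\begin{align*}
f(G_2) &= g(G_2) && \text{since } h(G_2) = 0\\
       &> g(G)   && \text{since } G_2 \text{ is suboptimal}\\
       &= f(G)   && \text{since } h(G) = 0\\[4pt]
f(n)   &= g(n) + h(n)       && \text{by definition}\\
       &\le g(n) + h^{*}(n) && \text{since } h \text{ is admissible}\\
       &= f(G)              && \text{since } n \text{ lies on an optimal path to } G
\end{align*}
```

Hence f(G2) > f(G) ≥ f(n), so A* expands n before G2.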
Properties of A* search • Complete? • Yes, unless there are infinitely many nodes with f(n) ≤ f(G) • Time? • Exponential, unless the error in the heuristic grows no faster than the logarithm of the actual path cost • A* is optimally efficient: no other optimal algorithm is guaranteed to expand fewer nodes for the same heuristic • Space? • Keeps all nodes in memory (so space is exponential too) • Optimal? • Yes