History • Problem-solving as search – early insight of AI. • Newell and Simon’s theory of human intelligence and problem-solving. • Early examples: • 1956: Logic Theorist (Allen Newell & Herbert Simon) • 1958: Geometry problem solver (Herbert Gelernter) • 1959: General Problem Solver (Herbert Simon & Allen Newell) • 1971: STRIPS (Stanford Research Institute Problem Solver, Richard Fikes & Nils Nilsson)
Real-World Problem-Solving as Search • Examples: • Route/Path finding: Robots, cars, cell-phone routing, airline routing, characters in video games, … • Layout of circuits • Job-shop scheduling • Game playing (e.g., chess, go) • Theorem proving • Drug design
Classic AI Toy Problem: 8-puzzle • Notion of “searching a state space” • Initial state: 2 8 3 / 1 6 4 / 7 _ 5 • Goal state: 1 2 3 / 8 _ 4 / 7 6 5 • Pictures from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
8-puzzle search tree • [Figure: search tree rooted at the initial state 2 8 3 / 1 6 4 / 7 _ 5, with each child generated by sliding one tile into the blank] • Pictures from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
What is size of state space for 8-puzzle? Reachable states: 9!/2 = 181,440 • Size of 15-puzzle state space? 16!/2 ≈ 1 x 10^13 • Size of 24-puzzle state space? 25!/2 ≈ 7.8 x 10^24 • Can’t do exhaustive search!
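A quick sanity check on these counts (a minimal sketch; it uses the fact that a parity invariant makes only half of the (n+1)! tile arrangements of an n-puzzle reachable from a given start):

import math

# Only half of the (n+1)! tile arrangements of an n-puzzle are reachable
# from any given starting arrangement, because of a parity invariant.
for tiles, name in [(9, "8-puzzle"), (16, "15-puzzle"), (25, "24-puzzle")]:
    print(f"{name}: about {math.factorial(tiles) // 2:.2e} reachable states")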
Approximate number of states • Tic-Tac-Toe: 3^9 • Checkers: 10^40 • Rubik’s cube: 10^19 • Chess: 10^120
In general, a search problem is formalized as: • state space • special start and goal state(s) • operators that perform allowable transitions between states • cost of transitions • All these can be either deterministic or probabilistic.
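One way to write this formalization down in code (a minimal sketch; the SearchProblem name and its methods are illustrative, not from the slides):

class SearchProblem:
    """Abstract formalization of a (deterministic) search problem."""
    def initial_state(self):
        raise NotImplementedError
    def is_goal(self, state):
        raise NotImplementedError
    def successors(self, state):
        """Yield (operator, next_state, cost) triples for the allowable transitions."""
        raise NotImplementedError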
State space as a tree/graph • Search as tree search • Solutions: “winning” state, or path to winning state
How to solve a problem by searching • Define search space • Initial, goal, and intermediate states • Define operators for expanding a given state into its possible successor states • Defines search tree • Apply search algorithm (tree search) to find path from initial to goal state, while avoiding (if possible) repeating a state during the search. • Solution is • path from initial to goal state (e.g., traveling salesman problem) • or, simply a goal state, which might not be initially known (e.g., drug design)
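A generic search loop along these lines (a sketch; it reuses the hypothetical SearchProblem interface above, and the pop argument that chooses which frontier entry to expand is what distinguishes the strategies discussed later):

from collections import deque

def generic_search(problem, pop):
    """Expand states from a frontier until a goal is found; 'pop' picks the next entry."""
    frontier = deque([(problem.initial_state(), [])])   # entries are (state, path so far)
    visited = set()                                     # avoid repeating states where possible
    while frontier:
        state, path = pop(frontier)
        if problem.is_goal(state):
            return path + [state]                       # solution: path from initial to goal state
        if state in visited:
            continue
        visited.add(state)
        for op, next_state, cost in problem.successors(state):
            frontier.append((next_state, path + [state]))
    return None                                         # no solution found

Passing lambda f: f.popleft() gives first-in-first-out (breadth-first) behaviour; lambda f: f.pop() gives last-in-first-out (depth-first) behaviour.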
Missionaries and cannibals • Three missionaries and three cannibals are on the left bank of a river. • There is one canoe which can hold one or two people. • Find a way to get everyone to the right bank, without ever leaving a group of missionaries in one place outnumbered by cannibals in that place. How to set this up as a search problem? From http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
Missionaries and cannibals • State space: • Size? • Initial state: • Goal state: • Operators: • Cost of transitions: • Search tree:
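The blanks above are left for the reader; as one possible encoding (a sketch, assuming a state is the tuple (missionaries on the left bank, cannibals on the left bank, bank the canoe is on)), the operators and the validity test could look like this:

def is_valid(m_left, c_left):
    """Missionaries are never outnumbered on either bank (unless absent from it)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    if not (0 <= m_left <= 3 and 0 <= c_left <= 3):
        return False
    left_ok = (m_left == 0) or (m_left >= c_left)
    right_ok = (m_right == 0) or (m_right >= c_right)
    return left_ok and right_ok

def successors(state):
    """Move 1 or 2 people across in the canoe; yield only valid next states."""
    m_left, c_left, boat = state             # boat is 'L' or 'R'
    direction = -1 if boat == 'L' else 1     # leaving the left bank removes people from it
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nm, nc = m_left + direction * dm, c_left + direction * dc
        if is_valid(nm, nc):
            yield (nm, nc, 'R' if boat == 'L' else 'L')

With this encoding the initial state is (3, 3, 'L'), the goal state is (0, 0, 'R'), and every transition has cost 1.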
Drug design • Example: Search for a sequence of up to N amino acids that forms a protein shape matching a particular receptor on a pathogen. • (Note: There are 20 amino acids to choose from at each locus in the string.)
Drug design • State space: • Size? • Initial state: • Goal state: • Operators: • Cost of transitions: • Search tree:
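As a rough size estimate for the first blank (a sketch; it simply counts all amino-acid strings of length 1 through N):

def num_sequences(n, alphabet=20):
    """Number of amino-acid strings of length 1..n: 20 + 20^2 + ... + 20^n."""
    return sum(alphabet ** k for k in range(1, n + 1))

print(num_sequences(10))   # already about 1.08e13 candidate sequences for N = 10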
Search Strategies A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions: • completeness – does it always find a solution if one exists? • optimality – does it always find an optimal (least-cost or highest-value) solution? • time complexity – number of nodes generated/expanded • space complexity – maximum number of nodes in memory Time and space complexity are often measured in terms of: b – maximum branching factor of the search tree d – depth of the least-cost solution m – maximum depth of the state space (may be infinite) Adapted from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
Search methods • Uninformed search: Breadth-first, Depth-first, Depth-limited, Iterative deepening depth-first, Bidirectional • Informed (or heuristic) search (deterministic or stochastic): Greedy best-first, A* (and many variations), Hill climbing, Simulated annealing, Genetic algorithm, Tabu search, Ant colony optimization • Adversarial search: Minimax with alpha-beta pruning
Uninformed strategies • Breadth-first: Expand all nodes at depth d before proceeding to depth d+1 • Depth-first: Expand deepest unexpanded node • Depth-limited: Depth-first search with a cutoff at a specified depth limit • Iterative deepening: Repeated depth-limited searches, starting with a limit of zero and incrementing once each time http://www.cse.unl.edu/~choueiry/S03-476-876/searchapplet/index.html
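A compact sketch of the last two strategies (depth-limited search and iterative deepening), assuming a successors(state) function that yields next states, like the missionaries-and-cannibals sketch above:

def depth_limited_search(state, is_goal, successors, limit):
    """Depth-first search that gives up once the depth limit is reached."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited_search(nxt, is_goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    """Repeated depth-limited searches with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None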
Uninformed Search Properties • Breadth-first: Complete? Optimal? Time? Space? • Depth-first: Complete? Optimal? Time? Space? • Depth-limited: Complete? Optimal? Time? Space? • Iterative deepening: Complete? Optimal? Time? Space?
Informed (heuristic) Search • What is a “heuristic”? • Examples: • 8 puzzle • Missionaries and Cannibals • Tic Tac Toe • Traveling Salesman Problem • Drug design
Best-first greedy search • 1. current state = initial state • 2. Expand current state • 3. Evaluate offspring states s with heuristic h(s), which estimates cost of path from s to goal state • 4. current state = argmin h(s) for s ∈ offspring(current state) • 5. If current state ≠ goal state, go to step 2. • http://alumni.cs.ucr.edu/~tmatinde/projects/cs455/TSP/heuristic/Travellinganimation.htm
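The steps above, written out directly (a sketch; note that, as listed, the search only ever moves to an offspring of the current state, so it keeps no open list and can stall if no offspring improves toward the goal; is_goal, successors, and h are supplied by the caller):

def greedy_best_first(start, is_goal, successors, h, max_steps=10_000):
    """Repeatedly move to the offspring with the smallest heuristic value h(s)."""
    current = start
    for _ in range(max_steps):
        if is_goal(current):
            return current
        offspring = list(successors(current))
        if not offspring:
            return None                       # dead end: nothing to expand
        current = min(offspring, key=h)       # current state = argmin over offspring of h(s)
    return None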
Search Terminology • Completeness • solution will be found, if it exists • Optimality • least-cost solution will be found • Admissible heuristic h • ∀s, h(s) never overestimates the true cost from state s to a goal state • Best-first greedy search: Complete? Optimal? • 8-puzzle heuristics: Hamming distance, Manhattan distance: Admissible? • Example of a non-admissible heuristic for the 8-puzzle?
A* Search • Uses evaluation function f(n) = g(n) + h(n), where n is a node • g is a cost function: total cost incurred so far from the initial state to node n • h is a heuristic: estimated cost from n to a goal state • Best-first greedy search is A* with g = 0.
Exercise: for the 8-puzzle start state shown earlier, h1(start state) = ? and h2(start state) = ?, where h1 is the Hamming distance (number of misplaced tiles) and h2 is the Manhattan distance.
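A sketch of the two heuristics (assuming states are 9-element tuples read row by row, with 0 standing for the blank; the h1/h2 names follow the exercise above):

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # the goal layout shown earlier

def h1(state, goal=GOAL):
    """Hamming distance: number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Manhattan distance: sum of each tile's row + column distance from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)  # the initial layout shown earlier
print(h1(start), h2(start))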
A* Pseudocode • create the open list of nodes, initially containing only our starting node • create the closed list of nodes, initially empty • while (we have not reached our goal) { • consider the best node in the open list (the node with the lowest f value) • if (this node is the goal) { then we're done } • else { • move the current node to the closed list and consider all of its successors • for (each successor) { • if (this successor is in the closed list and our current g value is lower) { • update the successor with the new, lower, g value • change the successor’s parent to our current node } • else if (this successor is in the open list and our current g value is lower) { • update the successor with the new, lower, g value • change the successor’s parent to our current node } • else if (this successor is not in either the open or closed list) { • add the successor to the open list and set its g value } } } } Adapted from: http://en.wikibooks.org/wiki/Artificial_Intelligence/Search/Heuristic_search/Astar_Search#Pseudo-code_A.2A
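A runnable version of the same idea (a minimal sketch, not the wikibook's code: it uses a heap-based open list instead of scanning for the lowest f, and it assumes successors(state) yields (next_state, step_cost) pairs; the function and parameter names are illustrative):

import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    """A* search: always expand the open node with the lowest f(n) = g(n) + h(n)."""
    tie = count()                                   # tie-breaker so the heap never compares states
    open_heap = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}                             # cheapest g found so far for each state
    while open_heap:
        f, _, g, state, path = heapq.heappop(open_heap)
        if is_goal(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                                # stale entry: a cheaper route was already found
        for nxt, step_cost in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(open_heap, (new_g + h(nxt), next(tie), new_g, nxt, path + [nxt]))
    return None, float("inf")

For the 8-puzzle, successors would yield the states reachable by sliding one tile (each with cost 1), and h could be the h1 or h2 sketched earlier.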
Proof of Optimality of A* • Suppose a suboptimal goal G2 has been generated and is in the OPEN list. • Let n be an unexpanded node on a shortest path to an optimal goal G1. • f(G2) = g(G2), since h(G2) = 0 • g(G2) > g(G1), since G2 is suboptimal • g(G1) ≥ f(n), since h is admissible and n lies on an optimal path to G1 • Since f(G2) > f(n), A* will never select G2 for expansion. • [Figure: start node with paths to n (toward optimal goal G1) and to suboptimal goal G2]
Variations of A* • IDA* (iterative deepening A*) • ARA* (anytime repairing A*) • D* (dynamic A*)
Example of Simulated Annealing • Netlogo simulation
Simulated Annealing is complete in the probabilistic sense: with a sufficiently slow cooling schedule it reaches an optimal state with probability approaching 1 (if you run it for a long enough time!)
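A bare-bones sketch of the simulated-annealing loop (assumptions: cost and random_neighbor are problem-specific functions supplied by the caller, and the geometric cooling schedule and its constants are illustrative, not from the slides):

import math, random

def simulated_annealing(start, cost, random_neighbor,
                        temp=1.0, cooling=0.995, min_temp=1e-3):
    """Accept worsening moves with probability exp(-delta / T); T shrinks geometrically."""
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    while temp > min_temp:
        candidate = random_neighbor(current)
        delta = cost(candidate) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, current_cost + delta
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        temp *= cooling
    return best, best_cost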
Genetic Algorithms • Similar to hill-climbing, but with a population of “initial states”, and stochastic mutation and crossover operations for search.
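A toy sketch of that idea (assumptions: individuals are bit lists, fitness is supplied by the caller, and the selection scheme, crossover point, and mutation rate are arbitrary illustrative choices):

import random

def genetic_algorithm(fitness, genome_len, pop_size=50, generations=200, mutation_rate=0.01):
    """Keep a population of candidate states; breed the fitter ones with crossover and mutation."""
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]              # truncation selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                     # stochastic bit-flip mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

For example, genetic_algorithm(sum, 20) tends to recover the all-ones string (the classic OneMax toy problem).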