Informed Search • Next time: Search Application • Reading: Machine Translation paper under Links • Username and password will be mailed to class • Questions on the HW?
Agenda • A* example • Hill climbing • Example: n-Queens • Online search • Depth first • Example: maze
A* Search • OPEN = start node; CLOSED = empty • While OPEN is not empty do • Remove leftmost state from OPEN, call it X • If X = goal state, return success • Put X on CLOSED • SUCCESSORS = Successor function (X) • Remove any successors already on OPEN or CLOSED • Compute f(n) = g(n) + h(n) for each remaining successor • Put remaining successors on either end of OPEN • Sort nodes on OPEN by value of f(n) • End while
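A minimal Python sketch of the loop above, assuming a hypothetical interface in which successors(state) yields (next_state, step_cost) pairs and heuristic(state) returns h(n); OPEN is kept as a priority queue ordered by f(n) = g(n) + h(n), which has the same effect as the sort step on the slide.

```python
import heapq
import itertools

def a_star(start, is_goal, successors, heuristic):
    """Sketch of A*: expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    counter = itertools.count()                 # tie-breaker so states are never compared
    open_heap = [(heuristic(start), next(counter), 0, start)]
    closed = set()
    best_g = {start: 0}
    while open_heap:                            # while OPEN is not empty
        f, _, g, x = heapq.heappop(open_heap)   # remove the lowest-f state from OPEN
        if is_goal(x):
            return g                            # success: cost of the path found
        if x in closed:
            continue
        closed.add(x)                           # put X on CLOSED
        for s, cost in successors(x):
            new_g = g + cost
            if s not in closed and new_g < best_g.get(s, float("inf")):
                best_g[s] = new_g
                heapq.heappush(open_heap, (new_g + heuristic(s), next(counter), new_g, s))
    return None                                 # OPEN exhausted: no solution
```

This version returns the cost of the goal reached rather than reconstructing the path; keeping a parent pointer per state would recover the path itself.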
Heuristics for other problems • Shortest path from one city to another • Challenge: Is there an admissible heuristic for Sudoku?
Local Search Algorithms • Operate using a single current state • Move only to neighbors of the state • Paths followed by search are not retained • Iterative improvement • Keep a single current state and try to improve it
Advantages of local search • Use very little memory – usually a constant amount • Can often find reasonable solutions in large or infinite state spaces (e.g., continuous) • Not suitable when a systematic search is required • Useful for pure optimization problems • Find the best state according to an objective function • e.g., traveling salesman
Hill-climbing (aka gradient ascent/descent; steepest ascent) • Function Hill-Climbing(problem) • Returns a state that is a local maximum • Inputs: problem • Local variables: current (a node), neighbor (a node) • current ← Make-Node(Initial-State[problem]) • Loop do • neighbor ← highest-valued successor of current • If value(neighbor) ≤ value(current) then return state(current) • current ← neighbor • End
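A minimal sketch of the steepest-ascent loop above, assuming hypothetical callables neighbors(state) (returning the successor states) and value(state) (the objective function to maximize):

```python
def hill_climbing(initial_state, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until no
    neighbor improves on the current state (a local maximum)."""
    current = initial_state
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # no uphill move left: local maximum reached
        current = best
```

For a minimization problem (a cost function), the same loop works with value negated or with min and ≥ in place of max and ≤.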
What we think hill-climbing looks like vs. what we learn hill-climbing usually looks like
Problems for hill climbing (when higher values of the function are better we speak of maxima and objective functions; when lower values are better, of minima and cost functions) • Local maxima: a local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum • Ridges: a sequence of local maxima • Plateaux: an area of the state-space landscape where the evaluation function is flat
Hill-climbing search: 8-queens problem • A local minimum with h = 1
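The h quoted on the slide is, presumably, the usual attacking-pairs heuristic for n-queens (the slide only quotes its value, so this is an assumption). A small sketch, representing a state as state[c] = row of the queen in column c:

```python
def attacking_pairs(state):
    """h for n-queens: the number of pairs of queens attacking each other.
    state[c] is the row of the queen in column c (one queen per column)."""
    n = len(state)
    h = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diagonal = abs(state[c1] - state[c2]) == abs(c1 - c2)
            if same_row or same_diagonal:
                h += 1
    return h
```

Hill climbing on 8-queens then minimizes this h; the slide's board is a state with h = 1 from which every single-queen move makes h worse, i.e. a local minimum.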
Some solutions • Stochastic hill-climbing • Choose at random from among the uphill moves • First-choice hill climbing • Generate successors randomly until one is generated that is better than the current state • Simulated annealing • Generate a random move. Accept if it is an improvement. Otherwise accept with continually decreasing probability. • Local beam search • Keep track of k states rather than just one
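A sketch of the simulated-annealing variant, assuming the same hypothetical neighbors(state) and value(state) as before plus a cooling schedule schedule(t) that returns the temperature at step t (for instance a decaying exponential):

```python
import math
import random

def simulated_annealing(initial_state, neighbors, value, schedule):
    """Simulated annealing: always accept improving moves; accept worsening
    moves with probability exp(delta / T), which shrinks as T cools."""
    current = initial_state
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current                       # frozen: return the current state
        successor = random.choice(neighbors(current))
        delta = value(successor) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = successor                  # accept the move
        t += 1
```

Early on (high T) almost any move is accepted, which lets the search escape local maxima; as T falls it behaves more and more like plain hill climbing.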
Another point of view: The wrong peak has been climbed for over 22 years. According to the official Nepalese peak list, the highest trekking peak, Mera Peak (6654 m), has never been climbed. Despite having been a popular climb for over 22 years, all expeditions attacked a peak that has now been revealed as the wrong one. Both new Nepalese maps and the official peak list confirm this new information. A first attempt to climb the new peak was made in May 2000, but the peak remains unclimbed. You can read more in Rock and Ice Super Guide 104 and Climber UK Sept 2000. http://www.abc.net.au/science/k2/moments/s1086384.htm
Avoiding climbing the wrong peak • Random-restart hill climbing • Keep restarting from randomly generated initial states, stopping when goal is found
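A sketch of the random-restart wrapper, reusing the hill_climbing function sketched earlier and assuming a hypothetical random_state() that generates random initial states and is_goal(state) as the goal test:

```python
def random_restart_hill_climbing(random_state, neighbors, value, is_goal,
                                 max_restarts=100):
    """Random-restart hill climbing: rerun hill climbing from fresh random
    starts until a goal is found or the restart budget runs out."""
    best = None
    for _ in range(max_restarts):
        result = hill_climbing(random_state(), neighbors, value)
        if is_goal(result):
            return result
        if best is None or value(result) > value(best):
            best = result                 # remember the best local maximum so far
    return best
```

Each restart is independent, so with enough restarts the chance of only ever climbing "the wrong peak" becomes small.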
Online Search • Agent operates by interleaving computation and action • No time for extended deliberation • The agent only knows • Actions(s) • The step-cost function c(s,a,s’) • Goal-Test(s) • Cannot determine the successors of a state except by actually trying the actions
Assumptions • Agent recognizes a state it has seen before • Actions are deterministic • Competitive ratio: compare the cost the agent actually travels with the cost of the actual shortest path
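A worked example with purely illustrative numbers: if the agent physically travels 12 steps before reaching the goal, but the shortest path from start to goal is 4 steps, its competitive ratio on that instance is 12 / 4 = 3.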
What properties of search are desirable? • Will A* work? • Expand nodes in a local order • Depth first • Variant of greedy search • Differences from offline search: • Agent must physically backtrack • Record states to which the agent can backtrack but has not yet explored
Depth-first • OPEN = start node; CLOSED = empty • While OPEN is not empty do • Remove leftmost state from OPEN, call it X • If X = goal state, return success • Put X on CLOSED • SUCCESSORS = Successor function (X) • Remove any successors on OPEN or CLOSED • Put remaining successors on left end of OPEN • End while
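A sketch of the offline depth-first loop above, with the same assumed successors(state) interface as the A* sketch (here returning states only, since no costs are needed); OPEN behaves as a stack because new successors go on the left end:

```python
def depth_first_search(start, is_goal, successors):
    """Offline depth-first search: OPEN is used as a stack, so the most
    recently generated successors are expanded first."""
    open_list = [start]
    closed = set()
    while open_list:                       # while OPEN is not empty
        x = open_list.pop(0)               # remove leftmost state from OPEN
        if is_goal(x):
            return x                       # success
        closed.add(x)                      # put X on CLOSED
        new = [s for s in successors(x)
               if s not in closed and s not in open_list]
        open_list = new + open_list        # put remaining successors on left end of OPEN
    return None                            # no solution found
```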
Online DFS – setup • Inputs: s’, a percept that identifies the current state • Static: • result, a table indexed by action and state, initially empty • unexplored, a table that lists, for each visited state, the actions not yet tried • unbacktracked, a table that lists, for each visited state, the backtracks not yet tried • s, a: the previous state and action, initially null
Online DFS – the algorithm • If Goal-Test(s’) then return stop • If s’ is a new state then unexplored[s’] ← Actions(s’) • If s is not null then do • result[a, s] ← s’; result[reverse(a), s’] ← s • Add s to the front of unbacktracked[s’] • If unexplored[s’] is empty • Then if unbacktracked[s’] is empty return stop • Else a ← an action b such that result[b, s’] = pop(unbacktracked[s’]) • Else a ← pop(unexplored[s’]) • s ← s’ • Return a
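A sketch of the agent above as a Python class. The names actions(state), goal_test(state) and reverse(action) are assumptions about the environment interface (reverse gives the action that undoes a move, so the agent can physically backtrack); the agent is called once per percept and returns the next action, or None to mean stop.

```python
class OnlineDFSAgent:
    """Online depth-first agent: interleaves acting with bookkeeping of which
    actions are still unexplored and which states it can backtrack to."""

    def __init__(self, actions, goal_test, reverse):
        self.actions = actions            # actions(s) -> actions applicable in s
        self.goal_test = goal_test
        self.reverse = reverse            # reverse(a) -> action that undoes a
        self.result = {}                  # result[(a, s)] = observed next state
        self.unexplored = {}              # state -> actions not yet tried
        self.unbacktracked = {}           # state -> states to backtrack to, newest first
        self.s, self.a = None, None       # previous state and action

    def __call__(self, s_prime):
        if self.goal_test(s_prime):
            return None                                   # stop
        if s_prime not in self.unexplored:                # s' is a new state
            self.unexplored[s_prime] = list(self.actions(s_prime))
        if self.s is not None:
            self.result[(self.a, self.s)] = s_prime
            self.result[(self.reverse(self.a), s_prime)] = self.s
            self.unbacktracked.setdefault(s_prime, []).insert(0, self.s)
        if not self.unexplored[s_prime]:
            back = self.unbacktracked.get(s_prime, [])
            if not back:
                return None                               # nowhere left to go: stop
            target = back.pop(0)
            # choose an action b known to lead from s' back to target
            self.a = next(b for (b, st), res in self.result.items()
                          if st == s_prime and res == target)
        else:
            self.a = self.unexplored[s_prime].pop()
        self.s = s_prime
        return self.a
```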
Learning Real Time A* (LRTA*) • Augment hill-climbing with memory • Store current best estimate of cost from node to goal: H(s) • Initially, H(s) = h(s) • Update H(s) through experience • Estimated cost to reach the goal from s through neighbor s’ is c(s,a,s’) + H(s’) • Update: H(s) ← min over actions a of [ c(s,a,s’) + H(s’) ]
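A sketch of an LRTA*-style agent under the same assumed interface as the online DFS sketch, plus cost(s, a, s2) for step costs and h(s) for the initial heuristic; it records observed transitions in a result table and uses the optimistic estimate h(s) for actions it has never tried.

```python
class LRTAStarAgent:
    """LRTA* sketch: hill climbing augmented with memory. H[s] holds the
    current cost-to-goal estimate, initialized from h and updated from
    experience; result[(a, s)] records observed transitions."""

    def __init__(self, actions, cost, h, goal_test):
        self.actions, self.cost, self.h, self.goal_test = actions, cost, h, goal_test
        self.H = {}                      # learned cost-to-goal estimates
        self.result = {}                 # result[(a, s)] = observed next state
        self.s, self.a = None, None      # previous state and action

    def lrta_cost(self, s, a, s2):
        # Estimated cost to reach the goal from s via action a leading to s2.
        if s2 is None:                   # action never tried: optimistic estimate
            return self.h(s)
        return self.cost(s, a, s2) + self.H.get(s2, self.h(s2))

    def __call__(self, s_prime):
        if self.goal_test(s_prime):
            return None                  # stop
        self.H.setdefault(s_prime, self.h(s_prime))
        if self.s is not None:
            self.result[(self.a, self.s)] = s_prime
            # H(s) <- min over actions b of [ c(s,b,s') + H(s') ]
            self.H[self.s] = min(self.lrta_cost(self.s, b, self.result.get((b, self.s)))
                                 for b in self.actions(self.s))
        # Move toward the apparently best neighbor (hill climbing on H)
        self.a = min(self.actions(s_prime),
                     key=lambda b: self.lrta_cost(s_prime, b, self.result.get((b, s_prime))))
        self.s = s_prime
        return self.a
```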