Explore local search algorithms, such as hill climbing search, simulated annealing, and genetic algorithms, to find optimal solutions in large or infinite state spaces for optimization problems.
Chapter 04: Artificial Intelligence. Informed Search Algorithms and Beyond Classical Search
Beyond Classical Search • Local search algorithms/Iterative Improvement Algorithms and Optimization Problems • Hill-Climbing Search • Simulated Annealing • Genetic Algorithms
Beyond Classical Search • We have seen methods that systematically explore the search space, possibly using principled pruning (e.g., A*) • Achieved by keeping one or more paths in memory and • by recording which alternatives have been explored at each point along the path. • When a goal is found, the path to the goal also constitutes a solution to the problem.
Beyond Classical Search • The best of these methods can handle search spaces of up to 10^100 states / ~1,000 binary variables (i.e., 1000 bits, ballpark figure) • What if much larger search spaces are needed? • Search spaces for some real-world problems may be much larger; e.g., 10^30,000 states as in certain reasoning and planning tasks • Some of these problems can be solved by Iterative Improvement Methods
Local search / Iterative Improvement Algorithms • In many optimization problems the goal state itself is the solution • Just want to reach a goal state (find a goal) • the solution path to the goal is irrelevant • The state space is a set of complete configurations • Search is about finding • the optimal configuration (as in TSP), or • a feasible configuration (as in scheduling problems), or • at least one that satisfies the goal constraints, e.g., n-queens – reaching the final configuration of queens. • For these cases, use local search (iterative improvement) methods.
Local Search / Iterative Improvement Algorithms – what are they? • Generally, they: • operate on a single current node (rather than multiple paths) • move only to the neighbors of that node in an effort to improve a solution. • Do not retain the paths followed by the search. • Although not systematic, local search has two advantages • It uses very little memory – usually a constant amount. • It can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.
Local Search Algorithms – what are they? • Generally, they: • are useful for finding goals, and • are useful for solving pure optimization problems • The aim is to find the best state according to the evaluation (or objective) function h. • An evaluation (or objective) function h must be available that measures the quality of each state. • Main Idea: Start with a random initial configuration and make small, local changes to it that improve its quality. • Many optimization problems (which have no goal test or path cost) do not fit the "standard" search model of chapter 03.
Local Search: The State-Space Landscape • Goal is to find the global minimum (minimize cost) or the global maximum (maximize the objective function). Figure 4.1 A one-dimensional state-space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum. Hill-climbing search modifies the current state to try to improve it, as shown by the arrow.
Local Search: The State-Space Landscape • To understand local search, consider the state-space landscape, as shown in Figure 4.1. • A landscape has both • “location” (defined by the state) and • “elevation” (defined by the value of the heuristic cost function or objective function). • If elevation corresponds to cost, then the aim is to find the lowest valley – a global minimum; • if elevation corresponds to an objective function, then the aim is to find the highest peak – a global maximum. • Local search algorithms explore this landscape. • A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.
Local Search: The State-Space Landscape • Ideally, the evaluation function h should be monotonic: the closer a state is to an optimal goal state, the better its h-value. • Each state can be seen as a point on a surface. • The search consists in moving on the surface, looking for its highest peaks (or lowest valleys): the optimal solutions.
Iterative Improvement Algorithms: Hill Climbing Search Algorithm (steepest-ascent version) • Hill-climbing search (HC) is the most basic local search technique. It is simply a loop that: • At each step, moves in the direction of increasing value (i.e., uphill). • Stops when no successor has a higher value (i.e., it reaches a "peak"). • The algorithm does not maintain a search tree, so it • uses a single node (the current node) and • updates its state and value (of the objective function) as better states are discovered. • It does not look ahead beyond the immediate neighbors of the current state.
Hill Climbing Search Aka: gradient descent/ascent search "Like climbing Everest in thick fog with amnesia" • Steepest-ascent version: Receives: problem Returns: a state that is a local maximum The algorithm is given in Figure 4.2
Aka: gradient descent/ascent search "Like climbing Everest in thick fog with amnesia" Hill Climbing Search
function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← MAKE-NODE(problem.INITIAL-STATE)
  loop do
    neighbor ← a highest-valued successor of current
    if neighbor.VALUE ≤ current.VALUE then return current.STATE
    current ← neighbor
Figure 4.2 The hill-climbing search algorithm, which is the most basic local search technique. At each step the current node is replaced by the best neighbor; in this version, that means the neighbor with the highest VALUE, but if a heuristic cost estimate h is used, we would find the neighbor with the lowest h.
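As a concrete illustration (not taken from the textbook), here is a minimal Python sketch of this steepest-ascent loop; the successors and value arguments stand for hypothetical, problem-specific functions.

```python
# A minimal sketch of steepest-ascent hill climbing, mirroring Figure 4.2.
# `successors(state)` and `value(state)` are assumed, problem-specific callables.

def hill_climbing(initial_state, successors, value):
    """Return a state that is a local maximum of `value`."""
    current = initial_state
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current                      # no moves available
        best = max(neighbors, key=value)        # a highest-valued successor
        if value(best) <= value(current):       # no uphill move left: a "peak"
            return current
        current = best

# Tiny usage example: climb a one-dimensional landscape over the integers.
peak = hill_climbing(0,
                     successors=lambda x: [x - 1, x + 1],
                     value=lambda x: -(x - 7) ** 2)   # single maximum at x = 7
print(peak)                                           # -> 7
```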
Travelling salesman problem • The travelling salesman problem (TSP) asks the following question: • "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" • an NP-hard problem in combinatorial optimization, • important in operations research and theoretical computer science.
Local Search Example: TSP (Travelling Salesperson Problem) h = length of the tour Strategy: Start with any complete tour, perform pairwise exchanges. Variants of this approach get within 1% of optimal very quickly with thousands of cities
Example: TSP • Traveling salesman • Start with any complete tour • Operator: Perform pairwise exchanges • Figure: four cities c1–c4 with pairwise distances; the tour c1, c3, c4, c2 has cost 17.
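The pairwise-exchange strategy can be sketched in Python as follows; the distance matrix below is illustrative rather than read off the slide's diagram, and the function names are our own.

```python
import itertools

# A sketch of hill climbing on TSP with the pairwise-exchange operator
# (swap the positions of two cities in the tour); h = tour length, minimized.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def pairwise_exchanges(tour):
    """All tours obtained by swapping two cities."""
    for i, j in itertools.combinations(range(len(tour)), 2):
        neighbor = list(tour)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        yield tuple(neighbor)

def hill_climb_tsp(tour, dist):
    current = tuple(tour)
    while True:
        best = min(pairwise_exchanges(current), key=lambda t: tour_length(t, dist))
        if tour_length(best, dist) >= tour_length(current, dist):
            return current                      # local minimum of tour length
        current = best

# Illustrative symmetric distances for four cities 0..3 (hypothetical values).
dist = [[0, 9, 5, 10],
        [9, 0, 6, 3],
        [5, 6, 0, 9],
        [10, 3, 9, 0]]
best_tour = hill_climb_tsp((0, 1, 2, 3), dist)
print(best_tour, tour_length(best_tour, dist))
```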
Local Search Example: n-queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal • h = number of conflicts • Strategy/Operator: Move a queen to reduce the number of conflicts (the figure shows h decreasing, e.g., h = 5 → h = 2 → h = 0) • Almost always solves n-queens problems almost instantaneously for very large n, e.g., n = 10^6
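Below is a minimal Python sketch of the conflict heuristic h used in this example; it assumes a state is represented as a tuple giving the row of the queen in each column (one queen per column), a convention the slide does not spell out.

```python
import itertools

# A minimal sketch of the n-queens heuristic: h = number of pairs of queens
# attacking each other. A state is a tuple where state[c] is the row of the
# queen in column c (one queen per column) -- an assumed representation.

def conflicts(state):
    return sum(1
               for c1, c2 in itertools.combinations(range(len(state)), 2)
               if state[c1] == state[c2]                   # same row
               or abs(state[c1] - state[c2]) == c2 - c1)   # same diagonal

print(conflicts((0, 1, 2, 3, 4, 5, 6, 7)))  # all on one diagonal -> 28 pairs
print(conflicts((4, 6, 0, 2, 7, 5, 3, 1)))  # a known 8-queens solution -> 0
```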
Iterative Improvement Algorithms The 8-queens problem: • The goal of the 8-queens problem is • to place eight queens on a chessboard such that no queen attacks any other, directly or indirectly. • Only the final state counts, regardless of the path cost. • Two main kinds of formulation: • An incremental-state formulation • involves operators that augment the state description by • starting with an empty state and • adding a queen to the state with each action. • A complete-state formulation • starts with all 8 queens on the board and • moves them around.
Iterative Improvement Algorithms The 8-queens problem: • The incremental-state formulation • States: a state is any arrangement of 0 to 8 queens on the board. • Initial state: no queens on the board. • Actions: Add a queen to any empty square. • Transition model: Returns the board with a queen added to the specified square. • Goal state: 8 queens on the board, none attacked. • There are 64!/56! = 64 × 63 × ... × 57 ≈ 1.8 × 10^14 possible sequences to investigate. • Can this be improved? Q: How do you specify states?
An incremental-state formulation • Formulation #1 • States: all arrangements of 0, 1, 2, ..., 8 queens on the board • Initial state: 0 queens on the board • Successor function: each of the successors is obtained by adding one queen in an empty square • Path cost: irrelevant • Goal test: 8 queens are on the board, with no queens attacking each other • 64 × 63 × ... × 57 ≈ 1.8 × 10^14 states (possible sequences to investigate) …
Iterative Improvement Algorithms The 8-queens problem: Formulation #2 • Formulation #1 places a queen in any open square; formulation #2 places queens column by column, from left to right. • Can the incremental-state formulation be improved? • A better formulation would prohibit placing a queen in any square that is already attacked. • States: all possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another. • Action: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen. • This formulation reduces the 8-queens state space from 64!/56! ≈ 1.8 × 10^14 to 2,057.
An incremental-state formulation • Formulation #2 • States: all arrangements of k = 0, 1, 2, ..., 8 queens in the k leftmost columns with no two queens attacking each other • Initial state: 0 queens on the board • Successor function: each successor is obtained by adding one queen in any square that is not attacked by any queen already in the board, in the leftmost empty column • Path cost: irrelevant • Goal test: 8 queens are on the board • 2,057 states …
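As a sanity check on these counts, here is a short Python sketch (the function names are ours) that computes formulation #1's sequence count and enumerates formulation #2's states directly.

```python
import math

# Formulation #1: number of possible placement sequences of 8 queens on 64 squares.
print(math.prod(range(57, 65)))          # 64 * 63 * ... * 57, about 1.8e14

# Formulation #2: enumerate all non-attacking placements of k = 0..8 queens,
# one per column, in the k leftmost columns.
def attacks(rows, col, row):
    """Would a queen at (col, row) attack a queen already placed in columns 0..col-1?"""
    return any(r == row or abs(r - row) == col - c for c, r in enumerate(rows))

def count_states(n=8):
    frontier, total = [()], 1            # the empty board counts as one state
    for col in range(n):
        frontier = [rows + (row,) for rows in frontier
                    for row in range(n) if not attacks(rows, col, row)]
        total += len(frontier)
    return total

print(count_states())                    # -> 2057
```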
n-Queens Problem • A solution is a goal node, not a path to this node (typical of a design problem) • Number of states in the state space: • 8-queens: 2,057 • 100-queens: ~10^52 • But techniques exist to solve n-queens problems efficiently for large values of n • They exploit the fact that there are many solutions well distributed in the state space
n-Queens Problem – illustrating hill climbing • Use a complete-state formulation – typically used by local search algorithms. • States are potential solutions. • For the 8-queens problem, each state has 8 queens on the board, one per column. • For the successor function, the successors of a state are all possible states generated by moving a single queen to another square in the same column (so each state has 8 × 7 = 56 successors). • The heuristic cost function h is the number of pairs of queens that are attacking each other, either directly or indirectly. • The global minimum of this function is zero, which occurs only at perfect solutions.
A complete-state formulation: Figure 4.3(a) shows a state with h = 17. The figure also shows the values of all its successors, with the best successors having h = 12. Hill-climbing algorithms typically choose randomly among the set of best successors if there is more than one.
Hill-climbing search: 8-queens problem Fig 4.3 (a) An 8-queens state with heuristic cost estimate h = 17. The value of h for each possible successor state is obtained by moving a queen within its column. The best moves (h = 12) are marked. • h = number of pairs of queens that are attacking each other, either directly or indirectly • h = 17 for the above state
Hill-climbing search: 8-queens problem Fig 4.3 (b) A local minimum in the 8-queens state space; the state has h = 1 but every successor has a higher cost. • A local minimum with h = 1: every successor is worse.
Successor function • Use a complete-state formulation: • Typically used by local search algorithms. • States are potential solutions. • For the 8-queens problem • Each state has 8 queens on the board, one per column. • The successors of a state are all possible states generated by moving a single queen to another square in the same column (so each state has 8 × 7 = 56 successors). • A successor function (transition model): Given a state, generates its successor states
A successor function (transition model): Given a state, generates its successor states. There are 8 possible squares for the queen in the first column, so moving it within that column yields 7 successors.
Use a complete-state formulation – … The heuristic cost function h is the number of pairs of queens that are attacking each other, either directly or indirectly. The global minimum of this function is zero, which occurs only at perfect solutions (i.e., when we have an actual solution).
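Putting the pieces together, here is a hedged Python sketch of steepest-descent hill climbing on this complete-state formulation; the representation (one row index per column) and the helper names are our own.

```python
import itertools
import random

# A sketch of steepest-descent hill climbing on the complete-state 8-queens
# formulation. A state is a tuple of 8 row indices, one queen per column;
# h counts the pairs of queens attacking each other.

def h(state):
    return sum(1 for c1, c2 in itertools.combinations(range(len(state)), 2)
               if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1)

def successors(state):
    """All states reachable by moving one queen within its column (8 x 7 = 56)."""
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def hill_climbing(state):
    """Move to the best neighbor until no neighbor improves h."""
    while True:
        best = min(successors(state), key=h)
        if h(best) >= h(state):
            return state                 # h = 0 means a solution; otherwise a local minimum
        state = best

random.seed(0)                           # illustrative run from a random state
start = tuple(random.randrange(8) for _ in range(8))
result = hill_climbing(start)
print(h(start), "->", h(result))         # often ends with h > 0 (stuck), sometimes 0
```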
Hill Climbing (a.k.a. gradient ascent/descent search) Summary • "Like climbing Mount Everest in thick fog with amnesia"
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node; neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE(problem))
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE(neighbor) ≤ VALUE(current) then return STATE(current)
    current ← neighbor
  end
Figure 4.2 The hill climbing algorithm.
Hill Climbing (gradient ascent/descent search) Summary • Fig 4.2 The hill-climbing search algorithm which is the most basic local search technique. • The algorithm does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function. • Hill climbing does not look ahead beyond the immediate neighbors of the current state.
Hill Climbing (gradient ascent/descent search) Summary • Fig 4.2 The hill-climbing search algorithm, which is the most basic local search technique. • At each step the current node is replaced by the best neighbor; • in this version, that means the neighbor with the highest VALUE. • If a heuristic cost estimate h is used, we would find the neighbor with the lowest h. • It is simply a loop that continually moves in the direction of increasing value – that is, uphill. It terminates when it reaches a "peak" where no neighbor has a higher value.
Hill-Climbing: Shortcomings • Depending on the initial state, it can get stuck on local maxima • It may converge very slowly • In continuous spaces, choosing the step-size is non-obvious However, true local optima are surprisingly rare in high-dimensional spaces. There often is an escape to a better state
Hill Climbing Search - (Greedy Local Search) • HC is sometimes called greedy local search, since it grabs a good neighbor without thinking about where to go next. • Works well in some cases; e.g., it can get from the state in Figure 4.3(a) to that in 4.3(b) in 5 steps. Unfortunately it often gets stuck. For random 8-queens instances, 86% get stuck and 14% are solved, but it takes only 3-4 steps on average to find out – not bad for a state space with 8^8 ≈ 17 million states
Figure 4.1 A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. A hill-climbing search might get lost on the plateau.
Hill Climbing Search Gets stuck due to: Local maxima – the search climbs towards a peak but stops before reaching the global maximum (i.e., at a local minimum for the cost h), e.g., the state in Figure 4.3(b): every move of a single queen makes the situation worse. Plateaux – flat areas (a flat local maximum) where there is no uphill exit, so the search just wanders around. A plateau could be a shoulder or "mesa". A hill-climbing search might get lost on the plateau. Ridges – result in a sequence of local maxima (difficult for greedy algorithms to navigate).
Figure 4.4 Ridges cause difficulties for hill climbing: The grid of states (dark circles) is superimposed on a ridge rising from left to right, creating a sequence of local maxima that are not directly connected to each other. From each local maximum, all the available actions point downhill.
Hill Climbing (Greedy Local Search) • The HC algorithm given above stops when it reaches a plateau. • Can try sideways moves – moves to states with the same value – in the hope that the plateau is really a shoulder. • Must be careful not to allow an infinite loop if it is a real plateau. • Common to limit the number of consecutive sideways moves. • E.g., 100 consecutive sideways moves for 8-queens. • The success rate goes up to 94%, but the average number of steps is 21 for solutions and 64 for failures.
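As an illustration, here is a hedged Python sketch of this sideways-move variant, using the same assumed column-tuple representation as the earlier sketches; the cap of 100 consecutive sideways moves follows the slide, and the helper names are ours.

```python
import itertools
import random

# A sketch of hill climbing with a cap on consecutive sideways moves
# (100 for 8-queens, following the slide). Same assumed column-tuple
# representation and helper names as the earlier sketches.

def h(state):
    return sum(1 for c1, c2 in itertools.combinations(range(len(state)), 2)
               if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1)

def successors(state):
    n = len(state)
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def hill_climbing_sideways(state, max_sideways=100):
    sideways = 0
    while True:
        neighbors = successors(state)
        best_h = min(h(s) for s in neighbors)
        if best_h > h(state):
            return state                         # genuine local minimum
        if best_h == h(state):
            sideways += 1                        # sideways move on a plateau
            if sideways > max_sideways:
                return state                     # give up: probably a real plateau
        else:
            sideways = 0                         # made progress: reset the counter
        state = random.choice([s for s in neighbors if h(s) == best_h])

random.seed(1)
start = tuple(random.randrange(8) for _ in range(8))
print(h(hill_climbing_sideways(start)))          # usually 0 (about 94% of random starts)
```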