
Brute Force Search




1. Brute Force Search
• Depth-first or breadth-first search
• Do not apply any heuristics to aid in solving the problem
• Therefore, these techniques are considered blind forms of search
• Usually very inefficient (intractable)
• Used when there is no knowledge to apply
• Examples: 8-queens problem, traveling salesman problem
• In this chapter, we will consider heuristic forms of search and look at several examples
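The following is a minimal Python sketch of this kind of blind search; `blind_search`, its parameters, and the toy graph are illustrative names rather than anything from the text. Breadth-first and depth-first differ only in which end of the frontier is expanded next, and neither consults a heuristic.

```python
# A minimal sketch of blind (brute-force) search over a generic state space.
# The successor function and the example graph below are made up for illustration.
from collections import deque

def blind_search(start, is_goal, successors, depth_first=False):
    """Breadth-first by default; pass depth_first=True for depth-first search.
    No heuristic is used -- states are expanded in the order they were generated."""
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.pop() if depth_first else frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # search space exhausted, no solution found

# Toy example: find a route through a small hypothetical graph.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F'], 'F': []}
print(blind_search('A', lambda s: s == 'F', lambda s: graph[s]))
```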

2. Heuristic Search
• Heuristic – a rule of thumb used to help guide search
• Heuristic function – a function applied to a state in a search space to indicate the likelihood of success if that state is achieved
• Best-first search – a variation on brute force search where a heuristic is used to guide the search
• Heuristic search can be powerful but is also very general
• We examine several forms of heuristic search here

3. Weak Methods
• These heuristic search methods are known as "weak methods" because of their generality and because they do not apply a great deal of knowledge
• They all attempt to reduce the amount of search required to solve the problem, to make the problem tractable
• We will consider:
  • Generate-and-test
  • Hill Climbing and variations
  • Best-first Search and variations
  • Constraint Satisfaction
  • Means-Ends Analysis

4. Generate-and-Test
• Generate a possible solution – a path through the search space, or a single state in the case of interpretation or diagnosis
• Test to see if the solution reaches a goal state
• If so, quit; otherwise repeat
• Obviously, this is not a very useful method – it relies on randomness
• It was used as part of Dendral's process for generating chemical analyses of a mass spectrogram reading – but Dendral also used constraint satisfaction and user feedback
• We will concentrate on more reasonable methods that apply heuristics
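A minimal sketch of the generate-and-test loop, assuming the caller supplies a random generator and a goal test; the bit-string example is invented purely for illustration.

```python
# A minimal generate-and-test sketch: repeatedly propose a candidate solution at
# random and test it against a goal predicate. Purely illustrative.
import random

def generate_and_test(generate, test, max_tries=10000):
    for _ in range(max_tries):
        candidate = generate()        # generate a possible solution
        if test(candidate):           # test whether it reaches the goal
            return candidate
    return None                       # give up after max_tries attempts

# Toy example: "guess" a 3-bit string whose bits sum to 3.
solution = generate_and_test(
    generate=lambda: [random.randint(0, 1) for _ in range(3)],
    test=lambda bits: sum(bits) == 3)
print(solution)
```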

5. A Heuristic Function
• Consider the 8-puzzle as an example problem
• We want to select the best move to make next
• What move is best? We might compare the current state to the goal state and see how they differ, then look at each possible move and determine which one takes us closest to the goal state
• Candidate moves in the slide's example: move 7 down, move 6 right, move 8 left
• [The slide shows the goal and current 8-puzzle board configurations]
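Below is a hedged sketch of such a comparison heuristic for the 8-puzzle, counting how many tiles differ from the goal; the function names and the 3x3-tuple board encoding (with 0 as the blank) are assumptions made for illustration.

```python
# Sketch of a "how far is the current state from the goal?" heuristic for the 8-puzzle.
# Boards are 3x3 tuples of tuples with 0 standing for the blank square (an assumed encoding).
def misplaced_tiles(state, goal):
    """Count tiles (ignoring the blank) that are not where the goal wants them."""
    return sum(1 for s, g in zip(sum(state, ()), sum(goal, ()))
               if s != 0 and s != g)

def best_move(moves, goal):
    """Pick the successor state that leaves the fewest misplaced tiles."""
    return min(moves, key=lambda m: misplaced_tiles(m, goal))
```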

6. Hill Climbing
• Visualize the choices as a 2-dimensional space, apply a heuristic, and seek the state that takes you uphill the maximum amount
• In actuality, many problems will be viewed in more than 2 dimensions
• In 3 dimensions, the heuristic worth represents "height"
• To solve a problem, pick a next state that moves you "uphill"
• Examples:
  • Simple Hill Climbing
  • Steepest Ascent Hill Climbing
  • Simulated Annealing

7. Simple Hill Climbing
• Given an initial state, perform the following
• Set the initial state to current
• Loop on the following until the goal is found or no more operators are available
  • Select an operator and apply it to create a new state
  • Evaluate the new state
  • If the new state is better than the current state, keep the move, making the new state the current state
• Once the loop is exited, either we will have found the goal or we will have run out of operators
• This algorithm only tries to improve at each selection; it does not necessarily find the best solution
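A Python sketch of this loop, under the assumption that a `successors` generator and a numeric `value` heuristic are available and that higher values are better; these names are placeholders, not from the text.

```python
# A minimal simple hill-climbing sketch: apply operators one at a time and accept
# the first successor that improves on the current state.
def simple_hill_climbing(start, successors, value, is_goal):
    current = start
    while not is_goal(current):
        improved = False
        for candidate in successors(current):      # consider operators one by one
            if value(candidate) > value(current):  # accept the first improvement found
                current = candidate
                improved = True
                break
        if not improved:                           # no operator improves: we are stuck
            return current
    return current                                 # goal reached
```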

8. Steepest Ascent Hill Climbing
• Here, we attempt to improve on the previous hill climbing algorithm
• Given an initial state, perform the following
• Set the initial state to current
• Loop on the following until the goal is found or a complete iteration occurs without change to the current state
  • Generate all successor states to the current state
  • Evaluate all successor states using the heuristic
  • Select the successor state that yields the highest heuristic value and perform that operator
• Notice that this algorithm can lead us to a state that has no better move; this is called a local maximum (other phenomena are plateaus and ridges)
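A corresponding steepest-ascent sketch, again assuming placeholder `successors` and `value` functions; it halts at a local maximum when no successor scores higher than the current state.

```python
# A steepest-ascent sketch: generate all successors, evaluate each, and move to the
# best one; stop at a local maximum where no successor improves on the current state.
def steepest_ascent(start, successors, value, is_goal):
    current = start
    while not is_goal(current):
        candidates = list(successors(current))     # generate all successor states
        if not candidates:
            return current                         # no operators apply
        best = max(candidates, key=value)          # evaluate all, keep the best
        if value(best) <= value(current):          # local maximum (or plateau)
            return current
        current = best
    return current
```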

9. Examples
• Blocks World heuristic: add 1 point for every block that is resting on the thing it is supposed to be resting on and subtract 1 point for every block that is sitting on the wrong thing
• This is a local heuristic; it only considers a block in isolation, not the substructure
• Better heuristic: for each block that has the correct substructure, add 1 point for every block in the substructure, and for each block that has an incorrect substructure, subtract 1 point for every block in the substructure
• 8-puzzle heuristic: add 1 point for each tile that is in its proper location and subtract 1 point for each tile in a wrong location
• Better heuristic: add 1 point for each tile that is in its proper location and subtract 1 point for each move that it would take to move a tile from its improper location to its proper location
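The stronger 8-puzzle heuristic can be sketched as follows; the scoring convention (plus one per correctly placed tile, minus the number of moves needed per misplaced tile) mirrors the bullet above, while the 3x3-tuple board encoding is an assumption for illustration.

```python
# Sketch of the stronger 8-puzzle heuristic: reward tiles already in place and
# penalize each misplaced tile by its Manhattan distance to its goal square.
# Boards are 3x3 tuples of tuples with 0 standing for the blank (an assumed encoding).
def distance_heuristic(state, goal):
    goal_pos = {tile: (r, c) for r, row in enumerate(goal)
                             for c, tile in enumerate(row)}
    score = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile == 0:
                continue
            gr, gc = goal_pos[tile]
            if (r, c) == (gr, gc):
                score += 1                          # tile in its proper location
            else:
                score -= abs(r - gr) + abs(c - gc)  # minus the moves needed to fix it
    return score
```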

10. Simulated Annealing
• A variation where some downhill moves can be made early on in the search
• The idea is that early in the search we haven't invested much yet, so we can afford some downhill moves
• In the 8-puzzle, we have to be willing to "mess up" part of the solution to move other tiles into better positions
• The heuristic worth of each state is multiplied by a probability, and the probability becomes more stable as time goes on (see the formula on page 70)
• Simulated annealing is also applied to neural networks
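The exact formula from page 70 is not reproduced here; the sketch below instead uses the common acceptance rule of taking a downhill move with probability exp(delta / T), where the temperature T shrinks as the search goes on. The parameter values and function names are placeholders.

```python
# A common simulated-annealing sketch (not the book's exact formula): uphill moves
# are always taken, while downhill moves are taken with probability exp(delta / T),
# which becomes small as the temperature T cools over time.
import math, random

def simulated_annealing(start, successors, value, temperature=10.0, cooling=0.95,
                        steps=1000):
    current = start
    for _ in range(steps):
        candidates = list(successors(current))
        if not candidates:
            break
        candidate = random.choice(candidates)
        delta = value(candidate) - value(current)
        # Downhill moves (delta < 0) are accepted mostly early on, while T is high.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        temperature = max(temperature * cooling, 1e-6)   # geometric cooling schedule
    return current
```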

11. Best-first Search
• One problem with hill climbing is that old states are thrown away as you move uphill, yet some of those old states may turn out to be better than the results of a few uphill moves
• The algorithm uses two sets: Open nodes (which can still be selected) and Closed nodes (already selected)
• Start with Open containing the initial state
• While current <> goal and there are nodes left in Open do
  • Set current = best node in Open and move current to Closed
  • Generate current's successors
  • Add successors to Open if they are not already in Open or Closed
• [The slide shows an example search tree with a heuristic value attached to each node: A (5), B (4), C (3), D (6), G (6), H (4), E (2), F (3), I (3), J (8)]
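A sketch of best-first search with explicit Open and Closed sets, assuming lower heuristic values mean "closer to the goal" (as with an estimated distance); the priority queue and tie-breaking counter are implementation choices, not part of the slide.

```python
# Best-first search sketch using the Open/Closed sets described above. The heuristic
# h(state) plays the role of the numbers shown next to each node (lower = better here).
import heapq, itertools

def best_first_search(start, is_goal, successors, h):
    counter = itertools.count()                    # tie-breaker so the heap never compares states
    open_heap = [(h(start), next(counter), start, [start])]   # Open: nodes still selectable
    closed = set()                                 # Closed: nodes already selected
    while open_heap:
        _, _, current, path = heapq.heappop(open_heap)   # best (lowest h) node in Open
        if current in closed:
            continue
        if is_goal(current):
            return path
        closed.add(current)
        for nxt in successors(current):
            if nxt not in closed:
                heapq.heappush(open_heap, (h(nxt), next(counter), nxt, path + [nxt]))
    return None
```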

12. Variations of Best-first
• A* algorithm – add to the heuristic the cost of getting to that node
  • For instance, if one solution takes 20 steps and another takes 10 steps, even though the 10-step solution may not be as good, it takes less effort to get there
• Problem reduction – use AND/OR graphs where some steps allow choices and others require combined steps
• AO* algorithm – an A* variation for AND/OR graphs
• Alpha-beta pruning – use a threshold to remove any possible steps that look too poor to consider
• Agendas – evaluate different AND/OR paths using different heuristics; the agenda is a list of tasks that can be applied to a given state in the search space
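The A* variation can be sketched by ordering the Open list on f(n) = g(n) + h(n), where g is the cost already paid to reach the node and h is the heuristic estimate; the `successors` signature returning (state, cost) pairs is an assumption for illustration.

```python
# A* sketch: like best-first search, but nodes are ordered by f(n) = g(n) + h(n),
# so the cost of getting to a node counts against it.
import heapq, itertools

def a_star(start, is_goal, successors, h):
    """successors(state) is assumed to yield (next_state, step_cost) pairs."""
    counter = itertools.count()                    # tie-breaker for the heap
    open_heap = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while open_heap:
        _, _, g, current, path = heapq.heappop(open_heap)
        if is_goal(current):
            return path, g                         # the path plus its total cost
        for nxt, cost in successors(current):
            new_g = g + cost                       # cost spent so far to reach nxt
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(open_heap,
                               (new_g + h(nxt), next(counter), new_g, nxt, path + [nxt]))
    return None, float('inf')
```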

13. Constraint Satisfaction
• Many branches of a search space can be ruled out due to constraints
• Constraint satisfaction is a form of best-first search where constraints are applied to eliminate branches
• Consider the cryptarithmetic problem SEND + MORE = MONEY: we can rule out several possibilities for some of the letters, e.g. M = 1, S = 8 or 9, O = 0 or 1, O = 0, ...
• After making a decision, propagate any new constraints that come into existence
• Constraint satisfaction can also be applied to planning, where a certain partial plan may exceed specified constraints and so can be eliminated
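To make the constraints concrete, here is a brute-force-with-constraints sketch for SEND + MORE = MONEY: it enumerates digit assignments and rejects any that violate the no-leading-zero and arithmetic constraints. A real constraint-satisfaction solver would propagate deductions such as M = 1 before searching rather than checking them afterwards.

```python
# Sketch: solve SEND + MORE = MONEY by searching over distinct-digit assignments
# while enforcing the obvious constraints (no leading zeros, the sum must hold).
from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:           # constraint: no leading zeros
            continue
        send  = a['S']*1000 + a['E']*100 + a['N']*10 + a['D']
        more  = a['M']*1000 + a['O']*100 + a['R']*10 + a['E']
        money = a['M']*10000 + a['O']*1000 + a['N']*100 + a['E']*10 + a['Y']
        if send + more == money:                 # constraint: the arithmetic must hold
            return a
    return None

print(solve_send_more_money())   # finds M = 1, O = 0, S = 9, etc. (exhaustive, so a bit slow)
```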

14. Means-Ends Analysis
• Compare the current state to the goal state
• Pick the operator which moves the problem most towards the goal state
• Repeat until the goal state has been reached
• Forward and backward chaining may be used
• Subgoaling is required: generating intermediate states or steps
• Means-ends analysis is often used in planning problems, but it can be applied in other situations too, such as proving mathematical theorems
• Means-ends analysis is much like top-down design in programming
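A minimal means-ends sketch under an invented encoding (states as sets of facts, operators as add/delete pairs); it repeatedly applies whichever operator most reduces the difference between the current state and the goal. This is only one simple way to realize the idea, not the book's formulation.

```python
# Means-ends sketch: measure the difference between the current state and the goal,
# and repeatedly apply the operator that reduces that difference the most.
# States are sets of facts; operators map a name to (adds, deletes) -- an assumed encoding.
def means_ends(current, goal, operators):
    def difference(state):
        return len(goal - state)                 # number of unmet goal conditions
    plan = []
    while difference(current) > 0:
        best_op, best_state = None, None
        for name, (adds, deletes) in operators.items():
            candidate = (current - deletes) | adds
            if best_state is None or difference(candidate) < difference(best_state):
                best_op, best_state = name, candidate
        if difference(best_state) >= difference(current):
            return None                          # no operator reduces the difference
        plan.append(best_op)
        current = best_state
    return plan                                  # sequence of operators reaching the goal
```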
