
Problem Solving Using Search

Learn how to reduce problems to graphs and navigate through nodes to find solutions. Understand different search algorithms and problem representation in the context of graph theory.


Presentation Transcript


  1. Problem Solving Using Search
  - Reduce a problem to one of searching a graph.
  - View problem solving as a process of moving through a sequence of problem states to reach a goal state.
  - Move from one state to another by taking an action.
  - A sequence of actions and states leading to a goal state is a solution to the problem.
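To make the state/action/solution vocabulary concrete, here is a minimal sketch (not from the slides; the class and method names are illustrative assumptions) of how a search problem might be represented in Python:

```python
# Illustrative interface for a search problem; names are assumptions.
class SearchProblem:
    def initial_state(self):
        """Return the state the search starts from."""
        raise NotImplementedError

    def is_goal(self, state):
        """Return True if `state` is a goal state."""
        raise NotImplementedError

    def actions(self, state):
        """Return the actions that can be taken in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by taking `action` in `state` (deterministic)."""
        raise NotImplementedError

# A solution is then a sequence of actions (and the states they produce)
# leading from initial_state() to a state for which is_goal() is True.
```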

  2. Trees
  - A tree is made up of nodes and links connected so that there are no loops (cycles). Nodes are sometimes called vertices; links are sometimes called edges.
  - A tree has a root node: where the tree "starts".
  - Every node except the root has a single parent (aka direct ancestor). An ancestor node is a node that can be reached by repeatedly going to a parent.
  - Each node (except a terminal, aka leaf) has one or more children (aka direct descendants). A descendant node is a node that can be reached by repeatedly going to a child.
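As a small sketch (names are illustrative, not from the slides), a tree node can be represented directly in terms of its parent and children:

```python
# Illustrative tree node: every node except the root has exactly one parent,
# and a leaf (terminal) node has no children.
class TreeNode:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent       # None for the root
        self.children = []         # empty for a leaf
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """All nodes reachable by repeatedly going to a parent."""
        node, result = self.parent, []
        while node is not None:
            result.append(node)
            node = node.parent
        return result
```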

  3. Graphs
  - A graph is a set of nodes connected by links. But, unlike trees, loops are allowed, and a node may have multiple parents.
  - Two kinds of graphs: directed graphs, where links have a direction, and undirected graphs, where links have no direction.
  - A tree is a special case of a graph.
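A common way to store such a graph is an adjacency list; the small example below (made up for illustration) shows a directed graph with a cycle, and the same idea for an undirected graph:

```python
# Directed graph as an adjacency list: graph[u] lists the nodes reachable
# from u by following one link.
directed = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],   # cycle A -> B -> C -> A, which a tree would not allow
}

# An undirected graph can be stored the same way by listing each link
# in both directions.
undirected = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B"],
}
```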

  4. Representing Problems with Graphs
  - Nodes can represent cities connected by direct flights; find the route from city A to city B that involves the fewest hops.
  - Nodes can also represent a state of the world, e.g., which blocks are on top of which in a blocks scene.
  - The links represent actions that result in a change from one state to another.
  - A path through the graph represents a plan of action: a sequence of steps that tells how to get from an initial state to a goal state.
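For the fewest-hops flight problem, breadth-first search over the city graph finds a path with the fewest links. A sketch with a made-up flight table:

```python
from collections import deque

def fewest_hops(flights, start, goal):
    """Breadth-first search over a city graph; returns a route with the
    fewest hops from start to goal, or None if no route exists."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        city = path[-1]
        if city == goal:
            return path
        for nxt in flights.get(city, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical flight table for illustration.
flights = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(fewest_hops(flights, "A", "D"))   # ['A', 'B', 'D'] (2 hops)
```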

  5. Problem Solving with Graphs
  - Assume that each state is complete: it represents all (and preferably only) the relevant aspects of the problem to be solved. In the flight-planning problem, the identity of the airport is sufficient; its street address is not necessary.
  - Assume that actions are deterministic: we know exactly what the state will be after an action has been taken.
  - Assume that actions are discrete: we don't have to represent what happens while the action is happening. We assume that a flight gets us to the scheduled destination without caring what happens during the flight.

  6. Classes of Search
  - Uninformed, any-path (depth-first, breadth-first): look at nodes in the search tree in a specific order, independent of the goal, and stop when the first path to a goal state is found.
  - Informed, any-path: exploit a task-specific measure of goodness to try to reach a goal state more quickly.
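A minimal sketch of uninformed, any-path search: depth-first and breadth-first differ only in whether the frontier is treated as a stack (LIFO) or a queue (FIFO). The helper names are assumptions.

```python
from collections import deque

def any_path_search(graph, start, is_goal, depth_first=True):
    """Return the first path found from start to a goal node, or None.
    `graph[u]` is the list of u's neighbors; `is_goal` is a predicate."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        # Stack behavior gives depth-first search; queue behavior gives breadth-first.
        path = frontier.pop() if depth_first else frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path                      # stop at the first goal found
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```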

  7. Classes of Search
  - Uninformed, optimal: guaranteed to find the "best" path, as measured by the sum of weights on the graph edges; does not use any information beyond what is in the graph definition.
  - Informed, optimal: guaranteed to find the best path, but exploits heuristic ("rule of thumb") information to find the path faster than uninformed methods.
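One standard informed, optimal method is A* search; setting the heuristic to zero reduces it to uniform-cost search, an uninformed, optimal method. A sketch, assuming `graph[u]` is a list of `(neighbor, edge_weight)` pairs and `h` is an admissible heuristic:

```python
import heapq

def a_star(graph, start, goal, h):
    """Optimal search on a weighted graph. With h(n) == 0 for all n, this
    is uniform-cost search; with an admissible h it is A*."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, w in graph.get(node, []):
            new_g = g + w
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")
```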

  8. Scoring Function
  - Assigns a numerical value to a board position. The set of pieces and their locations represents a single state in the game.
  - Represents the likelihood of winning from a given board position.
  - A typical scoring function is linear: a weighted sum of features of the board position.
  - Each feature is a number that measures a specific characteristic of the position. "Material" is some measure of which pieces one has in a given position; another feature might be a number that represents the distribution of the pieces in a position.
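A minimal sketch of a linear scoring function; the feature names and weights below are assumptions for illustration, not values from the slides:

```python
# Linear evaluation: a weighted sum of numeric board features.
WEIGHTS = {
    "material":    1.0,   # e.g. my piece values minus the opponent's
    "mobility":    0.1,   # number of legal moves available
    "king_safety": 0.5,
}

def score(features):
    """`features` maps feature names to numbers measured from the position."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

print(score({"material": 3, "mobility": 12, "king_safety": -1}))  # 3.7
```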

  9. Scoring Function
  - To determine the next move: compute the score for all possible next positions and select the one with the highest score.
  - If we had a perfect evaluation function, playing chess would be easy! Such a function exists in principle, but nobody knows how to write it or compute it directly.
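As a sketch, that one-ply greedy rule looks like the following; `legal_moves` and `apply_move` are assumed, game-specific helpers, and the result is only as good as the scoring function:

```python
# Greedy one-ply play: score every position reachable in one move, pick the best.
def choose_move(position, legal_moves, apply_move, score):
    return max(legal_moves(position),
               key=lambda move: score(apply_move(position, move)))
```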

  10. Min-Max Algorithm
  - Limited look-ahead plus scoring. Suppose I look ahead two moves (2-ply): first me (relative level 1), then you (relative level 2).
  - For each group of children at level 2, find the minimum score and assign that number to the parent. It represents the worst that can happen to me after your move from that parent position.
  - I then pick the move that lands me in the position where you can do the least damage to me: the position with the maximum of the values computed in the previous step.
  - This can be implemented for any number (depth) of min-max level pairs.
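A minimal sketch of depth-limited min-max, assuming game-specific `children` and `score` helpers (these names are not from the slides):

```python
def minimax(position, depth, maximizing, children, score):
    """Limited look-ahead min-max. `children(position)` yields the positions
    reachable in one move; `score` is the static evaluation at the horizon."""
    succ = list(children(position))
    if depth == 0 or not succ:
        return score(position)
    values = [minimax(child, depth - 1, not maximizing, children, score)
              for child in succ]
    # My levels take the maximum; the opponent's levels take the minimum,
    # i.e. the worst that can happen to me after the opponent's reply.
    return max(values) if maximizing else min(values)
```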

  11. Alpha-Beta Pruning
  - A pure optimization of min-max: no tradeoffs or approximations, it simply avoids examining more states than necessary.
  - "Cutoff" moves allow us to cut off entire branches of the search tree (see the following example); only 3 states need to be examined in that example.
  - It turns out, in general, to be very effective.
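The same search with alpha-beta cutoffs, again assuming `children` and `score` helpers: it returns the same value as plain min-max but stops examining a branch as soon as a cutoff shows the branch cannot matter.

```python
def alphabeta(position, depth, alpha, beta, maximizing, children, score):
    """Min-max with alpha-beta pruning; call with alpha=-inf, beta=+inf."""
    succ = list(children(position))
    if depth == 0 or not succ:
        return score(position)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, score))
            alpha = max(alpha, value)
            if alpha >= beta:
                break        # cutoff: the opponent will never allow this branch
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, score))
            beta = min(beta, value)
            if alpha >= beta:
                break        # cutoff: we already have a better option elsewhere
        return value
```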

  12. Move Generation
  - The assumption of an ordered tree is optimistic. "Ordered" means having the best move on the left in any set of child nodes: the node with the lowest value for a min node, the node with the highest value for a max node.
  - If we could order nodes perfectly, we would not need alpha-beta search!
  - The good news is that in practice performance is close to this optimistic limit.

  13. Move Generator
  - The goal is to produce ordered moves; this encodes a fair bit of knowledge about a game.
  - Example ordering heuristic: value of captured piece minus value of attacker. E.g., "pawn takes queen" is the highest-ranked move in this ordering.
  - Killer heuristic: keep track of cutoff moves at each level of search and try those first when considering subsequent moves at the same level. This is based on the idea that many moves are inconsequential. E.g., if your queen is en prise, it doesn't matter whether you advance your pawn at H2 by one or two squares; the opponent will still take the queen. Therefore, if the move "bishop takes queen" caused a cutoff during the examination of move H2-H3, it might also cause one during the examination of H2-H4, and should be tried first.
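A rough sketch of such an ordering, combining the capture heuristic with a killer-move table; the move attributes, piece values, and data structures are assumptions for illustration:

```python
# Captures are ranked by (value of captured piece - value of attacker);
# moves that recently caused cutoffs at this depth are tried first.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

killers = {}   # depth -> recent moves that caused cutoffs at that depth

def order_moves(moves, depth):
    def key(move):
        # Each move is assumed to carry .captured and .attacker piece codes (or None).
        if move in killers.get(depth, []):
            return 1000                     # killer moves first
        if move.captured:
            return PIECE_VALUE[move.captured] - PIECE_VALUE[move.attacker]
        return -1000                        # quiet moves last
    return sorted(moves, key=key, reverse=True)

def record_cutoff(move, depth):
    killers.setdefault(depth, []).insert(0, move)
    del killers[depth][2:]                  # keep only the two most recent killers
```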

  14. Static Evaluation
  - This is the other place where substantial game knowledge is encoded.
  - In early programs, evaluation functions were complicated and buggy. In time it was discovered that you could get better results with a simple, reliable evaluator (e.g., a weighted count of pieces on the board) plus deeper search.
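A sketch of such a simple, reliable evaluator: a weighted count of the pieces each side has on the board (piece codes and values are illustrative assumptions):

```python
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(my_pieces, their_pieces):
    """`my_pieces` / `their_pieces` are lists of piece codes, e.g. ['Q', 'P', 'P']."""
    return (sum(PIECE_VALUE[p] for p in my_pieces)
            - sum(PIECE_VALUE[p] for p in their_pieces))
```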

  15. Static Evaluation
  - Deep Blue used static evaluation functions of medium complexity, implemented in hardware.
  - "Cheap" PC programs rely on quite complex evaluation functions because they can't search as deeply as Deep Blue.
  - In general there is a tradeoff between the complexity of the evaluation function and the depth of search.

  16. TD-Gammon
  - A neural network that is able to teach itself to play backgammon solely by playing against itself and learning from the results.
  - Based on the TD(λ) reinforcement learning algorithm; it starts from random initial weights (and hence a random initial strategy).
  - With zero knowledge built in at the start of learning (i.e., given only a "raw" description of the board state), the network learns to play at a strong intermediate level.
  - When a set of hand-crafted features is added to the network's input representation, the result is a truly staggering level of performance: the latest version of TD-Gammon is now estimated to play at a strong master level that is extremely close to the world's best human players.
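The slides do not give the update rule, but as background, here is a minimal sketch of a TD(λ) update with eligibility traces for a linear value function; TD-Gammon itself uses a neural network, so this shows only the simplest flavor of the idea, and all parameter values are illustrative assumptions:

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.1, gamma=1.0, lam=0.7):
    """One episode of TD(lambda) for a linear value estimate V(s) = w . x(s).
    `features` holds the feature vectors x(s_0) .. x(s_T); `rewards[t]` is the
    reward received on the transition into s_{t+1}. Illustrative only."""
    e = np.zeros_like(w)                          # eligibility trace
    for t in range(len(features) - 1):
        x, x_next = features[t], features[t + 1]
        v, v_next = w @ x, w @ x_next
        delta = rewards[t] + gamma * v_next - v   # TD error
        e = gamma * lam * e + x                   # decay trace, add gradient (x for linear V)
        w = w + alpha * delta * e                 # move weights toward the TD target
    return w
```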
