Pruned Search Strategies
CS344 : AI - Seminar, 20th January 2011
TL Nishant Totla, RM Pritish Kamath
M1 Garvit Juniwal, M2 Vivek Madan
Guided by Prof. Pushpak Bhattacharya
Outline
• Two player games
• Game trees
• MiniMax algorithm
• α-β pruning
• A demonstration: Chess
• Iterative Deepening A*
A Brief History
• Computer considers possible lines of play (Babbage, 1846)
• Minimax theorem (von Neumann, 1928)
• First chess program (Turing, 1951)
• Machine learning to improve evaluation accuracy (Samuel, 1952–57)
• Pruning to allow deeper search (McCarthy, 1956)
• Deep Blue wins a 6-game chess match against Kasparov (Hsu et al., 1997)
• Checkers solved (Schaeffer et al., 2007)
Two player games The game is played by two players, who take alternate turns to change the state of the game. The game has a starting state S. The game ends when a player does not have a legal move. Both players end up with a score at an end state.
Classification of 2-player games Sequential : players move one-at-a-time Zero-Sum game : sum of scores assigned to the players at any end state equals 0.
Game Tree A move changes the state of the game. This naturally induces a graph with the set of states as the vertices, and moves represented by the edges. A game tree is a graphical representation of a finite, sequential, deterministic, perfect-information game.
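As a concrete illustration of this state-graph view, here is a minimal sketch in Python (not the Scheme code used in the demonstration later); the class and field names (GameNode, state, children) are my own, chosen for illustration:

```python
# A game-tree node: the state it represents, whose turn it is, and one child per
# legal move. An empty list of children marks an end state.

class GameNode:
    def __init__(self, state, max_to_move, children=None):
        self.state = state                  # any description of the game position
        self.max_to_move = max_to_move      # True if the maximizing player moves next
        self.children = children or []      # one GameNode per legal move

    def is_terminal(self):
        return not self.children

def leaf(score):
    """An end state carrying the final score for the maximizing player."""
    return GameNode(state=score, max_to_move=True)

# A tiny hand-built tree: Max chooses at the root, Min replies, leaves carry scores.
root = GameNode("start", True, [
    GameNode("after move A", False, [leaf(3), leaf(12)]),
    GameNode("after move B", False, [leaf(5), leaf(8)]),
])
print(len(root.children), root.is_terminal())   # 2 False
```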
Strategy for 2-player games? How does one go about playing 2-player games? Choose a move. Look at all possible moves that the opponent can play. Choose a move for each of the opponent's possible moves, and so on... Consider an instance of a Tic-Tac-Toe game played between Max and Min. The following two images describe strategies for each player.
Best Strategy? The MiniMax Algorithm -- taken from Wikipedia (http://en.wikipedia.org/wiki/Minimax)
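A minimal sketch of the MiniMax recursion itself, in Python rather than the Scheme used in the demo; the nested-list tree encoding is my own shorthand (an integer is a leaf score, a list is an internal node whose elements are its subtrees):

```python
def minimax(node, maximizing):
    """Best score reachable from `node` when both players play optimally."""
    if isinstance(node, int):                 # leaf: score given by the rules / heuristic
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Max moves at the root, Min at the next level; the middle branch guarantees Max a 5.
tree = [[3, 12], [5, 8], [2, 4]]
print(minimax(tree, maximizing=True))         # -> 5
```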
But what about the heuristics? The heuristics used at the leaves of the MiniMax tree depend on the rules of the game and on our understanding of it. A heuristic is an objective way to quantify the “goodness” of a particular state. For example, in chess you can use the weighted sum of the pieces remaining on the board.
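As a toy version of that chess example (the piece letters and weights below are illustrative assumptions, not values from the seminar):

```python
# A hypothetical material-count heuristic: the weighted sum of pieces remaining on
# the board, positive for White and negative for Black.

PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def evaluate(board):
    """`board` is a list of (piece_letter, is_white) tuples for pieces still on the board."""
    score = 0
    for piece, is_white in board:
        value = PIECE_VALUES[piece.lower()]
        score += value if is_white else -value
    return score

# White has a queen and a pawn, Black has a rook: evaluation = 9 + 1 - 5 = 5.
print(evaluate([('q', True), ('p', True), ('r', False)]))
```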
Properties of the minimax algorithm
• Space complexity: O(bh), where b is the average fan-out and h is the maximum search depth.
• Time complexity: O(b^h).
• For chess, b ≈ 35 and h ≈ 100 for 'reasonable' games, giving 35^100 ≈ 10^154 nodes, roughly 10^74 times the number of particles in the observable Universe (about 10^80) ⇨ no way to examine every node! (The arithmetic is worked out below.)
• But do we really need to examine every node? Let's now see an improved idea.
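The back-of-the-envelope arithmetic behind those figures (my own rounding):

$$ b^{h} \approx 35^{100} = 10^{\,100\log_{10}35} \approx 10^{154}, \qquad \frac{10^{154}}{10^{80}} = 10^{74}. $$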
Improvements? α-β Pruning: “Stop exploring unfavourable moves if you have already found a more favourable one.”
α-β pruning (execution) - taken from Wikipedia (http://en.wikipedia.org/wiki/Alpha-beta_pruning)
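A minimal sketch of MiniMax with α-β pruning, again in Python on the same nested-list trees (my own encoding, not the seminar's Scheme code). α is the best score Max can already guarantee, β the best score Min can guarantee; once α ≥ β, the remaining children of the node cannot affect the result and are skipped:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, int):                 # leaf: score given by the rules / heuristic
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                 # β cut-off: Min will never allow this branch
                break
        return best
    else:
        best = math.inf
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:                 # α cut-off: Max already has something better
                break
        return best

print(alphabeta([[3, 12], [5, 8], [2, 4]], maximizing=True))   # 5, same as plain MiniMax
```

Pruning never changes the value computed at the root; it only skips work, which is why the result matches plain MiniMax.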
Resource Limits Even after pruning, chess has too large a state space, so search depths must be restricted. Fact: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and uses undisclosed methods for extending some lines of search up to 40 ply.
Demonstration! We shall now demonstrate a chess program that uses the MiniMax algorithm with α-β pruning. The code is written in Scheme (a functional programming language). After this, we move on to a different pruned search strategy for general graphs.
Search Strategies Two types of search algorithms: Brute force (breadth-first, depth-first, etc.) Heuristic (A*, heuristic depth-first, etc.)
Definitions
• Node branching factor (b): maximum fan-out of the nodes of the search tree.
• Depth (d): length of the shortest path from the initial state to a goal state.
• Maximum depth (m): maximum depth of the tree.
Iterative Deepening DFS -- illustrations taken from http://homepages.ius.edu/rwisman/C463/html/Chapter3.htm
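A minimal sketch of iterative deepening DFS in Python; the `successors(state)` interface and the toy binary-tree example are my own assumptions for illustration:

```python
def depth_limited_dfs(state, goal, successors, limit):
    """Depth-first search that gives up more than `limit` edges away from `state`."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        path = depth_limited_dfs(nxt, goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iddfs(start, goal, successors, max_depth=50):
    # Re-run depth-limited DFS with limits 0, 1, 2, ... so the shallowest goal is
    # found first, while keeping DFS-like space usage.
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(start, goal, successors, limit)
        if path is not None:
            return path
    return None

# Toy example: state n has successors 2n+1 and 2n+2 (an implicit binary tree).
print(iddfs(0, 6, lambda n: [2 * n + 1, 2 * n + 2] if n < 7 else []))   # [0, 2, 6]
```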
IDA* Algorithm
• IDA* works like iterative deepening, except that successive iterations use increasing values of total cost (f = g + h) rather than increasing depths.
• At each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold (see the sketch below).
• The threshold is initially set to h(start).
• The threshold used for the next iteration is the minimum of all f values that exceeded the current threshold.
• IDA* always finds a cheapest solution if the heuristic is admissible.
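A sketch of those steps in Python; `successors(state)` yielding (next_state, step_cost) pairs and the heuristic `h` are assumed interfaces, not code from the slides:

```python
import math

def ida_star(start, goal, successors, h):
    """IDA*: depth-first search with an f = g + h cutoff that grows between iterations."""

    def search(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return f, None                    # report the overshoot for the next threshold
        if state == goal:
            return f, path
        minimum = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                   # skip trivial cycles along the current path
                continue
            t, found = search(nxt, g + cost, threshold, path + [nxt])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)                      # initial threshold, as in the slide above
    while True:
        threshold, found = search(start, 0, threshold, [start])
        if found is not None:
            return found
        if threshold == math.inf:             # the whole space was exhausted: no path
            return None
```

On a tree the `nxt in path` check is redundant but harmless; note that the only memory used is the current path, which is exactly the space advantage over A* discussed below.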
Monotonicity Without loss of generality, we restrict our attention to cost functions that are monotonically non-decreasing along any path in the problem space: for any admissible cost function f, we can construct a monotone admissible function f' which is at least as informed as f.
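One standard construction (the "pathmax" adjustment, stated here as I recall it from Korf's paper, so treat the exact form as an assumption): propagate the maximum f value seen along the path from the root,

$$ f'(n) = \max\bigl(f(n),\, f'(\mathrm{parent}(n))\bigr), \qquad f'(\mathrm{root}) = f(\mathrm{root}). $$

Taking a running maximum makes f' non-decreasing along any path by construction, and it can be shown to remain admissible and at least as informed as f.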
Correctness Since the cost cutoff for each succeeding iteration is the minimum value which exceeded the previous cutoff, no paths can have a cost which lies in a gap between two successive cutoffs. IDA* examines nodes in order of increasing f-cost. Hence, IDA* finds the optimal path. Source: http://reference.kfupm.edu.sa/content/d/e/depth_first_iterative_deepening__an_opti_93341.pdf
Why IDA* over A*?
• Uses far less space than A*.
• Expands, asymptotically, the same number of nodes as A* in a tree search.
• Simpler to implement, since there are no open or closed lists to be managed.
Optimality Given an admissible monotone heuristic with constant relative error, IDA* is optimal in terms of solution cost, time, and space over the class of admissible best-first searches on a tree.
An Empirical Test Both IDA* and A* were implemented for the Fifteen Puzzle, using the Manhattan distance heuristic. A* could not solve most cases: it ran out of space. IDA* generated more nodes than A*, yet still ran faster, due to less overhead per node. Also see: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.8560
Application to Game Trees We want to maximize search depth subject to fixed time and space constraints. Since IDA* minimizes, at least asymptotically, time and space for any given search depth, it maximizes the depth of search possible for any fixed time and space restrictions as well.
References
• www.cs.umd.edu/~nau/cmsc828n/game-tree-search.pdf
• http://reference.kfupm.edu.sa/content/d/e/depth_first_iterative_deepening__an_opti_93341.pdf
• http://www.cs.nott.ac.uk/~ajp/courses/g51iai/004heuristicsearches/intro-to-iterative-deepening.ppt
• http://www.cs.nott.ac.uk/~ajp/courses/g51iai/003blindsearches/ids.ppt
• http://homepages.ius.edu/rwisman/C463/html/Chapter3.htm