Game-playing AIs, Part 2
CIS 391, Fall 2007
Games: Outline of Unit (Part II)
• The Minimax Rule
• Alpha-Beta Pruning
• Game-playing AI successes
`Don’t play hope chess’: The Minimax Rule
• Idea: make the move for player MAX which has the most benefit, assuming that MIN makes the best move for MIN in response.
• This is computed by a recursive process: the backed-up value of each node in the tree is determined by the values of its children:
  • For a MAX node, the backed-up value is the maximum of the values of its children (i.e. the best for MAX).
  • For a MIN node, the backed-up value is the minimum of the values of its children (i.e. the best for MIN).
The Minimax Procedure
1. Start with the current position as a MAX node.
2. Expand the game tree a fixed number of ply (half-moves).
3. Apply the evaluation function to the leaf positions.
4. Calculate backed-up values bottom-up.
5. Pick the move that gives the MAX value at the root.
2-ply Minimax Example
[Figure: a 2-ply game tree. The MAX root has two MIN children; the evaluation-function values at the leaves are 2, 7 (left) and 1, 8 (right). The MIN nodes back up min(2,7) = 2 and min(1,8) = 1, and the root backs up max(2,1) = 2, so the left move is the one selected by minimax.]
What if MIN does not play optimally?
• The definition of optimal play for MAX assumes that MIN also plays optimally: this maximizes the worst-case outcome for MAX.
• But if MIN does not play optimally, MAX will do at least as well, and possibly better. [Theorem: not hard to prove]
Minimax Algorithm

function MINIMAX-DECISION(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state)
  return the action in SUCCESSORS(state) with value v

function MAX-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← −∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s))
  return v

function MIN-VALUE(state) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s))
  return v
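To make the recursion concrete, here is a minimal Python sketch of the same procedure (illustrative code of my own, not the course's: a game position is represented as a nested list, and a leaf is simply a number standing in for its evaluation-function value):

def minimax_decision(tree):
    # Return (move index, value) of the best move for MAX at the root.
    values = [min_value(child) for child in tree]
    best = max(range(len(values)), key=lambda i: values[i])
    return best, values[best]

def max_value(node):
    if isinstance(node, (int, float)):   # TERMINAL-TEST: we hit a leaf
        return node                      # UTILITY: its evaluation value
    return max(min_value(child) for child in node)

def min_value(node):
    if isinstance(node, (int, float)):
        return node
    return min(max_value(child) for child in node)

# The 2-ply example tree from the earlier slide: the MIN nodes back up
# min(2,7) = 2 and min(1,8) = 1, and the root takes max(2,1) = 2.
print(minimax_decision([[2, 7], [1, 8]]))   # -> (0, 2): the left move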
Comments on Minimax Search
• Depth-first search with a fixed number of ply as the limit.
• O(b^m) time complexity – Ooops!
• O(bm) space complexity
• Performance will depend on
  • the quality of the evaluation function (expert knowledge)
  • the depth of search (computing power and search algorithm)
• Differences from normal state-space search:
  • Looking for one move only
  • No cost on arcs
  • MAX can't be sure how MIN will respond to his moves
• The minimax rule forms the basis for other game-tree search algorithms.
Alpha-Beta Pruning Slides of example from screenshots by Mikael Bodén, Halmstad University, Sweden found at http://www.emunix.emich.edu/~evett/AI/AlphaBeta_movie/sld001.htm
Alpha-Beta Pruning
• A way to improve the performance of the Minimax Procedure.
• Basic idea: “If you have an idea which is surely bad, don't take the time to see how truly awful it is” ~ Pat Winston
[Figure: the 2-ply example tree. The left MIN node evaluates to 2 (leaves 2 and 7), so the MAX root is ≥ 2. At the right MIN node, the first leaf is 1, so that node is ≤ 1 and its remaining leaf (?) is left unevaluated.]
• We don't need to compute the value at the '?' node: no matter what it is, it can't affect the value of the root node.
The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses, the "window" between them becomes smaller. When beta becomes less than or equal to alpha, the current position cannot be the result of best play by both players and hence need not be explored further.
An alpha value is an initial or temporary value associated with a MAX node. Because MAX nodes are given the maximum value among their children, an alpha value can never decrease; it can only go up. A beta value is an initial or temporary value associated with a MIN node. Because MIN nodes are given the minimum value among their children, a beta value can never increase; it can only go down.
http://www.maths.nottingham.ac.uk/personal/anw/G13GAM/alphabet.html
Alpha-Beta Pruning
• Traverse the search tree in depth-first order.
• For each MAX node n, α(n) = the maximum child value found so far.
  • Starts at −∞.
  • Increases if a child returns a value greater than the current α(n).
  • A lower bound on the node's final value.
• For each MIN node n, β(n) = the minimum child value found so far.
  • Starts at +∞.
  • Decreases if a child returns a value less than the current β(n).
  • An upper bound on the node's final value.
• MAX cutoff rule: at a MAX node n, cut off search if α(n) ≥ β(n).
• MIN cutoff rule: at a MIN node n, cut off search if β(n) ≤ α(n).
• Carry the α and β values down during the search.
Alpha-Beta Algorithm I

function ALPHA-BETA-SEARCH(state) returns an action
  inputs: state, current state in game
  v ← MAX-VALUE(state, −∞, +∞)
  return the action in SUCCESSORS(state) with value v

function MAX-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← −∞
  for a, s in SUCCESSORS(state) do
    v ← MAX(v, MIN-VALUE(s, α, β))
    if v ≥ β then return v
    α ← MAX(α, v)
  return v
Alpha-Beta Algorithm II

function MIN-VALUE(state, α, β) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v ← +∞
  for a, s in SUCCESSORS(state) do
    v ← MIN(v, MAX-VALUE(s, α, β))
    if v ≤ α then return v
    β ← MIN(β, v)
  return v
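The same nested-list sketch as before, extended with the (α, β) window (again my own illustrative code, not the course's). Note how the two cutoff tests mirror the MAX and MIN cutoff rules above:

import math

def alphabeta_search(tree):
    # Root call: returns (move index, value), like ALPHA-BETA-SEARCH.
    alpha, beta = -math.inf, math.inf
    best, v = None, -math.inf
    for i, child in enumerate(tree):
        cv = min_value(child, alpha, beta)
        if cv > v:
            best, v = i, cv
        alpha = max(alpha, v)            # the root acts as a MAX node
    return best, v

def max_value(node, alpha, beta):
    if isinstance(node, (int, float)):   # leaf: return its evaluation value
        return node
    v = -math.inf
    for child in node:
        v = max(v, min_value(child, alpha, beta))
        if v >= beta:                    # MAX cutoff rule
            return v
        alpha = max(alpha, v)
    return v

def min_value(node, alpha, beta):
    if isinstance(node, (int, float)):
        return node
    v = math.inf
    for child in node:
        v = min(v, max_value(child, alpha, beta))
        if v <= alpha:                   # MIN cutoff rule
            return v
        beta = min(beta, v)
    return v

# On the 2-ply example tree, leaf 8 is never examined: after seeing leaf 1,
# the right MIN node is <= 1 < alpha = 2, so the search is cut off there.
print(alphabeta_search([[2, 7], [1, 8]]))   # -> (0, 2), same answer as minimax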
Effectiveness of Alpha-Beta Pruning
• Guaranteed to compute the same root value as Minimax.
• Worst case: no pruning; same as Minimax, O(b^d).
• Best case: when each player's best move is the first option examined, you examine only O(b^(d/2)) nodes, allowing you to search twice as deep! Equivalently, the effective branching factor drops from b to √b.
• For Deep Blue, alpha-beta pruning reduced the average branching factor from 35-40 to 6 (consistent with √35 ≈ 5.9).
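One way to see the b^(d/2) effect empirically is a small counting experiment (a throwaway sketch of my own; the tree shape, seed, and numbers are illustrative, not from the slides). It builds a random uniform game tree and counts how many leaves alpha-beta actually evaluates, against the b^d leaves plain minimax would visit:

import math, random

def random_tree(b, d):
    # A uniform tree with branching factor b and depth d; random leaf values.
    if d == 0:
        return random.random()
    return [random_tree(b, d - 1) for _ in range(b)]

def alphabeta_count(node, alpha, beta, maximizing, counter):
    # Alpha-beta over nested lists, counting leaf evaluations in counter[0].
    if isinstance(node, float):
        counter[0] += 1
        return node
    v = -math.inf if maximizing else math.inf
    for child in node:
        cv = alphabeta_count(child, alpha, beta, not maximizing, counter)
        if maximizing:
            v = max(v, cv)
            if v >= beta:                # MAX cutoff
                return v
            alpha = max(alpha, v)
        else:
            v = min(v, cv)
            if v <= alpha:               # MIN cutoff
                return v
            beta = min(beta, v)
    return v

random.seed(0)
b, d = 5, 6
counter = [0]
alphabeta_count(random_tree(b, d), -math.inf, math.inf, True, counter)
print(f"plain minimax would evaluate b**d = {b**d} leaves")
print(f"alpha-beta evaluated {counter[0]}; perfect ordering approaches {b**(d//2)}")

Even with random move ordering the pruning is substantial; the O(b^(d/2)) best case requires the move ordering described above, where each player's best move is searched first.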
Chinook and Deep Blue
• Chinook
  • The World Man-Made Checkers Champion, developed at the University of Alberta.
  • Competed in human tournaments, earning the right to play for the human world championship, and defeated the best players in the world.
  • Play Chinook at http://www.cs.ualberta.ca/~chinook
• Deep Blue
  • Defeated world champion Garry Kasparov 3.5-2.5 in 1997, after losing 4-2 in 1996.
  • Uses a parallel array of 256 special chess-specific processors.
  • Evaluates 200 billion moves every 3 minutes; 12-ply search depth.
  • Expert knowledge from an international grandmaster.
  • An 8000-factor evaluation function tuned from hundreds of thousands of grandmaster games.
  • Tends to play for tiny positional advantages.