Artificial Intelligence: Games
Games and Computers
• Games offer concrete or abstract competitions
  • "I'm better than you!"
• Some games are amenable to computer treatment
  • mostly mental activities
  • well-formulated rules and operators
  • accessible state
• Others are not
  • emphasis on physical activities
  • rules and operators open to interpretation
  • state not (easily or fully) accessible
Defining Computer Games
• Video game / computer game
  • "a mental contest, played with a computer according to certain rules for amusement, recreation, or winning a stake" [Zyda, 2005]
• Serious game
  • a mental contest, played with a computer in accordance with specific rules, that uses entertainment to further training or education [modified from Zyda, 2005]

[Zyda, 2005] Michael Zyda, "From Visual Simulation to Virtual Reality to Games," IEEE Computer, vol. 38, no. 9, pp. 25-32, September 2005.
What Are Games, and Why Study Them?
• Games are a form of multi-agent environment
  • What do other agents do, and how do they affect our success?
  • Multi-agent environments can be cooperative or competitive
  • Competitive multi-agent environments give rise to adversarial search, a.k.a. games
• Why study games?
  • They are fun
  • They are an interesting subject of study because they are hard
Relation of Games to Search
• Search: no adversary
  • solution is a (heuristic) method for finding a goal
  • heuristics and CSP techniques can find optimal solutions
  • evaluation function: estimate of the cost from start to goal through a given node
  • examples: path planning, scheduling activities
• Games: adversary
  • solution is a strategy (a strategy specifies a move for every possible opponent reply)
  • time limits force an approximate solution
  • evaluation function: evaluates the "goodness" of a game position
  • examples: chess, checkers, Othello, backgammon
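To illustrate the second kind of evaluation function, here is a minimal Python sketch of a material-count evaluation for a chess-like game. The piece values, board representation, and function name are assumptions made for illustration, not something prescribed by the slides.

```python
# Illustrative sketch (not from the slides): a simple material-count
# evaluation of a game position. The board is assumed to be a list of
# (piece, owner) pairs, with owner +1 for MAX and -1 for MIN.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """Estimate the 'goodness' of a position from MAX's point of view."""
    return sum(owner * PIECE_VALUES.get(piece, 0) for piece, owner in board)

# Example: MAX has a rook and a pawn, MIN has a knight -> 5 + 1 - 3 = 3
print(evaluate([("R", +1), ("P", +1), ("N", -1)]))  # 3
```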
Aspects of Computer Games
• Story
  • defines the context and content of the game
• Art
  • presentation of the game to the user
  • emphasis on visual display and sound
• Software
  • implementation of the game on a computer
• Purpose
  • entertainment
  • training
  • education
Game Analysis
• Often deterministic
  • the outcome of actions is known
  • sometimes an element of chance is part of the game, e.g. dice
• Two-player, turn-taking
  • one move for each player in turn
• Zero-sum utility function
  • what one player wins, the other must lose
• Often perfect information
  • fully observable: everything about the state of the environment (game) is known to both players
  • not true for all games, e.g. card games with "private" or "hidden" cards
Games as Adversarial Search
• Many games can be formulated as search problems
  • the zero-sum utility function leads to an adversarial situation: in order for one agent to win, the other necessarily has to lose
• Factors complicating the search task
  • potentially huge search spaces
  • elements of chance
  • multi-person games, teams
  • time limits
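The formulation as a search problem can be made concrete with a small interface. This is a hypothetical sketch (the class and method names are assumptions) that lists the elements the later minimax slides rely on: initial state, player to move, legal moves, successor states, terminal test, and utility.

```python
# Hypothetical interface for a two-player, zero-sum game cast as an
# adversarial search problem. Method names are illustrative only.

class Game:
    def initial_state(self):
        """Return the starting state of the game."""
        raise NotImplementedError

    def player(self, state):
        """Return which player (MAX or MIN) moves in this state."""
        raise NotImplementedError

    def actions(self, state):
        """Return the legal moves available in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state that results from applying a move."""
        raise NotImplementedError

    def is_terminal(self, state):
        """Return True if the game is over in this state."""
        raise NotImplementedError

    def utility(self, state):
        """Return the zero-sum payoff of a terminal state, from MAX's view."""
        raise NotImplementedError
```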
Single-Person Game
• Conventional search problem
  • identify a sequence of moves that leads to a winning state
  • examples: solitaire and Rubik's Cube
  • has received little attention in AI
• Some games can still be quite challenging
  • e.g. some versions of solitaire
Contingency Problem
• Uncertainty due to the moves and motivations of the opponent
  • the opponent tries to make the game as difficult as possible for the player
  • it attempts to maximize its own utility function value, and thus to minimize the player's
Two-Person Game
• Games with two opposing players
  • often called MAX and MIN
  • usually MAX moves first, then they take turns
• MAX wants a strategy that leads to a winning state, no matter what MIN does
• MIN wants the same for itself, or at least tries to prevent MAX from winning
• Perfect decisions
  • traverse all relevant parts of the search tree
  • often impractical because of time and space limitations
Example: A Trivial Game
• Deal four playing cards out, face up
• Player 1 chooses one, player 2 chooses one
• Player 1 chooses another, player 2 chooses another (the last card)
• And the winner is…
  • add the cards up
  • the player with the highest even total
  • scores that amount
Moving a Score When There Is a Choice
• use the minimax assumption
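A brute-force Python sketch of the minimax assumption applied to the four-card game above. The scoring rule is an assumption based on the slide text (a player's total counts only if it is even), and the value of the game is taken as player 1's counted total minus player 2's; the function names and the sample deal are illustrative.

```python
# Brute-force minimax for the four-card game. Assumed rule: a hand
# scores its total only if that total is even; the game value is
# player 1's score minus player 2's score.

def score(total):
    """A hand counts its total only if the total is even (assumption)."""
    return total if total % 2 == 0 else 0

def best_value(cards, p1, p2, turn):
    """Game value (player 1's score minus player 2's) under optimal play.
    `turn` is 1 or 2; players alternate picks until no cards remain."""
    if not cards:
        return score(sum(p1)) - score(sum(p2))
    values = []
    for i, c in enumerate(cards):
        rest = cards[:i] + cards[i + 1:]
        if turn == 1:
            values.append(best_value(rest, p1 + [c], p2, 2))
        else:
            values.append(best_value(rest, p1, p2 + [c], 1))
    return max(values) if turn == 1 else min(values)

print(best_value([3, 5, 7, 8], [], [], 1))  # value of one sample deal
```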
MiniMax Strategy
• Optimal strategy for MAX
  • generate the whole game tree
  • calculate the value of each terminal state
  • propagate values from the leaf nodes up to the root
  • MAX selects the node with the highest value
• MAX assumes that MIN, on its move, will select the node that minimizes the value
  • assumes that both players play optimally
MiniMax Properties
• Based on depth-first search
  • recursive implementation
• Time complexity is O(b^m)
  • exponential in the number of moves
• Space complexity is O(bm)
  • where b is the branching factor and m the maximum depth of the search tree
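A minimal Python sketch of the recursive, depth-first minimax just described, written against the hypothetical Game interface sketched earlier (its method names are assumptions, not a fixed API).

```python
# Recursive, depth-first minimax over the hypothetical Game interface.

def minimax_value(game, state, maximizing):
    """Compute the minimax value of `state` (True = MAX to move)."""
    if game.is_terminal(state):
        return game.utility(state)
    values = (minimax_value(game, game.result(state, a), not maximizing)
              for a in game.actions(state))
    return max(values) if maximizing else min(values)

def minimax_decision(game, state):
    """Return MAX's move that maximizes the worst-case outcome."""
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a), False))
```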
MiniMax Example
• terminal nodes: values calculated from the utility function
• other nodes: values calculated via the minimax algorithm, propagated from the leaf nodes upward through alternating Min and Max levels
• moves by Max and countermoves by Min
[The slide sequence shows the example game tree being filled in level by level, from the terminal values up to the root, whose minimax value is 5; the tree diagrams themselves are not reproducible here.]
Imperfect Decisions
• Complete search is impractical for most games
• Alternative: search the tree only to a certain depth
  • requires a cutoff test to determine where to stop
  • uses a heuristic evaluation function to estimate the expected utility of the game from the leaf nodes
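A sketch of how the cutoff test and heuristic evaluation function plug into depth-limited minimax, again using the hypothetical Game interface; the depth limit and the evaluate function are placeholders for game-specific choices.

```python
# Depth-limited minimax: when the cutoff test fires, the heuristic
# evaluation function stands in for the true utility.

def h_minimax(game, state, depth, maximizing, depth_limit, evaluate):
    if game.is_terminal(state):
        return game.utility(state)
    if depth >= depth_limit:          # cutoff test
        return evaluate(state)        # heuristic estimate of the utility
    values = (h_minimax(game, game.result(state, a), depth + 1,
                        not maximizing, depth_limit, evaluate)
              for a in game.actions(state))
    return max(values) if maximizing else min(values)
```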
Pruning
• Discards parts of the search tree that are guaranteed not to contain good moves
  • there is a guarantee that the solution is not in that branch or sub-tree
  • if both players make optimal decisions, they will never end up in that part of the search tree
• Results in substantial time and space savings
  • as a consequence, longer sequences of moves can be explored
Alpha-Beta Pruning
• Certain moves are not considered
  • they cannot result in a better evaluation value
• Applies to moves by both players
  • α indicates the best choice for Max so far; it never decreases
  • β indicates the best choice for Min so far; it never increases
• Extension of the minimax approach
  • results in the same move as minimax, but with less overhead
  • prunes uninteresting parts of the search tree
Alpha-Beta Example 1
• we assume a depth-first, left-to-right search as the basic strategy
• the range of possible values for each node is indicated, initially [-∞, +∞]
• α best choice for Max: ?
• β best choice for Min: ?
Alpha-Beta Example 2
• Min obtains the first value (7) from a successor node; its range becomes [-∞, 7]
• α best choice for Max: ?
• β best choice for Min: 7
Alpha-Beta Example 3
• Min obtains the second value (6) from a successor node; its range becomes [-∞, 6]
• α best choice for Max: ?
• β best choice for Min: 6
Alpha-Beta Example 4
• Min obtains the third value (5) from a successor node
• this is the last value from this sub-tree, so the exact value 5 is known
• Max now has a value for its first successor node, but hopes that something better might still come; its range becomes [5, +∞]
• α best choice for Max: 5
• β best choice for Min: 5
Alpha-Beta Example 5
• Min continues with the next sub-tree and gets a better value (3); its range becomes [-∞, 3]
• Max, however, already has a better choice from its perspective (5), and will not consider a move into the sub-tree currently explored by Min
• α best choice for Max: 5
• β best choice for Min: 3
Alpha-Beta Example 6
• Min knows that Max won't consider a move into this sub-tree, and abandons it
• this is a case of pruning, indicated in the diagram
• α best choice for Max: 5
• β best choice for Min: 3
Alpha-Beta Example 7
• Min explores the next sub-tree and finds a value (6) that is worse than the other nodes at this level; the node's range becomes [-∞, 6]
• if Min cannot find something lower, Max will choose this branch, so Min must explore more successor nodes
• α best choice for Max: 5
• β best choice for Min: 3
Alpha-Beta Example 8
• Min is lucky and finds a value (5) that is the same as the current worst value at this level; the node's range becomes [-∞, 5]
• Max can choose this branch, or the other branch with the same value
• α best choice for Max: 5
• β best choice for Min: 3
Alpha-Beta Example 9
• Min could continue searching this sub-tree to see if there is a value less than the current worst alternative, in order to give Max as few choices as possible; this depends on the specific implementation
• Max now knows the best value for its sub-tree: 5
• α best choice for Max: 5
• β best choice for Min: 3
Alpha-Beta Example Overview
• some branches can be pruned because they would never be considered
• after looking at one branch, Max already knows that the pruned branches will not be of interest, since Min would choose a value there that is less than what Max already has at its disposal
• final node values: Max root 5; its Min successors 5, ≤3, ≤5
• α best choice for Max: 5
• β best choice for Min: 7 → 6 → 5 → 3
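Finally, a Python sketch of alpha-beta pruning as walked through above, again against the hypothetical Game interface; it returns the same value as plain minimax but skips branches that cannot influence the final decision.

```python
# Alpha-beta pruning: alpha is the best choice found so far for MAX,
# beta the best choice found so far for MIN along the current path.

import math

def alphabeta(game, state, alpha, beta, maximizing):
    if game.is_terminal(state):
        return game.utility(state)
    if maximizing:
        value = -math.inf
        for a in game.actions(state):
            value = max(value, alphabeta(game, game.result(state, a),
                                         alpha, beta, False))
            alpha = max(alpha, value)   # best choice for Max so far
            if alpha >= beta:
                break                   # Min will never allow this branch
        return value
    else:
        value = math.inf
        for a in game.actions(state):
            value = min(value, alphabeta(game, game.result(state, a),
                                         alpha, beta, True))
            beta = min(beta, value)     # best choice for Min so far
            if beta <= alpha:
                break                   # Max will never allow this branch
        return value

# Initial call: alphabeta(game, game.initial_state(), -math.inf, math.inf, True)
```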