
Adversarial Search


Presentation Transcript


  1. Adversarial Search (Games) Chapter 6

  2. Outline • Summary of last lectures • Characterizing a Game • Optimal decisions • Why is full exploration of the search space not feasible? • The minimax algorithm • α-β pruning • Imperfect, real-time decisions • Extensions: multi-player, chance

  3. Summary of the past lectures • Systems engineering process • Analysis, design, implementation • Agent performance measures (non-functional requirements) • Agent types • Simple-reflex, model-based, goal-based, utility-based, learning agents • Environment types • Static/dynamic, deterministic/stochastic, fully/partially observable

  4. Summary of the past lectures • Search (goal-based agents) • Basic search algorithms and their variants • Uninformed search strategies • Limited information about the environment model • Iterative deepening search, bidirectional search, avoiding repeated states • Informed search • Improve time and space complexity by using additional information about the environment • Heuristic functions • Greedy best-first search • A* search, triangle inequality

  5. Games vs. search problems • "Unpredictable" opponent → must specify a move for every possible opponent reply • Time limits → unlikely to find the goal, must approximate

  6. 2-player zero-sum discrete finite deterministic games of perfect information What does this mean? • Two-player: exactly two players take part. • Zero-sum: In any outcome of any game, player A’s gains equal player B’s losses. • Discrete: All game states and decisions are discrete values. • Finite: Only a finite number of states and decisions. • Deterministic: No chance (no die rolls). • Perfect information: Both players can see the state, and each decision is made sequentially (no simultaneous moves). • Games: See next slide

  7. 2-player zero-sum discrete finite deterministic games of perfect information

  8. 2-player zero-sum discrete finite deterministic games of perfect information [Figure: counterexample games, each violating one of the properties: hidden information, stochastic, not finite, one player, involves animal behaviour, multiplayer]

  9. 2-player zero-sum discrete finite deterministic games of perfect information A two-player zero-sum discrete finite deterministic game of perfect information is a quintuplet (S, I, N, T, V), where:
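(The slide's definitions of the five components are cut off in this transcript. A speculative sketch, assuming the conventional reading S = set of states, I = initial state, N = successor function, T = terminal test, V = terminal values; all of these meanings are assumptions, not from the slides:)

```python
from dataclasses import dataclass
from typing import Any, Callable, FrozenSet, List

State = Any

@dataclass
class Game:
    """Hypothetical reading of the (S, I, N, T, V) quintuplet; the slide's
    own definitions did not survive, so every meaning below is assumed."""
    states: FrozenSet[State]                     # S: finite set of game states (assumed)
    initial: State                               # I: the initial state (assumed)
    next_states: Callable[[State], List[State]]  # N: successor / next-move function (assumed)
    terminal: Callable[[State], bool]            # T: terminal test (assumed)
    value: Callable[[State], float]              # V: value of a terminal state for MAX (assumed)
```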

  10. Game tree (2-player, deterministic, turns)

  11. Minimax Algorithm • Optimal play for deterministic games • Idea: choose move to position with highest minimax value = best achievable payoff against best play • E.g., a simple 2-ply game:

  12. Utility of a situation in a game: • In most two-player games the terminal positions have a certain value, typically +1 for MAX (= win), −1 for MIN (= loss), and 0 for a draw. • Different values are also possible: e.g., Backgammon (−192 to +192), etc. • In any situation we can compute the minimax value as follows:
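(The definition itself did not survive the transcript; the standard minimax-value recursion, consistent with the slide's setup, is:)

$$
\mathrm{MINIMAX}(s)=\begin{cases}
\mathrm{UTILITY}(s) & \text{if } s \text{ is terminal}\\[2pt]
\max_{a\in \mathrm{Actions}(s)} \mathrm{MINIMAX}(\mathrm{Result}(s,a)) & \text{if MAX moves in } s\\[2pt]
\min_{a\in \mathrm{Actions}(s)} \mathrm{MINIMAX}(\mathrm{Result}(s,a)) & \text{if MIN moves in } s
\end{cases}
$$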

  13. Minimax Algorithm
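(The pseudocode on this slide is not in the transcript; a minimal Python sketch of the algorithm. The `game` object with `is_terminal`, `utility`, `actions`, `result`, and `to_move` methods is an assumed interface, not from the slides:)

```python
def minimax_value(game, state):
    """Return the minimax value of `state` (utility from MAX's point of view)."""
    if game.is_terminal(state):
        return game.utility(state)              # e.g. +1 win, -1 loss, 0 draw
    values = [minimax_value(game, game.result(state, a))
              for a in game.actions(state)]
    # MAX chooses the highest achievable value, MIN the lowest.
    return max(values) if game.to_move(state) == 'MAX' else min(values)

def minimax_decision(game, state):
    """Pick the action for MAX that leads to the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a)))
```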

  14. Properties of minimax • Complete? Yes (if the tree is finite) • Optimal? Yes (against an optimal opponent) • Time complexity? O(b^m) • Space complexity? O(b·m) (depth-first exploration) • Problem: explores the whole search space. For chess, b ≈ 35 and m ≈ 100 for "reasonable" games → b^m ≈ 35^100 ≈ 10^154, so an exact solution is completely infeasible • So, how to proceed? b … branching factor, m … maximum number of moves

  15. Motivation for α-β pruning • The problem with minimax search is that the number of game states it has to examine is exponential in the number of moves. • α-β pruning computes the correct minimax decision without looking at every node in the game tree → PRUNING!

  16. α-β pruning example

  17. α-β pruning example

  18. · 5 · 2 5 2 Pruning possible! α-β pruning example No pruning We see: possibility to prune depends on the order of processing the successors!

  19. Properties of α-β • Pruning does not affect the final result • Good move ordering improves the effectiveness of pruning • With "perfect ordering," time complexity = O(b^(m/2)) → doubles the depth of search doable in the same time • A simple example of the value of reasoning about which computations are relevant (a form of meta-reasoning)

  20. Why is it called α-β? • α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX • If v is worse than α, MAX will avoid it → prune that branch • Define β similarly for MIN

  21. The α-β algorithm

  22. The α-β algorithm
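(The pseudocode on these two slides did not survive the transcript; a minimal Python sketch, under the same assumed `game` interface as the minimax sketch above:)

```python
def alphabeta_value(game, state, alpha=float('-inf'), beta=float('inf')):
    """Minimax value of `state`, pruning branches that cannot affect the result.

    alpha: best value found so far for MAX along the path to `state`.
    beta:  best value found so far for MIN along the path to `state`.
    """
    if game.is_terminal(state):
        return game.utility(state)
    if game.to_move(state) == 'MAX':
        v = float('-inf')
        for a in game.actions(state):
            v = max(v, alphabeta_value(game, game.result(state, a), alpha, beta))
            if v >= beta:           # MIN already has a better option higher up
                return v            # β cut-off: prune the remaining successors
            alpha = max(alpha, v)
        return v
    else:
        v = float('inf')
        for a in game.actions(state):
            v = min(v, alphabeta_value(game, game.result(state, a), alpha, beta))
            if v <= alpha:          # MAX already has a better option higher up
                return v            # α cut-off: prune the remaining successors
            beta = min(beta, v)
        return v
```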

  23. Resource limits Suppose we have 100 secs and explore 10^4 nodes/sec → 10^6 nodes per move → even with pruning it is not possible to explore the whole search space, e.g., for chess! Standard approach: • cutoff test: e.g., depth limit (perhaps add quiescence search) • evaluation function = estimated desirability of a position

  24. Evaluation functions • For chess, typically a linear weighted sum of features Eval(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s) • e.g., material value of the pieces on the board: w1 = 9 with f1(s) = (number of white queens) – (number of black queens), etc. … Other features which could be taken into account: number of threats, good pawn structure, a measure of the safety of the king.
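(As a sketch, the weighted sum is just a dot product of weights and feature values; the material-only weights and features below are illustrative assumptions, not from the slides:)

```python
# Illustrative material weights only; real engines use many more features.
WEIGHTS = [9, 5, 3, 3, 1]   # queen, rook, bishop, knight, pawn

def eval_material(white_counts, black_counts):
    """Eval(s) = sum_i w_i * f_i(s), with f_i(s) = (white count) - (black count)."""
    return sum(w * (cw - cb)
               for w, cw, cb in zip(WEIGHTS, white_counts, black_counts))

# Example: white is up one knight and one pawn -> Eval = 3 + 1 = 4.
print(eval_material([1, 2, 2, 2, 8], [1, 2, 2, 1, 7]))
```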

  25. Cutting off search MinimaxCutoff is identical to MinimaxValue except • Terminal? is replaced by Cutoff? • Utility is replaced by Eval Does it work in practice? With b^m = 10^6 and b = 35 → m ≈ 4, i.e., 4-ply lookahead, which makes a hopeless chess player! • 4-ply ≈ human novice • 8-ply ≈ typical PC, human master • 12-ply ≈ Deep Blue, Kasparov
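(A sketch of exactly that substitution, under the same assumed `game` interface as before; the depth-limited wrapper and `eval_fn` parameter are assumptions for illustration:)

```python
def minimax_cutoff(game, state, depth, eval_fn):
    """Like minimax_value, but cuts off at `depth` and estimates with eval_fn."""
    if game.is_terminal(state):     # a true end of game still uses Utility
        return game.utility(state)
    if depth == 0:                  # Cutoff? replaces Terminal?
        return eval_fn(state)       # Eval replaces Utility
    values = [minimax_cutoff(game, game.result(state, a), depth - 1, eval_fn)
              for a in game.actions(state)]
    return max(values) if game.to_move(state) == 'MAX' else min(values)
```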

  26. Deterministic games in practice • Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions. • Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply. • Othello: human champions refuse to compete against computers, who are too good. • Go: human champions refuse to compete against computers, who are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.

  27. Some extensions • What if more than two players are in the game? 2-player algorithms (minimax, α-β, cutoff-eval) can be extended to multi-player in a straightforward way: • Instead of one value, use a vector of values, where each player tries to maximize its own index-value in the vector (a minimal sketch follows after this slide) • 2-player zero-sum games are a special case of this, where the vector can be combined into one value since the values for both players are exactly opposite • What if an element of chance (i.e. non-determinism) is added? E.g. rolling dice in Backgammon? Expectiminimax → next slide
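(A minimal sketch of the vector-valued idea, often called max^n; the `game` interface is assumed as before, with `to_move` now returning a player index and `utility_vector` an assumed method returning one payoff per player:)

```python
def maxn_value(game, state):
    """Return a tuple of payoffs, one entry per player.

    At each node, the player to move picks the successor that maximizes
    their own component of the value vector.
    """
    if game.is_terminal(state):
        return game.utility_vector(state)   # assumed: tuple of payoffs
    p = game.to_move(state)                 # index of the player to move
    return max((maxn_value(game, game.result(state, a))
                for a in game.actions(state)),
               key=lambda vec: vec[p])
```

In the 2-player zero-sum special case the vector is always (v, −v), so maximizing one's own entry collapses back into ordinary minimax.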

  28. Minimax with Chance Nodes: Chance nodes have associated probabilities.

  29. EXPECTIMINIMAX… • Slight variation of MINIMAX, where P(s) is the probability of reaching s (e.g. the probability of rolling a certain number with the dice):
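(The formula itself is missing from the transcript; the standard EXPECTIMINIMAX recursion adds a chance-node case that takes an expectation over the random outcomes r:)

$$
\mathrm{EXPECTIMINIMAX}(s)=\begin{cases}
\mathrm{UTILITY}(s) & \text{if } s \text{ is terminal}\\[2pt]
\max_{a} \mathrm{EXPECTIMINIMAX}(\mathrm{Result}(s,a)) & \text{if MAX moves in } s\\[2pt]
\min_{a} \mathrm{EXPECTIMINIMAX}(\mathrm{Result}(s,a)) & \text{if MIN moves in } s\\[2pt]
\sum_{r} P(r)\,\mathrm{EXPECTIMINIMAX}(\mathrm{Result}(s,r)) & \text{if } s \text{ is a chance node}
\end{cases}
$$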

  30. Summary: • Games are fun to work on! • They illustrate several important points about AI • perfection is unattainable → must approximate • good idea to think about what to think about: ideas and expertise of masters deployed in evaluation functions (i.e. heuristics) What makes game theory interesting in practice? • Exogenous events, i.e. non-determinism in planning, can be modelled as an opponent. • Multi-agent planning: cooperative vs. competitive → can be modelled as multi-player games
