
Introduction to Game Playing


  1. Introduction to Game Playing Lecture 11 By Zahid Anwar

  2. Strategies and Simplification Techniques for Resolution • If the choice of clauses to resolve together at each step is made in certain systematic ways, then the resolution procedure will find a contradiction if one exists; however, it may take a very long time. • There exist strategies for making the choice that can speed up the process considerably.

  3. Strategies • Only resolve pairs of clauses that contain complementary literals, since only such resolutions produce new clauses that are harder to satisfy than their parents. • Eliminate certain clauses as soon as they are generated so that they cannot participate in later resolutions. Two kinds of clauses should be eliminated: tautologies (which can never be unsatisfied) and clauses that are subsumed by other clauses (i.e., that are easier to satisfy), e.g., P V Q is subsumed by P.
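
A minimal propositional sketch of the two elimination tests described above; the representation of a clause as a frozenset of literals and the function names are our own assumptions, not part of the lecture:

    def negate(literal):
        # Return the complementary literal: P <-> ~P.
        return literal[1:] if literal.startswith("~") else "~" + literal

    def is_tautology(clause):
        # A clause containing both a literal and its negation is always true,
        # so it can be discarded as soon as it is generated.
        return any(negate(lit) in clause for lit in clause)

    def subsumes(c1, c2):
        # c1 subsumes c2 when every literal of c1 appears in c2; the weaker
        # clause c2 (easier to satisfy) can then be dropped,
        # e.g. {P} subsumes {P, Q}.
        return c1 <= c2

    print(is_tautology(frozenset({"P", "~P"})))               # True
    print(subsumes(frozenset({"P"}), frozenset({"P", "Q"})))  # True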

  4. Strategies • Whenever possible, resolve either with one of the clauses that is part of the statement we are trying to refute or with a clause generated by a resolution with such a clause. This is called the set-of-support strategy and corresponds to the intuition that the contradiction we are looking for must involve the statement we are trying to prove. • Whenever possible, resolve with clauses that have a single literal. Such resolutions generate new clauses with fewer literals than the larger of their parent clauses and are thus probably closer to the goal of a resolvent with zero terms. This method is called the unit-preference strategy.
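
A small sketch of how these two preferences could be combined when picking the next pair of clauses to resolve; the function name and the use of frozensets of literals are illustrative assumptions:

    def candidate_pairs(clauses, set_of_support):
        # Keep only pairs in which at least one parent belongs to the set of
        # support, then try pairs containing a unit (single-literal) clause first.
        clauses = list(clauses)
        pairs = [
            (clauses[i], clauses[j])
            for i in range(len(clauses))
            for j in range(i + 1, len(clauses))
            if clauses[i] in set_of_support or clauses[j] in set_of_support
        ]
        pairs.sort(key=lambda pair: min(len(pair[0]), len(pair[1])))
        return pairs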

  5. Resolution Algorithm 1. Convert all the statements of F to clause form. 2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in step 1. 3. Repeat until either a contradiction is found, no progress can be made, or a predetermined amount of effort has been expended:

  6. A) Select two clauses; call these the parent clauses. • B) Resolve them together. The resolvent will be the disjunction of all the literals of both parent clauses, with appropriate substitutions performed and with the following exception: if there is a pair of literals T1 and ~T2 such that one of the parent clauses contains T1 and the other contains ~T2, and if T1 and T2 are unifiable, then neither T1 nor ~T2 should appear in the resolvent. We call T1 and ~T2 complementary literals. • If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.
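
A minimal propositional sketch of a single resolution step as described in B); unification and substitutions, needed in the first-order case, are omitted here:

    def negate(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    def resolve(c1, c2):
        # For each complementary pair T1 / ~T1 shared by the parents, build the
        # disjunction of all remaining literals of both clauses.
        resolvents = []
        for lit in c1:
            if negate(lit) in c2:
                resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
        return resolvents

    # Resolving ~P V Q with P yields Q; resolving Q with ~Q yields the empty
    # clause, i.e. a contradiction.
    print(resolve(frozenset({"~P", "Q"}), frozenset({"P"})))  # [frozenset({'Q'})]
    print(resolve(frozenset({"Q"}), frozenset({"~Q"})))       # [frozenset()]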

  7. Yet Another Example • All people that are not poor and are smart are happy. Those people that read are not stupid. John can read and is wealthy. Happy people have exciting lives. • Can you find anyone with an exciting life?

  8. The predicates • ∀x: (~poor(x) AND smart(x)) → happy(x) • ∀y: read(y) → smart(y) • read(john) AND wealthy(john) • ∀z: happy(z) → exciting(z) • Negated goal: ~∃w: exciting(w)

  9. Clause Form • 1. poor(x) V ~smart(x) V happy(x) • 2. ~read(y) V smart(y) • 3. read(john) • 4. ~poor(john) • 5. ~happy(z) V exciting(z) • 6. ~exciting(w) • Answer: John has an exciting life.
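
One possible refutation from these clauses, sketched step by step; the order of resolutions and the substitutions shown are our reconstruction, not given on the slides:

    7.  ~happy(w)              from 6 and 5, unifying z with w
    8.  poor(w) V ~smart(w)    from 7 and 1, unifying x with w
    9.  ~smart(john)           from 8 and 4, binding w to john
    10. ~read(john)            from 9 and 2, binding y to john
    11. (empty clause)         from 10 and 3: contradiction; the binding
                               w = john answers the question, so John has
                               an exciting life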

  10. Game Playing • Games hold an inexplicable fascination for many people, and the notion that computers might play games has existed at least as long as computers have. • There were two reasons why games appeared to be a good domain in which to explore machine intelligence: • They provide a structured task in which it is very easy to measure success or failure. • They did not obviously require large amounts of knowledge; they were thought to be solvable by straightforward search from the starting state to a winning position.

  11. Game Playing • The first of these reasons remains valid and accounts for the continued interest in the area of game playing by machine. Unfortunately, the second is not true for any but the simplest games. • In chess, for example, the average branching factor is around 35, and in an average game each player might make 50 moves. • So in order to examine the complete game tree, we would have to examine 35^100 (roughly 10^154) positions.

  12. Approach to Game Playing • Thus it is clear that a program that simply does a straightforward search of the game tree will not be able to select even its first move during the lifetime of its opponent. Some kind of heuristic search procedure is necessary. • One way of looking at all the search procedures we have discussed is that they are essentially generate-and-test procedures in which the testing is done after varying amounts of work by the generator

  13. Improving effectiveness • To improve the effectiveness of a search-based problem-solving program, two things can be done: • Improve the generate procedure so that only good moves (or paths) are generated. • Improve the test procedure so that the best moves (or paths) will be recognized and explored first.

  14. Searching • In game-playing programs, it is particularly important that both these things be done. • Therefore, instead of a legal-move generator, a plausible-move generator is used; by incorporating heuristic knowledge into both the generator and the tester, the overall system can be improved. • In game playing, as in other problem domains, search is not the only available technique. E.g., in chess both the opening and the end game are highly stylized, so they are best played by table lookup into a database of stored patterns.

  15. Static evaluation function • To play an entire game we need to combine search-oriented and non-search-oriented techniques. • The ideal way to use a search procedure to find a solution to a problem is to generate moves through the problem space until a goal state is reached. Unfortunately, for interesting games such as chess it is not usually possible, even with a good plausible-move generator, to search until a goal state is found. The depth and branching factor are too great.

  16. Static Evaluation Function • In the amount of time available, it is usually possible to search a tree only ten or twenty moves deep (a move at a single level of the tree is called a ply). Then, in order to choose the best move, the resulting board positions must be compared to discover which is most advantageous. This is done using a static evaluation function, which uses whatever information it has to evaluate individual board positions by estimating how likely they are to lead eventually to a win.

  17. Static evaluation functions • A lot of work in game-playing programs has gone into the development of good sefs (static evaluation functions). A very simple sef for chess was proposed by Turing: simply add the values of the black pieces (B) and the values of the white pieces (W) and then compute the quotient W/B. • A more sophisticated approach was used by Samuel's checkers program, in which the sef was a linear combination of several simple functions, each of which appeared as though it might be significant, e.g., piece advantage, capability for advancement, control of the center, threat of a fork, and mobility.
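
A hedged sketch of the two kinds of static evaluation function mentioned above; the piece values, feature names, and weights are illustrative assumptions only:

    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def turing_eval(white_pieces, black_pieces):
        # Turing's quotient: total white material W divided by black material B.
        w = sum(PIECE_VALUES[p] for p in white_pieces)
        b = sum(PIECE_VALUES[p] for p in black_pieces)
        return w / b

    def samuel_style_eval(features, weights):
        # Samuel-style sef: a linear combination of simple board features such
        # as piece advantage, control of the center, and mobility.
        return sum(weights[name] * value for name, value in features.items())

    print(turing_eval(["Q", "R", "P"], ["R", "B", "P"]))   # 15 / 9
    print(samuel_style_eval({"piece_advantage": 2, "center": 1, "mobility": 5},
                            {"piece_advantage": 3.0, "center": 1.0, "mobility": 0.5}))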

  18. Mini Max Search Procedure • For a simple one-person game or puzzle, the A* algorithm can be used. It can be applied to reason forward from the current state as far as possible in the time allotted. But because of their adversarial nature, this procedure is inadequate for two-person games such as chess. As values are passed back up the tree, different assumptions must be made at the levels where the program chooses the move and at the alternating levels where the opponent chooses.

  19. Mini Max • The mini max search procedure is a depth-first, depth-limited search procedure. • The idea is to start at the current position and use the plausible-move generator to generate the set of possible successor positions. • Now we can apply the sef to those positions and simply choose the best one. • After doing so, we can back up the value to the starting position to represent our evaluation of it.

  20. Mini Max • The starting position is exactly as good for us as the position generated by the best move we can make next. • Here we assume that the static evaluation function returns large values to indicate good situations for us, so our goal is to maximize the value of the sef of the next board position
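
A depth-limited minimax sketch matching this description; the successor function, the sef, and the toy tree below are illustrative assumptions:

    def minimax(position, depth, maximizing, successors, sef):
        children = successors(position)
        if depth == 0 or not children:
            return sef(position)                 # static value at the search frontier
        values = [minimax(c, depth - 1, not maximizing, successors, sef)
                  for c in children]
        return max(values) if maximizing else min(values)

    # Toy tree: B backs up min(-2, 9) = -2, C backs up min(-6, 0) = -6,
    # so the maximizer at the root A backs up max(-2, -6) = -2.
    tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
    leaves = {"D": -2, "E": 9, "F": -6, "G": 0}
    print(minimax("A", 2, True, lambda p: tree.get(p, []), lambda p: leaves.get(p, 0)))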

  21. Mini Max Example • (Game-tree figure, not reproduced here.) Nodes A through K carry the values A = -2, B = -6, C = -2, D = -2, E = 9, F = -6, G = 0, H = 0, I = -2, J = -4, K = -3, illustrating how static values at the frontier are backed up toward the root A.

  22. Alpha Beta Cutoff • We use the alpha-beta search procedure, a slightly modified form of Mini Max, to reduce our search space. • Show example.
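
A hedged sketch of the alpha-beta modification: alpha is the best value the maximizer can already guarantee, beta the best the minimizer can guarantee, and remaining siblings are cut off once alpha >= beta. The signature mirrors the minimax sketch above and is an assumption, not the lecture's own code:

    def alphabeta(position, depth, alpha, beta, maximizing, successors, sef):
        children = successors(position)
        if depth == 0 or not children:
            return sef(position)
        if maximizing:
            value = float("-inf")
            for child in children:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, successors, sef))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break        # cutoff: the minimizer will never allow this branch
            return value
        else:
            value = float("inf")
            for child in children:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, successors, sef))
                beta = min(beta, value)
                if alpha >= beta:
                    break        # cutoff: the maximizer will never allow this branch
            return value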

  23. Final Exam Course • Introduction (Chap 1)* • Problems, Problem Spaces & Search (Chap 2)* • Depth first • Breadth First • Heuristic Search Techniques (3.1, 3.2, 3.3)* • Generate and test • Hill Climbing • Best First • A • A*

  24. Final Exam Continued… • Predicate Logic (Chap 5)*** • Uncertainty Issues (7.1)** • Game Playing (12.1, 12.2, 12.3)*** • Slides 1 to 11 ** • Notes placed in copy center*** • Representing Knowledge Using Rules (6.1, 6.2, 6.3)*** • Backward reasoning • Forward reasoning • (* indicates importance in exam)
