159.302 Lecture 13 • Last time: • Games, minimax, alpha-beta • Today: • Finish off games, summary
Alpha-Beta Pruning • Can allow us to look twice as far ahead, but only if the tree is perfectly ordered. • Look at the best moves first, using the evaluation function to order the tree. • This allows us to get close to perfect ordering.
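The idea above can be sketched in Python. This is a minimal illustration, not the lecture's own code: `successors` and `evaluate` are placeholder functions supplied by the caller, and the children are sorted by the evaluation function so that the heuristically best moves are searched first, which maximizes cutoffs.

```python
import math

def alpha_beta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Minimax with alpha-beta pruning and heuristic move ordering."""
    children = list(successors(state))
    if depth == 0 or not children:
        return evaluate(state)
    # Order moves best-first according to the evaluation function.
    children.sort(key=evaluate, reverse=maximizing)
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN will never allow this line
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, successors, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: MAX already has something better
        return value
```

For example, on the nested-list tree `[[3, 12, 8], [2, 4, 6], [14, 5, 2]]` (leaves are utilities, MAX to move at the root), the search returns the minimax value 3 while pruning branches in the second and third subtrees.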
Heuristic Continuation (quiescence search) • You search for N ply in a tree and find a very good move. But perhaps, if you had searched just one ply further, you would have discovered that this move is actually very bad. • In general: • The analysis may stop just before your opponent captures one of your pieces, or just before you capture one of your opponent's pieces. • This is called the horizon effect: a good (or bad) move may be just over the horizon.
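A quiescence search softens the horizon effect by not trusting the evaluation function in "noisy" positions: at the depth cutoff it keeps searching forcing moves (typically captures) until the position is quiet. A rough negamax-style sketch, with `evaluate`, `noisy_moves` and `apply_move` as assumed caller-supplied functions (not from the lecture):

```python
def quiescence(state, alpha, beta, evaluate, noisy_moves, apply_move):
    """Search only noisy moves (e.g. captures) past the normal depth
    cutoff, so the static evaluation is applied to quiet positions.
    evaluate() is assumed to score from the side to move's viewpoint."""
    stand_pat = evaluate(state)          # score if we stop here
    if stand_pat >= beta:
        return beta                      # already too good: opponent avoids this
    alpha = max(alpha, stand_pat)
    for move in noisy_moves(state):      # captures/checks only, not all moves
        score = -quiescence(apply_move(state, move), -beta, -alpha,
                            evaluate, noisy_moves, apply_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

In a quiet position (no noisy moves) this simply returns the static evaluation clipped to the alpha-beta window, so it behaves exactly like the normal cutoff test there.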
The Singular Extension Heuristic: • Search should continue as long as one move's evaluation stands out from the rest. • If we don't use this heuristic, we risk harm from the horizon effect. • e.g. Here, black is ahead in material, but if white can reach the eighth rank with its pawn then it can win. Black can stall this for some time and so will never see that this is a bad position.
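The "stands out from the rest" test can be sketched as a simple margin check. The function name and the `margin` threshold are illustrative choices, not from the lecture:

```python
def is_singular(move_scores, margin=1.5):
    """Return True when the best move's evaluation exceeds every
    alternative by at least `margin` - a hint that the search should
    be extended along that move rather than cut off at this depth."""
    if len(move_scores) < 2:
        return True                      # a forced move is always singular
    ordered = sorted(move_scores, reverse=True)
    return ordered[0] - ordered[1] >= margin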
Forward Pruning • Human players usually prune near the top of the search tree: a good chess player will only consider a few of the possible moves. • This is called forward pruning. • It needs a very good evaluation function to work well.
Games of chance • Many games, such as backgammon, involve chance. • How can we draw a game tree for this?
Expectiminimax • Add chance nodes (circles) to the game tree.
Expectiminimax • Now each position has no known outcome, only an expected outcome. Chance nodes are evaluated by taking the weighted average of the values of all possible outcomes. • emmx(C) = Σi p(di) · emmx(si) • where • C is a chance node • di is a dice roll • p(di) is the probability of dice roll di occurring • si is the successor state associated with that dice roll.
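The formula above can be implemented directly. In this minimal sketch (the node encoding is my own, not from the lecture), a tree node is a tuple: `('leaf', value)`, `('max', children)`, `('min', children)`, or `('chance', pairs)` where each pair is `(probability, subtree)`:

```python
def expectiminimax(node):
    """Evaluate a game tree with max, min and chance nodes.
    A chance node's value is the probability-weighted average
    emmx(C) = sum_i p(d_i) * emmx(s_i) over its dice outcomes."""
    kind, payload = node
    if kind == 'leaf':
        return payload
    if kind == 'max':
        return max(expectiminimax(child) for child in payload)
    if kind == 'min':
        return min(expectiminimax(child) for child in payload)
    # chance node: weighted average over all dice rolls
    return sum(p * expectiminimax(child) for p, child in payload)
```

For instance, a MAX node choosing between a fair coin flip over leaves 2 and 4 (expected value 3.0) and a certain leaf worth 1 evaluates to 3.0.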
Complexity of Expectiminimax • Time complexity is now • O(bᵐnᵐ) • where • b is the branching factor • n is the number of chance outcomes for each max or min node • m is the maximum depth. • This extra cost can make games of chance very difficult to solve.
State of the Art in Games • Chess • Deep Blue (480 custom chips to perform the evaluation function) beat Garry Kasparov in 1997 • Draughts (checkers) • Chinook beat Marion Tinsley in 1994 after 40 years as world champion (he lost only 3 games in that time) • Backgammon • TD-Gammon is among the top 3 players in the world • Go • b > 300, so very difficult; the best programs are easily beaten by good humans • Bridge • GIB finished 12th out of 35 in 1998
Introduction • What is AI? • Getting computers to perform tasks which require intelligence when performed by humans. • Is it possible? • Let's hope so!
Introduction • How would we know if a computer was intelligent? • What is the Turing test? • What's wrong with the Turing test? • Is it too easy? • What is the Chinese room problem? • Is it too hard? • Types of AI tasks • Mundane tasks - easy for humans • Formal tasks - hard for humans • Expert tasks
Agents • What is AI's underlying assumption? • The Physical Symbol System Hypothesis • How can a machine solve problems? • By searching for solutions. • What is an agent? • What is some agent terminology? • What is a percept? • What is a percept sequence? • What is an agent function? • What is the agent program?
Agents • What is a rational agent? • How do we measure the success of an agent? • What is a performance measure? • Is a rational agent perfect? • What is an autonomous agent?
Agents • How do you design an agent? • What is the task environment? • What types of environment are there? • What do fully observable, deterministic, episodic, static, discrete and single-agent mean? • How do you write the agent program? • What is a table-driven agent? • Is this practical? • What is a simple reflex agent? • What is a model-based reflex agent?
Agents • What is a goal-based agent? • What is a utility-based agent? • What is the structure of a general learning agent?
Search • What is search? • What is the problem space? • What does search do? • What is the problem space for the 8-puzzle? • What is the problem space for the vacuum cleaner world? • Why use a tree instead of a graph? • How are AI search algorithms different from standard search algorithms? • What is the branching factor (b)? • What is the solution depth (d)?
Search • What types of search are there? • What is uninformed search? • What is informed search? • What types of uninformed search are there? • What are the properties of search algorithms? • What are completeness, time complexity, space complexity and optimality? • What is a node?
Search • What is breadth-first search? • How is it implemented? • What are its properties? • What is uniform cost search? • How is it implemented? • What are its properties? • What is depth-first search? • How is it implemented? • What are its properties?
Search • What is depth-limited search? • How is it implemented? • What are its properties? • What is iterative deepening? • How is it implemented? • What are its properties? • What is bidirectional search? • How is it implemented? • What are its properties? • How do you avoid repeated states?
Informed Search • What is an evaluation function? • What is a heuristic function? • What is greedy best-first search? • What problems does greedy best-first search have? • What is A* search? • Why is it a good idea? • What is an admissible heuristic? • Why is A* optimal?
Search • What is the complexity of A*? • How can the space complexity be improved? • What is iterative-deepening A*? • What is recursive best-first search? • What is simple memory-bounded A*? • How can you find a good heuristic? • What is dominance? • Can a heuristic be found automatically? • What are subproblems?
Local Search • What is local search? • What is the state space landscape? • What is gradient descent (hill climbing)? • What problems are there with gradient descent? • What is random restart? • What is simulated annealing? • What is local beam search? • What are genetic algorithms?
CSPs • What is a constraint satisfaction problem (CSP)? • How are constraints expressed? • What are some examples of real-world CSPs? • What is backtracking search? • How can this be improved? • Which variable should be chosen next? • What is the minimum remaining values heuristic? • What is the degree heuristic? • Which value should be chosen next? • What is the least constraining value heuristic?
CSPs • What is forward checking? • What is constraint propagation?
Games • What types of game are there? • What is a game tree? • What is a successor function? • What is a terminal test? • What is a utility function?
MINIMAX • What is the optimal strategy for a game? • What is the minimax algorithm? • What is the complexity of minimax? Time complexity? Space complexity? • What is an evaluation function? • What cutoff test should be used? • Can iterative deepening be used?
Alpha-Beta Pruning • What is alpha-beta pruning? How is it implemented? • What is the effectiveness of alpha-beta pruning? • What are the maximum savings possible? • What savings does it usually give?