CS3243: Introduction to Artificial Intelligence


  1. CS3243: Introduction to Artificial Intelligence Semester 2, 2017/2018

2. Previously… • Uninformed search strategies use only the information available in the problem definition • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search • This class – exploit additional information!

  3. BFS and IDS are complete if b is finite. UCS is complete if b is finite and every step cost ≥ ε > 0. BFS and IDS are optimal if step costs are identical.

  4. Choosing a Search Strategy • Depends on the problem: • Finite/infinite depth of search tree • Known/unknown solution depth • Repeated states • Identical/non-identical step costs • Completeness and optimality needed? • Resource constraints (e.g., time, space)

  5. Can We Do Better? • Yes! Exploit problem-specific knowledge; obtain heuristics to guide search • Today: • Informed (heuristic) search • Order of node expansion still matters: Which nodes are “more promising”?

  6. INFORMED SEARCH AIMA Chapter 3.5.1 – 3.5.2, 3.6.1 – 3.6.2

  7. Outline • Best-first search • Greedy best-first search • A* search • Heuristics

  8. CMU’s Autonomous Car 2017

  9. Best-First Search • Idea: use an evaluation function f(n) for each node • Cost estimate → Expand node with lowest evaluation/cost first • Implementation: Frontier = priority queue ordered by nondecreasing cost f(n) • Special cases (different choices of f): • Greedy best-first search • A* search
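
A minimal Python sketch of the idea, assuming states are hashable and that `goal_test`, `successors`, and `f` are supplied by the problem; with f = h this behaves as greedy best-first search (A*, which also tracks the path cost g, is sketched after slide 14):

```python
import heapq, itertools

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly expand the frontier
    node with the lowest f value."""
    counter = itertools.count()        # tie-breaker so states are never compared
    frontier = [(f(start), next(counter), start)]
    parent = {start: None}             # doubles as the explored set
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if goal_test(node):
            path = []                  # reconstruct path via parent links
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in successors(node):
            if child not in parent:    # skip repeated states
                parent[child] = node
                heapq.heappush(frontier, (f(child), next(counter), child))
    return None                        # no solution found
```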

  10. Romania with Step Costs (km) [map: Romanian cities connected by roads labelled with distances in km]

  11. Greedy Best-First Search • Evaluation function f(n) = h(n) (heuristic function) = estimated cost of cheapest path from n to goal • e.g., hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that appears to be closest to the goal

  12. [Greedy best-first search on the Romania map: starting from Arad, the search expands Sibiu and then Fagaras, reaching Bucharest via Arad → Sibiu → Fagaras → Bucharest; frontier nodes shown include Timisoara, Zerind, Oradea, and Rim. Velcea]

  13. Properties of Greedy Best-First Search • Complete? Yes (if the state space is finite) • Optimal? No (the shortest path to Bucharest is Arad – Sibiu – Rim. Velcea – Pitesti – Bucharest at 418 km, not the greedy route via Fagaras at 450 km) • Time: O(b^m), but a good heuristic can reduce complexity substantially • Space: O(b^m) – max size of frontier • What Important Information Does the Algorithm Ignore?

  14. A* Search • Idea: Avoid expanding paths that are already expensive • Evaluation function f(n) = g(n) + h(n) • g(n) = cost of reaching n from start node • h(n) = cost estimate from n to goal • f(n) = estimated cost of cheapest path through n to goal
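
A minimal sketch of A* graph search under the same assumptions as the best-first sketch above, except that `successors(n)` now yields (child, step_cost) pairs so that g(n) can be tracked:

```python
import heapq, itertools

def a_star(start, goal_test, successors, h):
    """A* graph search with f(n) = g(n) + h(n); optimal when h is
    consistent (see slides 20-23)."""
    counter = itertools.count()            # tie-breaker: never compare states
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                    # cheapest known cost to each state
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                       # stale entry; a cheaper path was found
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), next(counter),
                                          g2, child, path + [child]))
    return None, float("inf")
```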

  15. [A* search on the Romania map: from Arad, the search expands Sibiu, Rim. Velcea, Fagaras, and Pitesti, reaching Bucharest via the optimal path Arad → Sibiu → Rim. Velcea → Pitesti → Bucharest; frontier nodes shown include Timisoara, Zerind, Oradea, and Craiova]

  16. Admissible Heuristics • A heuristic h(n) is admissible if, for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n • An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic • Example: hSLD(n) never overestimates the actual road distance

  17. Admissible Heuristics Theorem: If h(n) is admissible, then A* using Tree-Search is optimal

  18. Optimality of A* using Tree-Search Proof: Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G. Then • f(G2) = g(G2) and f(G) = g(G), since h = 0 at any goal • Since G2 is suboptimal, g(G2) > g(G) • So f(G2) > f(G)

  19. Optimality of A* using Tree-Search • f(G2) > f(G) from previous slide • Since h is admissible, h(n) ≤ h*(n), so f(n) = g(n) + h(n) ≤ g(n) + h*(n) = f(G) • Therefore f(G2) > f(G) ≥ f(n), so A* will never select G2 for expansion.

  20. Consistent Heuristics • A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n, a, n') + h(n') • If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n), i.e., f(n) is non-decreasing along any path.
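
Both conditions are easy to test on a finite graph. A small sketch, assuming the graph is given as (n, n', cost) edge triples and (for admissibility) that the true cost-to-goal h* is available, e.g., computed by uniform-cost search:

```python
def is_consistent(h, edges):
    """Consistency: h(n) <= c(n, a, n') + h(n') on every edge."""
    return all(h(n) <= cost + h(n2) for n, n2, cost in edges)

def is_admissible(h, h_star, nodes):
    """Admissibility: h(n) <= h*(n) on every node."""
    return all(h(n) <= h_star(n) for n in nodes)
```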

  21. Consistent Heuristics Theorem: If h(n) is consistent, then A* using Graph-Search is optimal

  22. Optimality of A* using Graph-Search • f(n) is non-decreasing along any path • A* expands nodes in non-decreasing order of f value • Gradually adds “f-contours” of nodes • Contour i has all nodes with f = f_i, where f_i < f_{i+1}

  23. Optimality of A* using Graph-Search Proof Idea: • If h(n) is consistent, then f(n) is non-decreasing along any path • Whenever A* selects a node n (e.g., a repeated state) for expansion, the optimal path to n has been found • We know that A* expands nodes in non-decreasing order of f • First expanded goal node is the optimal solution: • f is the true cost for goal nodes (since h = 0 at a goal) • all later goal nodes are at least as expensive

  24. Properties of A* Search • Complete? Yes (if there is a finite no. of nodes with f(n) ≤ C*, where C* is the actual cost of getting from root to goal) • Optimal? Yes • Time: exponential in the worst case • Space: max size of frontier (A* keeps all generated nodes in memory)

  25. Admissible vs. Consistent Heuristics • Why is consistency a stronger sufficient condition than admissibility? How are they related? • A consistent heuristic is admissible • An admissible heuristic MAY be inconsistent • An admissible but inconsistent heuristic cannot guarantee optimality of A* using Graph-Search • Problem: Graph-Search discards new paths to a repeated state. So, it may discard the optimal path. Previous proof breaks down. • Solution: Ensure that optimal path to any repeated state is always followed first.

  26. Admissible Heuristics • Let’s revisit the 8-puzzle • Branching factor is about 3 • Average solution depth is about 22 steps • Exhaustive tree search examines about 3^22 ≈ 3.1 × 10^10 states • How do we come up with good heuristics?

  27. Admissible Heuristics E.g., 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)
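
Both heuristics are a few lines of Python. A sketch, assuming states are 9-tuples with 0 for the blank and the conventional goal ordering:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)          # goal position of this tile
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```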

  28. Dominance • If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1. It follows that A* with h2 incurs lower search cost than A* with h1. Average search costs (nodes generated, from AIMA): at depth d = 12, IDS = 3,644,035, A*(h1) = 227, A*(h2) = 73; at d = 24, A*(h1) = 39,135, A*(h2) = 1,641.

  29. Deriving Admissible Heuristics • A problem with fewer restrictions on the actions is called a relaxed problem • The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem

  30. Deriving Admissible Heuristics • Rules of 8-puzzle: A tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank • We can generate three relaxed problems: • A tile can move from square A to square B if A is adjacent to B • A tile can move from square A to square B if B is blank • A tile can move from square A to square B

  31. Deriving Admissible Heuristics • If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) is the resulting heuristic • If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) (Manhattan distance) is the resulting heuristic

  32. LOCAL SEARCH AIMA Chapter 4.1

  33. Outline • Local search algorithms • Hill-climbing search • Simulated annealing • Local beam search • Genetic algorithms

  34. Local Search Algorithms • The path to the goal is irrelevant; the goal state itself is the solution • State space = set of “complete” configurations • Find final configuration satisfying constraints, e.g., n-queens • Local search algorithms: maintain a single “current best” state and try to improve it • Advantages: • very little/constant memory • find reasonable solutions in large state spaces

  35. Example: n-Queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal

  36. Hill-Climbing Search “Like climbing Mt. Everest in thick fog with amnesia”

  37. Hill-Climbing Search • Problem: depending on the initial state, can get stuck in local maxima • Non-guaranteed fixes: sideways moves, random restarts

  38. Hill-Climbing Search: 8-Queens • h(n) = number of pairs of queens that are attacking each other, either directly or indirectly • h = 17 for the board state pictured on the slide
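
A sketch of the heuristic and of steepest-descent hill climbing on it, assuming the usual complete-state formulation where `board[c]` is the row of the queen in column c:

```python
import random

def attacking_pairs(board):
    """h = number of queen pairs attacking each other (row or diagonal)."""
    n, h = len(board), 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1:
                h += 1
    return h

def hill_climb(n=8):
    """Move one queen within its column to the position that most
    reduces h; stop when no move improves (a local minimum or h = 0)."""
    board = [random.randrange(n) for _ in range(n)]
    while True:
        best_h, best_move = attacking_pairs(board), None
        for col in range(n):
            for row in range(n):
                if row == board[col]:
                    continue
                old, board[col] = board[col], row
                h_new = attacking_pairs(board)
                board[col] = old
                if h_new < best_h:
                    best_h, best_move = h_new, (col, row)
        if best_move is None:
            return board, attacking_pairs(board)
        board[best_move[0]] = best_move[1]
```

Random restarts are then just a loop that calls `hill_climb` until it returns h = 0.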

  39. Hill-Climbing Search: 8-Queens • Local minimum with h = 1

  40. Simulated Annealing Idea: escape local maxima by allowing some “downhill” moves, but gradually decreasing their frequency

  41. Properties of Simulated Annealing • If the temperature T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 • Widely used in VLSI layout, factory scheduling, and other large-scale optimization tasks
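
A minimal sketch, assuming a caller-supplied `neighbour` move generator, a `value` function to minimise, and a cooling `schedule` such as T(t) = T0 · 0.99^t:

```python
import math, random

def simulated_annealing(initial, neighbour, value, schedule, max_steps=100_000):
    """Always accept improving moves; accept a worse move with
    probability exp(-delta / T), where T = schedule(t) decays to 0."""
    current = initial
    for t in range(1, max_steps):
        T = schedule(t)
        if T <= 1e-12:
            break                     # effectively frozen
        nxt = neighbour(current)
        delta = value(nxt) - value(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = nxt             # downhill moves get rarer as T drops
    return current
```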

  42. Local Beam Search • Why keep just one best state? Keep the k best states instead • Problem: may quickly concentrate in a small region of the state space → Solution: stochastic beam search
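
A sketch of the deterministic variant, assuming value 0 means "solved" and minimisation throughout; the stochastic variant would sample the k survivors with probability decreasing in value instead of taking the top k:

```python
def local_beam_search(k, random_state, successors, value, iters=1000):
    """Pool the successors of all k current states each round and
    keep the k best; all states share one pool, unlike k restarts."""
    beam = [random_state() for _ in range(k)]
    for _ in range(iters):
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            break
        pool.sort(key=value)
        if value(pool[0]) == 0:       # assumes 0 means a goal configuration
            return pool[0]
        beam = pool[:k]
    return min(beam, key=value)
```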

  43. Genetic Algorithms • Idea: a successor state is generated by combining two parent states • Start with k randomly generated states (the population) • A state is represented as a string over a finite alphabet (often a string of 0s and 1s) • Evaluation function (fitness function): higher values for better states • Produce the next generation of states by selection, crossover, and mutation
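
A toy sketch of this loop, assuming states are tuples over `alphabet`, fitness is non-negative, and at least one individual has positive fitness in each generation (as in the 8-queens example on the next slide, with fitness = 28 − attacking pairs):

```python
import random

def genetic_algorithm(fitness, alphabet, length,
                      pop_size=20, generations=1000, mutation_rate=0.1):
    """Fitness-proportional selection, single-point crossover,
    random single-gene mutation, plus simple elitism."""
    pop = [tuple(random.choice(alphabet) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) for s in pop]
        new_pop = [max(pop, key=fitness)]                     # keep the best
        while len(new_pop) < pop_size:
            x, y = random.choices(pop, weights=weights, k=2)  # selection
            c = random.randrange(1, length)                   # crossover point
            child = x[:c] + y[c:]
            if random.random() < mutation_rate:               # mutation
                i = random.randrange(length)
                child = child[:i] + (random.choice(alphabet),) + child[i + 1:]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```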

  44. Genetic Algorithms Example • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28) • Selection probability proportional to fitness, e.g., 24 / (24 + 23 + 20 + 11) = 31%, 23 / (24 + 23 + 20 + 11) = 29%, etc.

  45. Genetic Algorithms: Crossover

  46. Exploration Problems • Exploration problems: the agent is physically present in some part of the state space • e.g., find a hotspot using an agent or agents with local temperature sensors • Sensible to expand states easily accessible to the agent (i.e., local states) • Local search algorithms apply (e.g., hill-climbing)
