
Chap 3 Solving Problems by Search



  1. Chap 3 Solving Problems by Search • AI techniques: 1) abstract symbol manipulation 2) use of knowledge 3) search • Why search? • logic --> algebraic --> analytic --> geometric --> statistical --> heuristic

  2. 3-1. Problem-solving agents 3-2. Formulating problems 3-3. Example problems 3-4. Searching for solutions 3-5. Search strategies 3-6. Avoiding repeated states 3-7. Constraint satisfaction problems

  3. 3-1. Problem-solving agents • a problem-solving agent is a kind of goal-based agent • a simple problem-solving agent • pseudo code [fig 3.1, pp. 57] • example: Road to Bucharest [fig 3.3, pp. 62] • perception --> goal --> problem formulation --> solving --> action

  4. 3-2. Formulating problems • four types of problems: 1) single-state problem 2) multiple-state problem 3) contingency problem 4) exploration problem • example: cleaning a room [fig 3.2, pp. 58] -- 8 states, 3 actions
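The 8 states come from 2 agent positions times 2 dirt values for each of the two cells; a minimal enumeration sketch (the state encoding and names below are illustrative, not from the slide):

```python
from itertools import product

# Two-cell vacuum world: the agent is in cell A or B, and each cell is
# either dirty or clean, giving 2 * 2 * 2 = 8 states.
ACTIONS = ["Left", "Right", "Suck"]

def all_states():
    """Enumerate every (agent_position, A_is_dirty, B_is_dirty) state."""
    return list(product("AB", [True, False], [True, False]))

print(len(all_states()), "states,", len(ACTIONS), "actions")  # -> 8 states, 3 actions
```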

  5. 1) Single-state problem • environment -- accessible (the agent knows exactly where it is) • actions -- effects are known • The action sequence can be completely planned in advance. 2) Multiple-state problem • environment -- partially accessible (e.g., no sensor, but a map) • actions -- effects are known • The agent must reason about the set of states it could be in after its actions.

  6. 3) Contingency problem • The exact effects of actions cannot be predicted. • Most real-world problems are like this (e.g., walking -- that's why we keep our eyes open). • Sometimes acting before planning is complete is useful (e.g., interleaving action & search, game playing). • Chapter 13. Planning and acting 4) Exploration problem • environment -- completely unknown (no sensor, no map) • Chapter 20. Reinforcement learning

  7. state-space search problem • search space: states (of environment + agent) -- initial state, goal state, operators (rules) • benefit, utility -- quality of the goal • cost -- computational expenses • rule application cost -- # of expanded nodes • control strategy cost
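A minimal sketch of how these ingredients (initial state, goal test, operators with costs) could be packaged for the searches that follow; the class and field names are my own, not from the text:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class SearchProblem:
    """A state-space search problem: initial state, goal test, and operators."""
    initial_state: Any
    is_goal: Callable[[Any], bool]                                 # goal test
    successors: Callable[[Any], Iterable[Tuple[Any, Any, float]]]  # state -> (operator, next_state, step_cost)
```

A concrete problem (e.g., the route-finding example of fig 3.3) would supply its own successors function; the search strategies of 3-5 only need these three pieces.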

  8. measuring performance • Is there a solution? • Is it a good solution (high benefit)? • Was it found at little cost? • The real art of problem solving is in deciding what to consider and what to leave out -- a trade-off, i.e., choosing the level of abstraction. • Reference: "How to Solve It", G. Polya, Princeton Univ Press, 1945, 1957.

  9. 3-3. Example problems • toy problem vs. real-world problem • small • abstract • exact (deterministic) • why toy problems? • abstract versions of real problems [examples: birthday party, pp. 746 ~ 748] • to test AI techniques • The hope is that toy problems scale up.

  10. 8-puzzle (sliding block puzzle) [fig 3.4, pp. 63] • NP-complete • 8-queens problem [fig 3.5, pp. 64] • 1848 -- first posed in a German chess magazine • 1850 -- Gauss found 72 of the 92 solutions. • cryptarithmetic [pp. 65]

  11. vacuum world [fig 3.6, pp. 66] i) complete information about the world ii) incomplete information about the world • missionaries and cannibals • problem: 3 missionaries, 3 cannibals, 1 boat that can hold 1 ~ 2 people; on each bank, # of missionaries ≥ # of cannibals • an early example of AI research -- Amarel 1968
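One possible state encoding and successor function for missionaries and cannibals, as a hedged sketch (the tuple layout and helper names are assumptions, not from the slide):

```python
# State: (missionaries on the left bank, cannibals on the left bank, boat on the left?)
START, GOAL = (3, 3, True), (0, 0, False)
BOAT_LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # the boat carries 1 or 2 people

def is_safe(m, c):
    """Missionaries are never outnumbered by cannibals on either bank."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, boat_left = state
    sign = -1 if boat_left else 1          # the boat carries people away from its own bank
    for dm, dc in BOAT_LOADS:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc):
            yield (nm, nc, not boat_left)
```

Any of the uninformed searches from section 3-5 can then be run from START to GOAL.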

  12. real-world problems 1) route finding e.g., routing in computer networks, airline travel planning 2) touring and TSP 3) VLSI layout See [Sec. 25.5, pp. 790] 4) robot navigation • generation of routes • continuous space • See [Winston, fig 5.6 ~ 5.10] 5) Internet search engines

  13. 3-4. Searching for solutions • search algorithm [pp. 64, Nilsson]
Create a search graph, GRAPH
GRAPH <--- { s } ; OPEN <--- { s } ; CLOSED <--- { }
loop:
  if OPEN = { }, then exit (fail)
  n <--- SELECT(OPEN) ; the search strategy
  remove n from OPEN ; CLOSED <--- CLOSED ∪ { n }
  if n = goal, then exit (success)
  CHILDREN <--- EXPAND(n)
  for each m ∈ CHILDREN, modify GRAPH (i.e., OPEN and CLOSED)
  REORDER(OPEN)
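A runnable Python rendering of the pseudocode above, as a minimal sketch; the function names and the `select` flag are illustrative, and the GRAPH bookkeeping is reduced to an OPEN frontier plus a CLOSED set:

```python
from collections import deque

def graph_search(start, is_goal, expand, select="fifo"):
    """Generic graph search; expand(n) returns the children of node n.
    The SELECT step -- which OPEN node to take next -- is the search strategy."""
    open_list = deque([start])          # OPEN
    closed = set()                      # CLOSED
    while open_list:                    # if OPEN = { }, exit (fail)
        n = open_list.popleft() if select == "fifo" else open_list.pop()
        if n in closed:
            continue
        closed.add(n)                   # CLOSED <- CLOSED U { n }
        if is_goal(n):
            return n                    # exit (success)
        for m in expand(n):             # CHILDREN <- EXPAND(n)
            if m not in closed:
                open_list.append(m)     # modify OPEN
    return None
```

With select="fifo" this behaves breadth-first; with "lifo" it behaves depth-first, which previews the strategies of section 3-5.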

  14. 3-5. Search strategies • criteria • completeness -- Does it find a solution whenever one exists? • admissibility (optimality) -- Does it find the best solution? • time complexity • space complexity • un-informed search (vs. informed search) • also called blind search • no information about the cost from the current state to the goal • only considers the cost from the start to the current state, g(n)

  15. 1) Breadth-first search • g(n) = depth(n) • completeness -- yes • admissible -- no (unless all step costs are equal) • time complexity -- bad [fig 3.12, pp. 75] • space complexity -- worse 2) Uniform-cost search • expand the lowest-cost node from OPEN [fig 3.13, pp. 76] • SELECT(OPEN) ==> min { OPEN } wrt g(n) • admissible if g(SUCCESSOR(n)) ≥ g(n) (i.e., non-decreasing path cost)
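Breadth-first search is the FIFO case of the generic sketch after slide 13; uniform-cost search only changes SELECT(OPEN) to take the node with the smallest g(n). A minimal sketch, assuming successors(n) yields (child, step_cost) pairs:

```python
import heapq
from itertools import count

def uniform_cost_search(start, is_goal, successors):
    """Expand the OPEN node with the lowest path cost g(n) first."""
    tie = count()                                  # breaks ties between equal-g nodes
    frontier = [(0, next(tie), start)]             # OPEN, ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, _, n = heapq.heappop(frontier)
        if is_goal(n):
            return n, g                            # goal state and its path cost
        for m, step_cost in successors(n):
            new_g = g + step_cost                  # non-decreasing if step costs >= 0
            if new_g < best_g.get(m, float("inf")):
                best_g[m] = new_g
                heapq.heappush(frontier, (new_g, next(tie), m))
    return None
```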

  16. 3) Depth-first search • completeness • admissible • time complexity • space complexity
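A minimal depth-first sketch under the same interface as above (names illustrative); the visited set keeps this graph version from looping, while tree-style DFS omits it and uses only O(bm) memory:

```python
def depth_first_search(start, is_goal, successors):
    """LIFO frontier: always expand the deepest unexpanded node first."""
    stack, visited = [start], set()
    while stack:
        n = stack.pop()
        if is_goal(n):
            return n
        if n in visited:
            continue
        visited.add(n)
        stack.extend(m for m in successors(n) if m not in visited)
    return None
```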

  17. 4) Depth-limited search • depth-first • cut-off on the max depth of a path (e.g., diameter of the graph) • time complexity • space complexity • completeness • admissible • problem -- How to determine the cut-off?
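A recursive sketch of depth-limited search (the 'cutoff' sentinel is my own convention): it distinguishes "no solution within the limit" from "no solution at all", which matters for the iterative-deepening variant on the next slide:

```python
def depth_limited_search(state, is_goal, successors, limit):
    """Depth-first search that never descends below `limit` levels.
    Returns the goal state, 'cutoff' if the limit was hit, or None on failure."""
    if is_goal(state):
        return state
    if limit == 0:
        return "cutoff"
    hit_cutoff = False
    for child in successors(state):
        result = depth_limited_search(child, is_goal, successors, limit - 1)
        if result == "cutoff":
            hit_cutoff = True
        elif result is not None:
            return result
    return "cutoff" if hit_cutoff else None
```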

  18. 5) Iterative deepening search • Try all possible depth limits in turn: 0, 1, 2, ... [fig 3.16, pp. 79] • Combines the benefits of depth-first (small memory) and breadth-first (completeness). • Some states are expanded multiple times, but the overhead is small. • # of expanded nodes: breadth-first vs. iterative deepening • if b = 10, d = 5: 111,111 vs. 123,456 (about 11% more)
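The node counts on the slide can be checked directly: breadth-first generates each node down to depth d once, while iterative deepening regenerates the depth-i nodes on every pass that reaches them, i.e. d + 1 - i times. A quick check for b = 10, d = 5:

```python
b, d = 10, 5

bfs_nodes = sum(b**i for i in range(d + 1))                # 1 + 10 + ... + 10^5 = 111,111
ids_nodes = sum((d + 1 - i) * b**i for i in range(d + 1))  # 6 + 50 + ... + 100,000 = 123,456

print(bfs_nodes, ids_nodes, f"({ids_nodes / bfs_nodes - 1:.0%} more)")
```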

  19. 6) Bi-directional search • search forward from the start and backward from the goal [fig 3.17, pp. 81] • time complexity • space complexity • problems • generating predecessors from the goal • multiple goals • implicit goals (example: the game of Go / baduk) • checking whether a node already appears on the other side
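A sketch of bidirectional breadth-first search (names illustrative): it needs a predecessors() function for the backward half, which is exactly the first problem the slide lists, and the intersection test is the "checking the other side" step:

```python
def bidirectional_search(start, goal, successors, predecessors):
    """Breadth-first from both ends; stop when the two frontiers meet."""
    if start == goal:
        return start
    frontier_f, frontier_b = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    while frontier_f and frontier_b:
        # Grow the smaller frontier one level to keep both sides balanced.
        if len(frontier_f) <= len(frontier_b):
            frontier_f = {m for n in frontier_f for m in successors(n)} - seen_f
            seen_f |= frontier_f
        else:
            frontier_b = {m for n in frontier_b for m in predecessors(n)} - seen_b
            seen_b |= frontier_b
        meeting = seen_f & seen_b
        if meeting:
            return meeting.pop()       # a state reached from both directions
    return None
```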

  20. 7) Comparison table [fig 3.18, pp 81]

  21. 3-6. Avoiding repeated states • If operators are reversible, states may repeat. • solutions 1) Do not return to the parent state. 2) Do not create a cycle (never repeat a state on the current path). 3) Do not re-create any state already on CLOSED (keep every expanded state in memory).
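The three remedies correspond to three increasingly strict (and increasingly expensive) filters on EXPAND; a minimal sketch, with helper names of my own:

```python
def expand_skip_parent(node, parent, successors):
    """1) Never go back to the state you just came from."""
    return [m for m in successors(node) if m != parent]

def expand_skip_cycles(node, path_so_far, successors):
    """2) Never create a path that revisits one of its own states."""
    return [m for m in successors(node) if m not in path_so_far]

def expand_skip_closed(node, closed, successors):
    """3) Never regenerate any state on CLOSED -- prunes the most,
    but CLOSED must hold every state ever expanded."""
    return [m for m in successors(node) if m not in closed]
```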

  22. 3-7. Constraint satisfaction search • state • constraints: unary, binary, n-ary; discrete, continuous • technique: relaxation
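The slide names relaxation as one technique; as a complementary illustration of a CSP with binary constraints, here is a tiny backtracking solver for map colouring (a standard CSP method, not taken from the slide; all names and the example data are illustrative):

```python
def backtrack_coloring(variables, domains, neighbors, assignment=None):
    """Assign variables one at a time, rejecting any colour already used
    by an assigned neighbour (a binary 'not equal' constraint)."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack_coloring(variables, domains, neighbors,
                                        {**assignment, var: value})
            if result is not None:
                return result
    return None

# Example: three mutually adjacent regions, three colours.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(backtrack_coloring(variables, domains, neighbors))  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue'}
```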
