
Introduction to Artificial Intelligence Blind Search


Presentation Transcript


  1. Introduction to Artificial Intelligence: Blind Search • Ruth Bergman • Fall 2002

  2. Searching for Solutions • Partial search tree for route finding from Arad to Bucharest (figure), with the goal test applied to each node: (a) the initial state (search node), Arad; (b) after expanding Arad: Sibiu, Timisoara, Zerind; (c) after choosing one option and expanding Sibiu: Arad, Fagaras, Oradea, Rimnicu Vilcea. • Which node should be expanded next? Which nodes should be stored in memory?

  3. Searching Strategies Depth-first Search • Expand the deepest node first. • Pseudocode: DFS(state, path): if goalp(state), return path; otherwise, for each c in succ(state), if DFS(c, path | state) finds a solution, return it.

  4. DFS implementation in Lisp
      ;; Depth-first search: try each child in turn; return the path to a goal, or NIL.
      ;; (The AND guards the case where a state has no successors.)
      (defun dfs (state)
        (cond ((goalp state) (list state))
              (t (do* ((children (new-states state) (cdr children))
                       (solution (and children (dfs (car children)))
                                 (and children (dfs (car children)))))
                      ((or solution (null children))
                       (if solution (cons state solution) nil))))))

  5. Search Strategies • Criteria • Completeness: if there is a solution, will the algorithm find it? • Time complexity: how much time does the algorithm take to arrive at a solution, if one exists? • Space complexity: how much space does the algorithm require? • Optimality: is the solution it finds optimal?

  6. Incompleteness of DFS • DFS is not complete: it fails in infinite-depth spaces and in spaces with loops. • Variants: • limit the depth of the search • avoid re-visiting nodes • avoid repeated states along the path => complete in finite spaces

  7. DFS with depth limit
      ;; Depth-limited DFS: as DFS, but give up along a path once the limit reaches zero.
      (defun dfs-d (state depth)
        (cond ((goalp state) (list state))
              ((zerop depth) nil)
              (t (do* ((children (new-states state) (cdr children))
                       (solution (and children (dfs-d (car children) (1- depth)))
                                 (and children (dfs-d (car children) (1- depth)))))
                      ((or solution (null children))
                       (if solution (cons state solution) nil))))))

  8. Searching Strategies DFS with Depth Limit Performance • Properties • Complete: No • Guaranteed to stop • Complete only if a solution exists at level L < d (where d is the depth limit) • Time complexity: O(b^d) • Best case: L • Worst case: (b^(d+1)-1)/(b-1), where b is the branching factor • Improved performance when there are many solutions • Space complexity: O(bd), i.e., linear space • Optimal: No
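  For a concrete sense of the worst case (using the same b = 10 and limit d = 5 as the iterative-deepening demonstration on slide 21): (10^6 - 1)/9 = 111,111 generated nodes, the same count as breadth-first search to depth 5, whereas the best case follows a single path of L nodes straight down to the shallowest solution.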

  9. DFS with no revisits • Avoid nodes that have already been expanded => requires storing every expanded node, i.e., exponential space complexity. • Not practical.
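  A minimal Common Lisp sketch of this variant, reusing goalp and new-states from the earlier slides; the *visited* hash table is a hypothetical addition, and keeping it (one entry per expanded node) is exactly the exponential space cost noted above:

      ;; DFS that never re-expands a node; *visited* must be cleared before each search.
      (defparameter *visited* (make-hash-table :test #'equal))

      (defun dfs-no-revisit (state)
        (cond ((goalp state) (list state))
              ((gethash state *visited*) nil)        ; already expanded: prune
              (t (setf (gethash state *visited*) t)  ; remember every expanded node
                 (dolist (child (new-states state) nil)
                   (let ((solution (dfs-no-revisit child)))
                     (when solution (return (cons state solution))))))))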

  10. DFS with no repeated states
      ;; Depth-limited DFS that skips any child already on the current path
      ;; (the path argument holds the ancestors of state).
      (defun dfs-d-g (state depth path)
        (cond ((goalp state) (list state))
              ((zerop depth) nil)
              (t (do* ((children (new-states state) (cdr children))
                       (solution (and children (not (member (car children) path))
                                      (dfs-d-g (car children) (1- depth) (cons state path)))
                                 (and children (not (member (car children) path))
                                      (dfs-d-g (car children) (1- depth) (cons state path)))))
                      ((or solution (null children))
                       (if solution (cons state solution) nil))))))
      => Complete in finite spaces

  11. Searching Strategies Backtracking Search • When states are expanded by applying operators, the algorithm expands one child at a time (by applying one operator). • If the search fails, backtrack and expand the other children. • Backtracking search results in even lower memory requirements than DFS, because only one successor is generated per expanded node. • (Figure: node-discovery order for backtracking search vs. DFS on the same tree.)
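  A minimal Common Lisp sketch of this idea, reusing goalp from the earlier slides; the operators parameter and the apply-op function (which applies one operator and returns the successor state, or nil if the operator does not apply) are hypothetical stand-ins for expanding by one operator at a time:

      ;; Backtracking search: generate one successor at a time by applying one
      ;; operator; on failure, fall through to the next operator (backtrack).
      (defun backtrack (state operators depth)
        (cond ((goalp state) (list state))
              ((zerop depth) nil)
              (t (dolist (op operators nil)
                   (let* ((child (apply-op op state))                    ; one successor only
                          (solution (and child (backtrack child operators (1- depth)))))
                     (when solution (return (cons state solution))))))))

  Only the single successor produced by the current operator exists at each level of the recursion, so a path of length m needs O(m) memory rather than the O(bm) of depth-first search, which stores all b children of every node on the path.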

  12. Searching Strategies DFS Summary • Advantages • Low space complexity. • Good chance of success when there are many solutions. • Complete if there is a solution shorter than the depth limit. • Disadvantages • Without the depth limit, the search may continue down an infinite branch. • Solutions longer than the depth limit will not be found. • The solution found may not be the shortest one.

  13. Searching Strategies Breadth-first Search • Expand the node with minimal depth. • Avoid revisiting nodes: since every node is in memory anyway, the additional cost is negligible.

  14. BFS implementation in Lisp
      ;; Breadth-first search: the queue holds (state . path) entries; children are
      ;; appended at the back, so shallower nodes are always expanded first.  The
      ;; front entry lists the path from the goal back to the start.
      (defun bfs (state)
        (let ((queue (list (list state nil))))
          (do* ((state (caar queue) (caar queue))
                (children (new-states state) (and queue (new-states state))))
               ((or (null queue) (goalp state))
                (if (null queue) nil (car queue)))
            (setq queue (append (cdr queue)
                                (mapcar #'(lambda (child) (cons child (car queue)))
                                        children))))))

  15. Searching Strategies BFS Performance • Properties • Complete: Yes (if b is finite) • Time complexity: 1+b+b^2+…+b^l = O(b^l) • Space complexity: O(b^l) (keeps every node in memory) • Optimal: Yes (if cost=1 per step); not optimal in general • where b is branching factor and • l is the depth of the shortest solution

  16. Searching Strategies Uniform cost Search • Expand the least-cost unexpanded node. • Breadth-first search is just uniform cost search with g(n) = DEPTH(n). • (Figure: a small example graph with start S, goal G and intermediate nodes A, B, C with edge costs; uniform cost search expands S (g = 0), then A (g = 1), then B (g = 5), and returns the path through B with cost 10 rather than the path through A with cost 11.)
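  A minimal Common Lisp sketch of uniform cost search, reusing goalp and new-states from the earlier slides; the cost function (the step cost from one state to a successor) is a hypothetical addition, and the frontier is simply kept as a list sorted by path cost g:

      ;; Uniform cost search: always expand the frontier entry with the smallest
      ;; path cost g.  Each frontier entry is (g goal-most-state ... start-state).
      (defun ucs (start)
        (do ((frontier (list (list 0 start))))
            ((null frontier) nil)                       ; frontier exhausted: no solution
          (let* ((entry (pop frontier))                 ; cheapest entry (list is sorted)
                 (g (first entry))
                 (path (rest entry))
                 (node (first path)))
            (when (goalp node)
              (return path))                            ; goal test on expansion
            (dolist (child (new-states node))
              (push (cons (+ g (cost node child)) (cons child path)) frontier))
            (setq frontier (sort frontier #'< :key #'first)))))

  Because the goal test is applied when an entry is taken off the sorted frontier rather than when it is generated, a more expensive path to the goal can never be returned before a cheaper one, which is what makes the search optimal when every step cost is at least epsilon.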

  17. Searching Strategies Uniform cost Search • Properties of Uniform Cost Search • Complete: Yes, if step cost >= e (epsilon) • Time complexity: # of nodes with g <= cost of optimal solution, O(b^l) • Space complexity: # of nodes with g <= cost of optimal solution, O(b^l) • Optimal: Yes, if step cost >= e (epsilon)

  18. Searching Strategies Iterative Deepening Search • Combine the best of both worlds • Depth-first search has linear memory requirements. • Breadth-first search gives an optimal solution. • Iterative Deepening Search executes depth-first search with depth limit 0, then 1, 2, and so on, until a solution is found. • The algorithm keeps no memory between searches.
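  A minimal Common Lisp sketch, reusing dfs-d from slide 7 and starting from limit 0 as on the next slide:

      ;; Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
      ;; until one of the searches returns a solution path.
      (defun ids (state)
        (do* ((depth 0 (1+ depth))
              (solution (dfs-d state depth) (dfs-d state depth)))
             (solution solution)))

  Each new iteration repeats the work of the previous ones, but as the numerical demonstration on slide 21 shows, the re-searching cost is small compared with expanding the deepest level.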

  19. Searching Strategies Iterative Deepening Search • Limit=0 • Limit=1 • Limit=2 • Limit=3 …

  20. Searching Strategies Iterative Deepening Search • Properties • Complete: Yes • Time complexity: (l+1)*b^0+l*b+(l-1)*b^2+…+1*b^l = O(b^l) • Space complexity: O(bl) • Optimal: Yes, if step cost = 1 • Can be modified to explore uniform-cost tree

  21. Searching Strategies Iterative Deepening Search - Discussion • Numerical demonstration: Let b=10, l=5. • BFS resource use (memory and # nodes expanded) 1+10+100+1000+10000+100000 = 111,111 • Iterative Deepening resource use • Memory requirement: 10*5 = 50 • # expanded nodes 6+50+400+3000+20000+100000 = 123,456 => re-searching cost is small compared with the cost of expanding the leaves

  22. Searching Strategies Bidirectional Search • Simultaneously search both forward from the initial state and backward from the goal, and stop when the two searches meet in the middle. • (Figure: the two search frontiers growing from Start and from Goal toward each other.)

  23. Searching Strategies Bidirectional Search Performance • Properties • Complete: Yes (using a complete search procedure for each half) • Time complexity: O(b^(l/2)) • Space complexity: O(b^(l/2)) • Optimal: Yes, if step cost = 1 • Can be modified to explore uniform-cost tree

  24. Bidirectional Search Discussion • Numerical Example (b=10, l=5) • Bi-directional search finds the solution at depth 3 in both the forward and the backward search. Assuming BFS in each half, 2,222 nodes are expanded (1 + 10 + 100 + 1000 in each direction). • Implementation issues: • Operators must be reversible. • There may be many possible goal states. • Check whether a node appears in the “other” search tree. • What is the best search strategy in each half?
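  A minimal Common Lisp sketch of the idea, assuming new-states can also be used to generate predecessors for the backward half (i.e., operators are reversible, per the implementation issues above) and that states can be compared with equal; it only detects that the two frontiers meet, and the parent links needed to recover the full path (as in the BFS code on slide 14) are left out:

      ;; Bidirectional search: grow a breadth-first frontier from each end, one
      ;; level per iteration, and stop as soon as the two frontiers share a state.
      (defun bidirectional-search (start goal)
        (do ((forward (list start) (mapcan #'new-states forward))
             (backward (list goal) (mapcan #'new-states backward)))
            ((or (null forward) (null backward)) nil)   ; one side exhausted: no solution
          (when (intersection forward backward :test #'equal)
            (return t))))                               ; frontiers meet in the middle

  Each frontier only has to grow to depth l/2 before they can intersect, which is where the O(b^(l/2)) figures on slide 23 come from.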

  25. Searching Strategies Comparison of Search Strategies (collecting the results from the preceding slides):
      Criterion   Breadth-first   Uniform-cost   Depth-first   Depth-limited   Iterative deepening   Bidirectional
      Complete?   Yes             Yes            No            Yes, if l < d   Yes                   Yes
      Time        b^l             b^l            b^m           b^d             b^l                   b^(l/2)
      Space       b^l             b^l            bm            bd              bl                    b^(l/2)
      Optimal?    Yes             Yes            No            No              Yes                   Yes
      • b is the branching factor; • l is the depth of the solution; • m is the maximum depth of the search tree; • d is the depth limit.
