Artificial Intelligence 3. Solving Problems By Searching
Goal Formulation • The first step in problem solving: given the current situation, the agent adopts a goal • A goal is a set of world states, exactly those in which the goal is satisfied • Actions cause transitions between world states Problem Formulation • The process of deciding what actions and states to consider; it follows goal formulation
Formulation of a simple problem-solving agent:
function SIMPLE-PROBLEM-SOLVING-AGENT(p) returns an action
  inputs: p // percept
  static: seq // action sequence, initially empty
          state // description of the current world
          g // goal, initially null
          problem // problem formulation
  state <- UPDATE-STATE(state, p)
  if seq is empty then
    g <- FORMULATE-GOAL(state)
    problem <- FORMULATE-PROBLEM(state, g)
    seq <- SEARCH(problem)
  action <- FIRST(seq)
  seq <- REST(seq)
  return action
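A minimal Python sketch of the agent loop above. The `update_state`, `formulate_goal`, `formulate_problem`, and `search` functions, and the `memory` dict holding the agent's static variables, are assumptions supplied by the caller, not part of the original pseudocode.

```python
def simple_problem_solving_agent(percept, memory):
    """One step of the SIMPLE-PROBLEM-SOLVING-AGENT sketch.

    memory is a dict holding the agent's static variables:
    'state', 'seq' (remaining action sequence), plus the
    update_state / formulate_goal / formulate_problem / search
    callables (all hypothetical placeholders here).
    """
    memory['state'] = memory['update_state'](memory['state'], percept)
    if not memory['seq']:  # no plan left: formulate goal + problem, then search
        goal = memory['formulate_goal'](memory['state'])
        problem = memory['formulate_problem'](memory['state'], goal)
        memory['seq'] = memory['search'](problem)
    # return the first action of the plan, keep the rest for later calls
    action, memory['seq'] = memory['seq'][0], memory['seq'][1:]
    return action
```

Each call consumes one action of the current plan and only re-plans when the sequence runs out, mirroring the "if seq is empty" branch of the slide.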
Examples (1): Traveling • On holiday in Taiif • Formulate Goal: be in Paris within two days • Formulate Problem • States: various cities • Actions: drive/fly between cities • Find Solution • Sequence of cities, e.g., Taiif, Jeddah, Riyadh, Paris
Examples (2): Vacuum World • 8 possible world states • 3 possible actions: Left / Right / Suck • Goal: clean up all the dirt = state(7) or state(8) • If the world is accessible, the agent's sensors give enough information to determine which state it is in (so it knows what each of its actions does), and it can calculate exactly which state it will be in after any sequence of actions. This is a Single-State problem. • If the world is inaccessible, the agent has limited access to the world state; it may have no sensors at all. It knows only that the initial state is one of the set {1,2,3,4,5,6,7,8}. This is a Multiple-State problem.
Problem Definition • Initial state • Operator: description of an action • State space: all states reachable from the initial state by any sequence of actions • Path: sequence of actions leading from one state to another • Goal test: a test the agent can apply to a single state description to determine whether it is a goal state • Path cost function: assigns a cost to a path, which is the sum of the costs of the individual actions along the path
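The components above can be collected into a small container; this is a sketch, assuming operators are given as a function from a state to (action, next-state) pairs, and paths as (state, action, result) triples. The `Problem` class and its field names are illustrative, not from the slides.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Problem:
    """Hypothetical container mirroring the slide's problem components."""
    initial_state: Any
    operators: Callable      # state -> iterable of (action, next_state)
    goal_test: Callable      # state -> bool
    step_cost: Callable = lambda state, action, result: 1

    def path_cost(self, path):
        """Sum of the individual step costs along a path of
        (state, action, result) triples."""
        return sum(self.step_cost(s, a, r) for s, a, r in path)
```

For instance, a two-step path under the default unit step cost has path cost 2.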
Vacuum World • States: S1, S2, S3, S4, S5, S6, S7, S8 • Operators: Go Left, Go Right, Suck • Goal test: no dirt left in either square • Path cost: each action costs 1
Real-world problems • Route finding • Routing in computer networks • Automated travel advisory systems • Airline travel planning systems • Goal: the best path between the origin and the destination • Travelling Salesperson Problem (TSP) • A famous touring problem in which each city must be visited exactly once • Goal: shortest tour
[Figure: partial search trees for the route-finding example. (a) The initial state: Mecca. (b) After expanding: successor cities Jeddah, Taiif, Medina, Al-Qassim, and Riyadh.]
Data Structure for Search Tree • DataType Node: data structure with 5 components • Components: • State • Parent-node • Operator • Depth • Path-cost
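The five-component node record above can be sketched directly; the `child` helper, which derives depth and path cost from the parent, is an assumption added for illustration.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Search-tree node with the five components listed in the slide."""
    state: Any
    parent: Optional['Node'] = None
    operator: Any = None       # action that generated this node
    depth: int = 0
    path_cost: float = 0.0

def child(parent, operator, state, step_cost=1):
    """Hypothetical helper: build a child node, deriving depth and
    path cost from the parent."""
    return Node(state, parent, operator,
                parent.depth + 1, parent.path_cost + step_cost)
```

A root node has depth 0 and path cost 0; each child adds one to the depth and the step cost to the path cost.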
Search Strategies Strategies are evaluated along 4 criteria: • Completeness: does it always find a solution when one exists? • Time complexity: how long does it take to find a solution? • Space complexity: how much memory does it need to perform the search? • Optimality: does the strategy find the highest-quality solution?
Example: vacuum world • Single-state, start in #5. Solution?
Example: vacuum world • Single-state, start in #5. Solution? [Right, Suck] • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution?
Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution?
Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
Vacuum world state space graph • states? • actions? • goal test? • path cost?
Vacuum world state space graph • states? integer dirt and robot locations • actions? Left, Right, Suck • goal test? no dirt at all locations • path cost? 1 per action
Example: The 8-puzzle • states? • actions? • goal test? • path cost?
Example: The 8-puzzle • states? locations of tiles • actions? move blank left, right, up, down • goal test? = goal state (given) • path cost? 1 per move [Note: finding the optimal solution of the n-puzzle family is NP-hard]
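A sketch of the 8-puzzle actions above, assuming states are represented as 9-tuples in row-major order with 0 marking the blank; each action slides the blank one cell, and horizontal moves must not wrap across rows.

```python
# Offsets of the blank for each action on a 3x3 board (row-major).
MOVES = {'left': -1, 'right': 1, 'up': -3, 'down': 3}

def successors(state):
    """Yield (action, next_state) pairs for legal blank moves."""
    b = state.index(0)                       # blank position 0..8
    for action, delta in MOVES.items():
        t = b + delta
        if t < 0 or t > 8:
            continue                         # off the board vertically
        if delta in (-1, 1) and t // 3 != b // 3:
            continue                         # no wrap across rows
        s = list(state)
        s[b], s[t] = s[t], s[b]              # swap blank with neighbour tile
        yield action, tuple(s)
```

From the goal state (blank in the bottom-right corner), only two actions are legal: sliding the blank left or up.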
Example: robotic assembly • states?: real-valued coordinates of robot joint angles and of the parts of the object to be assembled • actions?: continuous motions of robot joints • goal test?: complete assembly • path cost?: time to execute
Tree search algorithms • Basic idea: • offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
Single-state problem formulation A problem is defined by four items: • initial state, e.g., "at Arad" • actions or successor function S(x) = set of action–state pairs • e.g., S(Arad) = {<Arad → Zerind, Zerind>, … } • goal test, can be • explicit, e.g., x = "at Bucharest" • implicit, e.g., Checkmate(x) • path cost (additive) • e.g., sum of distances, number of actions executed, etc. • c(x,a,y) is the step cost, assumed to be ≥ 0 • A solution is a sequence of actions leading from the initial state to a goal state
Selecting a state space • The real world is very complex, so the state space must be abstracted for problem solving • (Abstract) state = set of real states • (Abstract) action = complex combination of real actions • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind" • (Abstract) solution = set of real paths that are solutions in the real world • Each abstract action should be "easier" than the original problem
Implementation: states vs. nodes • A state is a (representation of) a physical configuration • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
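The Expand function described above can be sketched as follows; nodes are plain dicts here so the example stays self-contained, and `successor_fn` (returning (action, result-state) pairs) and the default unit `step_cost` are assumptions.

```python
def expand(node, successor_fn, step_cost=lambda s, a, r: 1):
    """Create the child nodes of `node`, filling in state, parent,
    action, path cost g, and depth as described in the slide.
    successor_fn(state) is assumed to return (action, result) pairs."""
    return [{'state': result,
             'parent': node,
             'action': action,
             'g': node['g'] + step_cost(node['state'], action, result),
             'depth': node['depth'] + 1}
            for action, result in successor_fn(node['state'])]
```

Expanding a root node with two successors yields two children, each at depth 1 with g equal to the step cost.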
Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be ∞)
Uninformed search strategies • Uninformed search strategies use only the information available in the problem definition • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search
Breadth-first search • Expand shallowest unexpanded node • Implementation: • fringe is a FIFO queue, i.e., new successors go at end
Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step) • Space is the bigger problem (more than time)
Breadth-first search tree sample • Branching factor: the number of nodes generated by a parent node (called b here) • Hereafter, b = 2 [Figure: the search tree after 0, 1, 2, and 3 expansions]
Breadth-First Complexity • The root generates b new nodes • Each of these generates b more nodes • So the maximum number of nodes expanded before finding a solution at level d is: 1 + b + b^2 + b^3 + … + b^d • Complexity is exponential: O(b^d)
Breadth First Algorithm Void breadth () { queue=[]; //initialize the empty queue state = root_node; // initialize the start state while (! Is_goal( state ) ) { if !visited(state) do add_to_back_of_queue(successors(state)); markVisited(state); if queue empty return FAILURE; state = queue[0]; //state=first item in queue remove_first_item_from (queue); } return SUCCESS }
Time and memory requirements of breadth-first search. Assume branching factor b = 10; 1,000 nodes explored per second; 100 bytes per node.
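Under the stated assumptions (b = 10, 1,000 nodes/second, 100 bytes/node), the time and memory for a given solution depth d can be recomputed from the node count 1 + b + b^2 + … + b^d; the helper below is illustrative, not from the slide.

```python
def bfs_cost(b, d, nodes_per_sec=1000, bytes_per_node=100):
    """Nodes generated up to depth d, plus the time (seconds) and
    memory (bytes) implied by the slide's assumptions."""
    nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

for d in (2, 4, 6):
    nodes, secs, mem = bfs_cost(10, d)
    print(f"d={d}: {nodes} nodes, {secs:.1f} s, {mem / 1e6:.2f} MB")
```

Even at modest depths the memory requirement dominates: each added level multiplies both time and space by roughly b.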
Example (Routing Problem) [Figure: graph with start S and goal G; edges S→A cost 1, S→B cost 5, S→C cost 15, A→G cost 10, B→G cost 5, C→G cost 5]
Solution of the routing problem using breadth-first search • Start: Sol = empty, Cost = infinity • Expand S, then A, B, C in FIFO order • S→A→G found: new solution, Sol = {S,A,G}, Cost = 11 • S→B→G found: better than the current solution, so Sol = {S,B,G}, Cost = 10 • S→C→G found (Cost = 20): new solution found but not better than what we have • Final: Sol = {S,B,G}, Cost = 10
Solution of the routing problem using uniform-cost search • Start: Sol = empty, Cost = infinity • Expand nodes in order of path cost: S (0), then A (1), then B (5) • S→A→G found: Sol = {S,A,G}, Cost = 11 • S→B→G found: better than the current solution, so Sol = {S,B,G}, Cost = 10 • C will not be expanded, as its cost (15) is greater than the current solution • Final: Sol = {S,B,G}, Cost = 10
Uniform-cost search • Expand the least-cost unexpanded node • Implementation: fringe = queue ordered by path cost • Equivalent to breadth-first if step costs are all equal • Complete? Yes, if step cost ≥ ε • Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution • Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) • Optimal? Yes, nodes are expanded in increasing order of g(n)
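A sketch of uniform-cost search using a priority queue ordered by path cost g; the `EDGES` adjacency map encodes the routing example's graph as reconstructed from the slides, and the dict-of-lists representation is an assumption.

```python
import heapq

def uniform_cost(start, goal, edges):
    """Expand the least-g node first.
    edges maps state -> [(step_cost, next_state), ...]."""
    frontier = [(0, start, [start])]        # priority queue ordered by g
    best = {}                               # cheapest g seen per expanded state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                   # goal test on expansion => optimal
            return g, path
        if best.get(state, float('inf')) <= g:
            continue                        # already expanded more cheaply
        best[state] = g
        for cost, nxt in edges.get(state, []):
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

# The routing graph from the example slides.
EDGES = {'S': [(1, 'A'), (5, 'B'), (15, 'C')],
         'A': [(10, 'G')], 'B': [(5, 'G')], 'C': [(5, 'G')]}
```

On this graph it returns the optimal tour S, B, G with cost 10; C is never expanded, since its path cost 15 already exceeds that solution.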
Depth-first search • Expand deepest unexpanded node • Implementation: • fringe = LIFO queue, i.e., put successors at front
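The LIFO discipline above can be sketched recursively (the call stack plays the role of the LIFO queue); `successors(state)` returning neighbouring states is an assumption, and the visited set guards against cycles.

```python
def depth_first(start, successors, is_goal, visited=None):
    """Depth-first search: always explore the deepest unexpanded
    node first. Returns a path to a goal, or None on failure."""
    if visited is None:
        visited = set()
    if is_goal(start):
        return [start]
    visited.add(start)
    for nxt in successors(start):
        if nxt not in visited:               # skip states already on the path
            rest = depth_first(nxt, successors, is_goal, visited)
            if rest is not None:
                return [start] + rest
    return None
```

Unlike breadth-first search, the path found this way is not guaranteed to be the shallowest or cheapest; it is simply the first one reached by going deep.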