Problem Solving as Search
Foundations of Artificial Intelligence
Search and Knowledge Representation
• Goal-based and utility-based agents require a representation of:
  • states within the environment
  • actions and their effects (the effect of an action is a transition from the current state to another state)
  • goals
  • utilities
• Problems can often be formulated as search problems:
  • to satisfy a goal, the agent must find a sequence of actions (a path in the state-space graph) from the starting state to a goal state.
• To do this efficiently, agents must be able to reason with their knowledge about the world and the problem domain:
  • which path to follow (which action to choose) next
  • how to determine whether a goal state has been reached, or how to decide whether a satisfactory state has been reached.
Introduction to Search
• Search is one of the most powerful approaches to problem solving in AI.
• Search is a universal problem-solving mechanism that:
  • systematically explores the alternatives
  • finds the sequence of steps toward a solution
• Problem Space Hypothesis (Allen Newell, SOAR: An Architecture for General Intelligence):
  • all goal-oriented symbolic activity occurs in a problem space
  • search in a problem space is claimed to be a completely general model of intelligence
Problem-Solving Agents
• The agent follows a simple "formulate, search, execute" design:

  function Simple-Problem-Solving-Agent(p) returns an action
    inputs: p, a percept
    static: s, an action sequence, initially empty
            state, a description of the current world state
            g, a goal, initially null
            problem, a problem formulation

    state ← Update-State(state, p)
    if s is empty then
        g ← Formulate-Goal(state)
        problem ← Formulate-Problem(state, g)
        s ← Search(problem)
    end if
    action ← First(s)
    s ← Rest(s)
    return action

• Assumptions about the environment:
  • Static: formulating and solving the problem does not take any changes into account
  • Discrete: alternative courses of action can be enumerated
  • Deterministic: the outcome of an action is completely determined by the current state and the action
  • Observable: the initial state is completely known
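A minimal Python rendering of this control loop, a sketch only: the four components (update_state, formulate_goal, formulate_problem, search) are passed in as placeholders for whatever formulation and search machinery the agent uses; they are not part of any particular library.

  class SimpleProblemSolvingAgent:
      def __init__(self, update_state, formulate_goal, formulate_problem, search):
          # The four components are injected; each is a plain callable.
          self.update_state = update_state
          self.formulate_goal = formulate_goal
          self.formulate_problem = formulate_problem
          self.search = search
          self.seq = []          # action sequence still to be executed
          self.state = None      # current description of the world

      def __call__(self, percept):
          self.state = self.update_state(self.state, percept)
          if not self.seq:                                   # no plan left: formulate and search
              goal = self.formulate_goal(self.state)
              problem = self.formulate_problem(self.state, goal)
              self.seq = list(self.search(problem))          # search returns a list of actions
          return self.seq.pop(0)                             # execute the next action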
Stating a Problem as a Search Problem
• State space S
• Successor function: for each x in S, SUCCESSORS(x) is the set of states reachable from x in one move
• Cost of a move
• Initial state s0 in S
• Goal test: for each state x in S, GOAL?(x) = T or F
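These components map directly onto a small problem-definition interface. A minimal sketch, with illustrative class and method names (not taken from the slides):

  class SearchProblem:
      # Initial state, successor function with step costs, and goal test.
      def __init__(self, initial_state):
          self.initial_state = initial_state

      def successors(self, state):
          """Yield (action, next_state, step_cost) triples for state."""
          raise NotImplementedError

      def goal_test(self, state):
          """Return True if state satisfies the goal."""
          raise NotImplementedError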
Example (Romania)
• Initial situation:
  • on holiday in Romania; currently in Arad
  • flight leaves tomorrow from Bucharest
• Formulate goal:
  • be in Bucharest
• Formulate problem:
  • states: various cities
  • operators: drive between cities
• Find solution:
  • a sequence of cities that starts at the initial state (Arad) and ends in the goal state (Bucharest)
Example (Romania): map of Romanian cities and road distances (figure)
Example: Vacuum World
• Vacuum world:
  • the world consists of two rooms
  • each room may contain dirt
  • the agent may be in either room
• Initial state: both rooms dirty
• Goal: both rooms clean
• Problem:
  • states: the agent's location plus the dirt status of the two rooms (8 possible states)
  • actions: move from room to room; vacuum the dirt
• Solution:
  • a sequence of actions leading to clean rooms
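A compact sketch of this formulation, encoding a state as (agent room, dirt in left room, dirt in right room); the encoding is one possible choice, not prescribed by the slides.

  # Vacuum world: state = (agent_room, dirt_left, dirt_right), giving
  # 2 * 2 * 2 = 8 states.  Actions: 'Left', 'Right', 'Suck'.
  def vacuum_successors(state):
      room, dirt = state[0], list(state[1:])
      result = []
      result.append(('Left',  (0, *dirt)))            # move to the left room
      result.append(('Right', (1, *dirt)))            # move to the right room
      cleaned = dirt[:]
      cleaned[room] = 0
      result.append(('Suck', (room, *cleaned)))       # vacuum the current room
      return result

  def vacuum_goal(state):
      return state[1] == 0 and state[2] == 0          # both rooms clean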
Problem Types
• Deterministic, fully observable ==> single-state problem
  • the agent has enough information to know exactly which state it is in
  • the outcomes of actions are known
• Deterministic, partially observable ==> multiple-state problem
  • "sensorless problem": limited or no access to the world state; the agent may have no idea which state it is in
  • requires the agent to reason about the sets of states it might be in
• Nondeterministic, partially observable ==> contingency problem
  • must use sensors during execution; percepts provide new information about the current state
  • no fixed action sequence guarantees a solution (must consider the whole contingency tree)
  • often interleaves search and execution
• Unknown state space ==> exploration problem ("online")
  • the only hope is to use learning (e.g., reinforcement learning) to determine the potential results of actions and information about states
Example: Vacuum World
• Single-state:
  • start in #5. Solutions?
• Multiple-state:
  • start in {1,2,3,4,5,6,7,8}
  • e.g., Right goes to {2,4,6,8}. Solutions?
• Contingency:
  • start in #5
  • e.g., Suck can dirty a clean carpet
  • local sensing: dirt and location only. Solutions?
(Figure: the eight vacuum-world states, with the goal states marked.)
Single-State Problem Formulation
• A problem is defined by four items:
  • initial state
    • e.g., "at Arad"
  • operators (or successor function S(x))
    • e.g., Arad ==> Zerind, Arad ==> Sibiu
  • goal test, which can be
    • explicit, e.g., x = "at Bucharest"
    • implicit, e.g., NoDirt(x)
  • path cost (additive)
    • e.g., sum of distances, number of operators executed, etc.
• A solution is a sequence of operators leading from the initial state to a goal state.
Selecting a State Space
• The real world is absurdly complex
  • the state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) operator = complex combination of real actions
  • e.g., "Arad ==> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  • for guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
• (Abstract) solution = set of real paths that are solutions in the real world
• Each abstract action should be "easier" than the original problem!
Example: Vacuum World
• States? integer dirt and robot locations (ignore dirt amounts)
• Operators? Left, Right, Suck
• Goal test? no dirt
• Path cost? one per move
• What if the agent had no sensors? The multiple-state problem.
(Figure: vacuum-world states, with the goal states marked.)
Example: The 8-Puzzle
• States? integer locations of the tiles
• Operators? move the blank left, right, up, or down
• Goal test? state = goal state (given)
• Path cost? one per move
• Note: finding an optimal solution of the n-puzzle is NP-hard.
8-Puzzle: Successor Function (figure: a state and the states produced by each legal move of the blank)
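The successor function can be written directly from the "move the blank" operators. A sketch, assuming a state is a tuple of 9 tile values in row-major order with 0 standing for the blank:

  # 8-puzzle successors: state is a 9-tuple, row-major, 0 = blank.
  def puzzle_successors(state):
      i = state.index(0)                       # position of the blank
      row, col = divmod(i, 3)
      moves = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}
      result = []
      for action, delta in moves.items():
          if action == 'Up' and row == 0:      continue
          if action == 'Down' and row == 2:    continue
          if action == 'Left' and col == 0:    continue
          if action == 'Right' and col == 2:   continue
          j = i + delta
          s = list(state)
          s[i], s[j] = s[j], s[i]              # slide the neighbouring tile into the blank
          result.append((action, tuple(s)))
      return result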
State-Space Graph
• The state-space graph is a representation of all possible legal configurations of the problem resulting from applications of legal operators:
  • each node in the graph represents a possible legal state
  • each directed edge represents a possible legal move applied to a state (resulting in a new state of the problem)
• States:
  • the representation of states should provide all the information necessary to describe the relevant features of a problem state
• Operators:
  • operators may be simple functions representing legal actions
  • operators may be rules specifying an action to take when a condition (a set of constraints) on the current state is satisfied
  • in the latter case, the rules are sometimes referred to as "production rules" and the system as a production system
  • this is the case with simple reflex agents
Vacuum World State-Space Graph
• The state-space graph does not single out initial or goal states.
• Search problem: given specific initial and goal states, find a path in the graph from an initial state to a goal state.
• An instance of a search problem can be represented as a "search tree" whose root node is the initial state.
Solution to the Search Problem
• A solution is a path connecting the initial node to a goal node (any one).
• The cost of a path is the sum of the edge costs along the path.
• An optimal solution is a solution path of minimum cost.
• There might be no solution!
State Spaces Can Be Very Large
• 8-puzzle: 9! = 362,880 states
• 15-puzzle: 16! ≈ 2 x 10^13 states
• 24-puzzle: 25! ≈ 1.6 x 10^25 states
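These counts are just factorials of the board size (every arrangement of the tiles plus the blank counted as a state), which is easy to verify:

  import math

  for n in (9, 16, 25):                      # 8-, 15-, and 24-puzzle boards
      print(f"{n - 1}-puzzle: {n}! = {math.factorial(n):.3e} states")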
Searching the State Space
• Often it is not feasible to build a complete representation of the state graph.
• A problem solver must construct a solution by exploring only a small portion of the graph.
• For a specific search problem (with a given initial state and goal), we can view the relevant portion as a search tree.
Searching the State Space (sequence of figures: the search tree grown one expansion at a time from the initial state)
Portion of the Search Space for an Instance of the 8-Puzzle Problem (figure)
Simple Problem-Solving Agent Algorithm
• s0 ← sense/read initial state
• GOAL? ← select/read goal test
• Succ ← select/read successor function
• solution ← search(s0, GOAL?, Succ)
• perform(solution)
Example: Blocks World Problem
• The world consists of blocks A, B, C, and the Floor.
• We can move a block that is "clear" onto another clear block or onto the Floor.
• State representation: using the predicate on(x, y)
  • on(x, y) means block x is on top of block y
  • on(x, Floor) means block x is on the Floor
  • on(_, x) means block x has nothing on it (it is "clear")
• Operators can be specified as a set of production rules (a code sketch follows below):
  • 1. on(_, x) ==> on(x, Floor)
  • 2. on(_, x) and on(_, y) ==> on(x, y)
• Initial state: some initial configuration
  • e.g., on(A, Floor) and on(C, A) and on(B, Floor) and on(_, B) and on(_, C)
• Goal state: some specified configuration
  • e.g., on(B, C) and on(A, B)
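One way to realize these production rules in code, representing a state as a mapping from each block to whatever it sits on (an illustrative encoding, not the predicate form used on the slide):

  # Blocks world successors.  A state maps each block to its support
  # ('Floor' or another block).  A block is clear if nothing is on it.
  BLOCKS = ['A', 'B', 'C']

  def clear(state, x):
      return all(state[b] != x for b in BLOCKS)

  def blocks_successors(state):
      result = []
      for x in BLOCKS:
          if not clear(state, x):
              continue
          # Rule 1: a clear block can be moved to the Floor.
          if state[x] != 'Floor':
              result.append((f'move({x}, Floor)', {**state, x: 'Floor'}))
          # Rule 2: a clear block can be moved onto another clear block.
          for y in BLOCKS:
              if y != x and clear(state, y) and state[x] != y:
                  result.append((f'move({x}, {y})', {**state, x: y}))
      return result

  # Example initial state: C is on A; A and B are on the Floor.
  initial = {'A': 'Floor', 'B': 'Floor', 'C': 'A'}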
Blocks World: State-Space Graph
(Figure: all configurations of blocks A, B, C, connected by the moves generated by the two production rules:
  1. on(_, x) ==> on(x, Floor)
  2. on(_, x) and on(_, y) ==> on(x, y))
Blocks World: A Search Problem
(Figure: search tree for one instance of the problem, starting with A on B and C on the Floor, with goal A on B on C.)
• Notes:
  • Repeated states have been eliminated in the diagram.
  • The highlighted path represents (in this case) the only solution for this instance of the problem.
  • The solution is a sequence of legal actions: move(A, Floor), move(B, C), move(A, B).
Some Other Problems
8-Queens Problem
• Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.
(Figures: a solution and a non-solution.)
Formulation #1
• States: all arrangements of 0, 1, 2, ..., or 8 queens on the board
• Initial state: no queens on the board
• Successor function: each successor is obtained by adding one queen in an empty square
• Arc cost: irrelevant
• Goal test: 8 queens are on the board, with no two of them attacking each other
• About 64 x 63 x ... x 57 ≈ 1.8 x 10^14 states
Formulation #2
• States: all arrangements of k = 0, 1, 2, ..., or 8 queens in the k leftmost columns with no two queens attacking each other
• Initial state: no queens on the board
• Successor function: each successor is obtained by adding one queen, in the leftmost empty column, to any square not attacked by a queen already on the board
• Arc cost: irrelevant
• Goal test: 8 queens are on the board
• Only 2,057 states (a few lines of code, sketched below, can verify this count)
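A small sketch that enumerates the states of formulation #2: a state is a tuple of row indices, one per filled column (leftmost columns first), with no two queens attacking each other. Counting every such partial placement, including the empty board, reproduces the 2,057 figure.

  def attacks(rows, new_row):
      # Would a queen in the next column, at new_row, attack any placed queen?
      col = len(rows)
      return any(r == new_row or abs(r - new_row) == abs(c - col)
                 for c, r in enumerate(rows))

  def count_states(rows=()):
      if len(rows) == 8:
          return 1                              # a full, non-attacking placement
      total = 1                                 # count this partial state itself
      for row in range(8):
          if not attacks(rows, row):
              total += count_states(rows + (row,))
      return total

  print(count_states())                         # prints 2057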
Path Planning
• What is the state space?
Formulation #1: discretize the environment into a grid
• Cost of one horizontal/vertical step = 1
• Cost of one diagonal step = √2
Optimal Solution
• This path is the shortest in the discretized state space, but not in the original continuous space.
Formulation #2: visibility graph
• Nodes: the start, the goal, and the obstacle vertices; edges join pairs of nodes that can "see" each other (the connecting segment does not cross an obstacle).
• Cost of one step: length of the segment.
Solution Path
• The shortest path in this state space is also the shortest in the original continuous space.
Search Strategies
• Uninformed (blind, exhaustive) strategies use only the information available in the problem definition:
  • breadth-first search
  • depth-first search
  • uniform-cost search
• Heuristic strategies use "rules of thumb" based on knowledge of the domain to pick between alternatives at each step.
• Graph Searching Applet: http://www.cs.ubc.ca/labs/lci/CIspace/Version4/search/index.html
Implementation of Search Algorithms

  function General-Search(problem, Queuing-Fn) returns a solution, or failure
    nodes ← Make-Queue(Make-Node(Initial-State[problem]))
    loop do
      if nodes is empty then return failure
      node ← Remove-Front(nodes)
      if Goal-Test[problem] applied to State[node] succeeds then return node
      nodes ← Queuing-Fn(nodes, Expand(node, Operators[problem]))
    end

• A state is a representation of a physical configuration.
• A node is a data structure constituting part of a search tree
  • it includes parent, children, depth, and path cost
  • states don't have parents, children, depth, or path cost
• The Expand function creates new nodes, filling in the various fields and using the Operators (or Successor-Fn) of the problem to create the corresponding states.
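A minimal Python version of this scheme, parameterized by the queuing function; it assumes the SearchProblem interface sketched earlier (an assumption of these notes, not a fixed API).

  # General search, parameterized by how newly expanded nodes are queued.
  def expand(problem, node):
      state, path, cost = node
      return [(s, path + [a], cost + c)
              for a, s, c in problem.successors(state)]

  def general_search(problem, queuing_fn):
      nodes = [(problem.initial_state, [], 0)]   # (state, actions so far, path cost)
      while nodes:
          node = nodes.pop(0)                    # Remove-Front
          if problem.goal_test(node[0]):
              return node[1]                     # the action sequence found
          nodes = queuing_fn(nodes, expand(problem, node))
      return None                                # failure

  # Breadth-first: enqueue new nodes at the back of the queue.
  bfs = lambda queue, new: queue + new
  # Depth-first: push new nodes onto the front.
  dfs = lambda queue, new: new + queue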
Search Strategies
• A strategy is defined by picking the order of node expansion
  • i.e., how expanded nodes are inserted into the queue
• Strategies are evaluated along the following dimensions:
  • completeness: does it always find a solution if one exists?
  • time complexity: number of nodes generated/expanded
  • space complexity: maximum number of nodes in memory
  • optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of:
  • b: maximum branching factor of the search tree
  • d: depth of the least-cost solution
  • m: maximum depth of the state space (may be ∞)
Recall: Searching the State Space
• (Figure: search tree.) Note that some states are visited multiple times.
Search Nodes ≠ States
(Figure: several distinct search-tree nodes containing the same 8-puzzle state.)
• If states are allowed to be revisited, the search tree may be infinite even when the state space is finite.
Data Structure of a Node
• A node stores: STATE, PARENT-NODE, CHILDREN, and bookkeeping information such as the Action that produced it, its Depth, its Path-Cost, and whether it has been Expanded.
• Depth of a node N = length of the path from the root to N (depth of the root = 0).
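As code, a node might look like the following dataclass; the field names mirror the slide, and the class itself is illustrative rather than a fixed implementation.

  from dataclasses import dataclass, field
  from typing import Any, Optional, List

  @dataclass
  class Node:
      state: Any                          # the problem state this node contains
      parent: Optional['Node'] = None     # PARENT-NODE
      action: Optional[str] = None        # action that generated this node
      depth: int = 0                      # length of the path from the root
      path_cost: float = 0.0              # sum of edge costs from the root
      children: List['Node'] = field(default_factory=list)

  def child_node(parent, action, state, step_cost):
      node = Node(state, parent, action, parent.depth + 1,
                  parent.path_cost + step_cost)
      parent.children.append(node)        # bookkeeping: link the node into the tree
      return node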