CS 4700: Foundations of Artificial Intelligence Prof. Carla P. Gomes gomes@cs.cornell.edu Module: Search I (Reading R&N: Chapter 3)
Outline • Problem-solving agents • Problem types • Problem formulation • Example problems • Basic search algorithms
Problem-solving agents • Problem-solving agents are goal-based agents. • 1. Goal formulation: a set of (desirable) world states. • 2. Problem formulation: what actions and states to consider, given a goal. • 3. Search for a solution: given the problem, search for a solution, a sequence of actions that achieves the goal. • 4. Execution of the solution.
Example: Romania • On holiday in Romania; currently in Arad. • Flight leaves tomorrow from Bucharest. • Formulate goal: be in Bucharest. • Formulate problem: actions: drive between cities; states: various cities. • Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest. • Execution: drive from Arad to Bucharest according to the solution. • Initial state: Arad; goal state: Bucharest. • Environment: fully observable (map), deterministic, and the agent knows the effects of each action. Is this really the case?
Problem types (in increasing complexity) • Deterministic, fully observable → single-state problem. The agent knows exactly which state it will be in; a solution is a sequence of actions. • Non-observable → sensorless problem (conformant problem). The agent may have no idea where it is (no sensors); it reasons in terms of belief states; a solution is a sequence of actions (effects of actions certain). • Nondeterministic and/or partially observable → contingency problem. Actions are uncertain; percepts provide new information about the current state (adversarial problem if the uncertainty comes from other agents). • Unknown state space and uncertain action effects → exploration problem.
Example: vacuum world • Single-state, start in #5. Solution? [Right, Suck]
Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Goal: 7 or 8. Solution? [Right, Suck, Left, Suck]
Example: vacuum world • Contingency • Nondeterministic: Suck may dirty a clean carpet. • Partially observable: percepts give location and dirt at the current location. • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
Single-state problem formulation • A problem is defined by four items: • 1. Initial state, e.g., "at Arad". • 2. Actions available to the agent. Common description: successor function S(x) = set of (action, successor-state) pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, <Arad → Sibiu, Sibiu>, …}. • Note: • State space: a graph in which the nodes are states and the arcs between nodes are actions. • The initial state + successor function implicitly define the state space. • A path in a state space is a sequence of states connected by a sequence of actions.
Single-state problem formulation (cont.) • 3. Goal test, which determines whether a given state is a goal state. • Can be explicit, e.g., x = "at Bucharest", or implicit, e.g., Checkmate(x). • 4. Path cost, which assigns a numeric cost to each path, reflecting the agent's performance measure, e.g., sum of distances, number of actions executed, etc. • Step cost of taking action a to go from state x to state y: c(x, a, y); assumed to be ≥ 0 and additive. • A solution is a path (sequence of actions) leading from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.
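To make the four items concrete, here is a minimal Python sketch of a problem definition. The class and method names (Problem, successors, goal_test, step_cost) are illustrative choices, not notation from the slides.

```python
class Problem:
    """A search problem: initial state, successor function, goal test, path cost."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Return the set of (action, successor-state) pairs, i.e., S(x)."""
        raise NotImplementedError

    def goal_test(self, state):
        """Explicit or implicit test of whether `state` is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """c(x, a, y) >= 0; path cost is the (additive) sum of step costs."""
        return 1
```

Concrete problems (Romania, vacuum world, 8-puzzle) subclass this interface and fill in the successor function and goal test.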
Vacuum-cleaner world • Percepts: location and contents, e.g., [A, Dirty] • Actions: Left, Right, Suck • states? • actions? • goal test? • path cost?
Vacuum world state space graph • states? The agent is in one of n locations (n = 2 here), and each location may or may not contain dirt → n × 2^n states (8 here). • actions? Left, Right, Suck. • goal test? No dirt at any location. • path cost? 1 per action.
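As a sanity check on the n × 2^n count, here is a sketch of the two-location vacuum world's successor function. The state encoding (agent location plus a frozenset of dirty locations) is my own assumption, not the slides'.

```python
# State: (agent_location, frozenset_of_dirty_locations), locations 'A' and 'B'.
def vacuum_successors(state):
    loc, dirt = state
    return [
        ('Left',  ('A', dirt)),          # Left from A is a no-op; from B it moves to A
        ('Right', ('B', dirt)),          # Right from B is a no-op; from A it moves to B
        ('Suck',  (loc, dirt - {loc})),  # remove dirt at the current location
    ]

def vacuum_goal_test(state):
    _, dirt = state
    return not dirt                      # no dirt at any location

# 2 locations x 2^2 dirt configurations = 8 states, matching the slides' #1..#8.
start = ('A', frozenset({'A', 'B'}))
```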
Example: The 8-puzzle • states? • actions? • goal test? • path cost?
Example: The 8-puzzle • states? locations of tiles. • actions? move blank left, right, up, down. • goal test? = goal state (given). • path cost? 1 per move. • [Note: finding an optimal solution for the n-puzzle family is NP-hard.]
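A sketch of the 8-puzzle successor function under one common encoding (a tuple of 9 entries in row-major order, 0 for the blank); the encoding and the particular goal configuration are assumptions for illustration.

```python
# State: tuple of 9 tiles in row-major order, 0 = the blank.
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def puzzle_successors(state):
    """Yield (action, next_state) pairs: slide the blank up/down/left/right."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if action == 'Up' and row == 0:       continue  # blank already on top row
        if action == 'Down' and row == 2:     continue  # blank on bottom row
        if action == 'Left' and col == 0:     continue  # blank in left column
        if action == 'Right' and col == 2:    continue  # blank in right column
        target = blank + delta
        tiles = list(state)
        tiles[blank], tiles[target] = tiles[target], tiles[blank]
        yield action, tuple(tiles)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # one conventional goal configuration
```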
Example: robotic assembly • states? real-valued coordinates of robot joint angles and of the parts of the object to be assembled. • actions? continuous motions of robot joints. • goal test? complete assembly. • path cost? time to execute.
Real-world problems • Route-finding problems. • Touring problems (end up in the same city), e.g., the Traveling Salesman Problem. • VLSI layout: positioning millions of components and connections on a chip to minimize area, circuit delays, etc. • Robot navigation. • Automatic assembly of complex objects. • Protein design: finding a sequence of amino acids that will fold into a 3-dimensional protein with the right properties.
Now that we have formulated the problem, how do we find a solution? • Search through the state space. • We will consider search techniques that use an explicit search tree generated by the initial state + successor function: • initialize the fringe with the initial node; • loop: choose a node for expansion according to the strategy; if it is a goal node, done; otherwise expand the node with the successor function. A sketch of this loop in code follows.
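The loop above, written out as a minimal Python sketch against the Problem interface sketched earlier. With a plain list popped from the front and successors appended at the end, the fringe behaves as a FIFO queue; other strategies only change the queue discipline.

```python
def tree_search(problem, fringe=None):
    """Generic tree search; the fringe's ordering IS the search strategy."""
    fringe = fringe if fringe is not None else []
    fringe.append(problem.initial_state)          # initialize with the initial node
    while fringe:
        state = fringe.pop(0)                     # choose a node according to strategy
        if problem.goal_test(state):              # goal node? done
            return state
        for action, successor in problem.successors(state):
            fringe.append(successor)              # expand with the successor function
    return None                                   # the fringe emptied: no solution
```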
Search is a central topic in AI. • It originated with Newell and Simon's work on problem solving (Human Problem Solving, 1972). • Automated reasoning is a natural search task. • More recently: given that almost all AI formalisms (planning, learning, etc.) are NP-complete or worse, some form of search is generally unavoidable (i.e., no smarter algorithm is available).
Implementation: states vs. nodes • A state is a representation of a physical configuration. • A node is a data structure constituting part of a search tree; it includes a state, the tree parent node, the action (applied to the parent), the path cost g(x) (from the initial state to the node), and the depth. • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states. • The fringe is the collection of nodes that have been generated but not expanded (it is a queue; different types of queues insert elements in different orders). Each node of the fringe is a leaf node.
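A minimal sketch of the node data structure with exactly the fields the slide lists; the helper solution_path is an addition for illustration, showing how parent links recover the action sequence.

```python
class Node:
    """Search-tree node: wraps a state plus bookkeeping (it is not the state itself)."""

    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # the physical configuration this node stands for
        self.parent = parent        # tree parent node
        self.action = action        # action applied to the parent to reach this node
        self.path_cost = path_cost  # g(x): cost from the initial state to this node
        self.depth = 0 if parent is None else parent.depth + 1

    def solution_path(self):
        """Follow parent links back to the root to recover the action sequence."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```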
Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be ∞)
Uninformed search strategies • Uninformed (blind) search strategies use only the information available in the problem definition: • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search • Bidirectional search
Breadth-first search • Expand the shallowest unexpanded node. • Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
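A minimal FIFO-queue sketch of breadth-first search against the Problem interface above. One liberty taken for brevity: it remembers already-generated states (graph search), which the slides' tree-search version does not.

```python
from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node first: the fringe is a FIFO queue."""
    fringe = deque([problem.initial_state])     # FIFO queue of states
    seen = {problem.initial_state}              # avoid regenerating states (a shortcut)
    while fringe:
        state = fringe.popleft()                # shallowest node comes off the front
        if problem.goal_test(state):
            return state
        for action, successor in problem.successors(state):
            if successor not in seen:
                seen.add(successor)
                fringe.append(successor)        # new successors go at the end
    return None
```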
Properties of breadth-first search • Complete? Yes (if b is finite). • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)). • Space? O(b^(d+1)) (keeps every node in memory). • Optimal? Yes (if all step costs are identical). • Space is the bigger problem (more than time).
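A quick worked instance of that formula, with b = 10 and d = 5 assumed purely for illustration: 1 + 10 + 10^2 + … + 10^5 = 111,111 nodes up to depth d, while the b(b^d − 1) term for generating the children of the depth-d layer adds roughly 10^6 more. The O(b^(d+1)) bound is dominated by that last generated layer, and all of it sits in memory at once.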
Uniform-cost search • Expand the least-cost unexpanded node (instead of the shallowest); equivalent to breadth-first search if all step costs are equal. • Implementation: fringe = queue ordered by path cost. • Complete? Yes, if b is finite and every step cost ≥ ε (> 0). • Time? # of nodes with g ≤ cost of the optimal solution C*: O(b^(1 + ⌊C*/ε⌋)). • Space? # of nodes with g ≤ cost of the optimal solution: O(b^(1 + ⌊C*/ε⌋)). • Optimal? Yes: nodes are expanded in increasing order of g(n).
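A priority-queue sketch of uniform-cost search, again against the assumed Problem interface. The integer tie-breaker is a detail of this sketch (it keeps heapq from ever comparing states directly), and best_g tracks the cheapest known cost per state.

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    """Expand the least-cost unexpanded node: fringe ordered by path cost g."""
    tie = count()                                  # tie-breaker for equal-cost entries
    fringe = [(0.0, next(tie), problem.initial_state)]
    best_g = {problem.initial_state: 0.0}
    while fringe:
        g, _, state = heapq.heappop(fringe)        # lowest g(n) comes out first
        if problem.goal_test(state):
            return g, state
        for action, successor in problem.successors(state):
            new_g = g + problem.step_cost(state, action, successor)
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g          # found a cheaper path to successor
                heapq.heappush(fringe, (new_g, next(tie), successor))
    return None
```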
Depth-first search • Expand the deepest unexpanded node. • Implementation: fringe = LIFO queue (a stack), i.e., put successors at the front.
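A stack-based sketch of depth-first search. The visited set implements the "modify to avoid repeated states" variant discussed on the next slide; plain tree-search DFS would omit it (and could loop forever).

```python
def depth_first_search(problem):
    """Expand the deepest unexpanded node first: the fringe is a LIFO stack."""
    fringe = [problem.initial_state]            # Python list used as a stack
    visited = {problem.initial_state}           # avoid loops on repeated states
    while fringe:
        state = fringe.pop()                    # deepest node comes off the top
        if problem.goal_test(state):
            return state
        for action, successor in problem.successors(state):
            if successor not in visited:
                visited.add(successor)
                fringe.append(successor)        # successors go on top (the front)
    return None
```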
Properties of depth-first search • Complete? No: fails in infinite-depth spaces and in spaces with loops. Modify to avoid repeated states along a path → complete in finite spaces. • Time? O(b^m): terrible if m is much larger than d; but if solutions are dense, it may be much faster than breadth-first. • Space? O(bm), i.e., linear space! • Optimal? No. • (m: maximum depth of the state space; d: depth of the shallowest (least-cost) solution.) • Note: in backtracking search only one successor is generated at a time → only O(m) memory is needed; also, each successor is a modification of the current state, so we must be able to undo each modification.
Iterative deepening search • Why would one do that? It combines the good memory requirements of depth-first search with the completeness of breadth-first search when the branching factor is finite, and with its optimality when the path cost is a non-decreasing function of the depth of the node.
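A minimal sketch of iterative deepening: depth-limited DFS re-run with limits 0, 1, 2, …. The 'cutoff' sentinel (an assumption of this sketch) distinguishes "truncated by the limit" from "genuinely no solution below this limit".

```python
from itertools import count

def depth_limited_search(problem, state, limit):
    """DFS that gives up below `limit`; returns 'cutoff' if it was truncated."""
    if problem.goal_test(state):
        return state
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for action, successor in problem.successors(state):
        result = depth_limited_search(problem, successor, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(problem):
    """DFS memory (O(bm) per pass) with BFS-style completeness."""
    for limit in count():                       # limits 0, 1, 2, ...
        result = depth_limited_search(problem, problem.initial_state, limit)
        if result != 'cutoff':
            return result
```

Shallow levels are re-expanded on every pass, but since the bottom level dominates the node count, the total work stays within a constant factor of a single breadth-first pass.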