Introduction to AI & AI Principles (Semester 1) WEEK 10 (07/08) [John Barnden’s slides only] School of Computer Science, University of Birmingham, UK
SEARCH • You should get the details from readings: • Callan book: Chapter 3 • John Bullinaria’s slides (week 8)
REVIEW: Towards “Search” • In planning, one can mentally “search” through possible states of the world you could get to, or that would be useful to get to, by imagining doing actions. • (FORWARDS SEARCH) If I do this, then that would happen; and then if I do this, that would come about; or if instead I did this, then that would happen, … • OR • (BACKWARDS SEARCH) To get such-and-such a (sub-)goal state, I could perhaps do this action from such-and-such another state; and to get to that state I could perhaps do so-and-so, or alternatively I could have done such-and-such, …
REVIEW: Towards Search, contd. • What order to investigate the actions possible in or towards any given state? Investigate all or just some? All in a bunch, or at different points in the search? • Follow a line of investigation as far as you can, and then hop back to a choice point if not getting anywhere? • Any limit on the number of states investigated, or on how far you follow any given line? • How can you measure how promising a state is? • How to take care of unexpected world conditions or changes, or unexpected effects of your own actions?
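The questions above (follow a line as far as you can, hop back to a choice point, limit how far you follow any given line) can be made concrete with a small sketch of depth-first search with backtracking. The state graph, state names, and depth limit here are illustrative assumptions, not taken from the slides:

```python
# Depth-first search with backtracking: follow one line of investigation
# as far as possible, then hop back to the most recent choice point.

def depth_first_search(successors, start, is_goal, depth_limit=20):
    """Return a path from start to a goal state, or None if none found."""
    def explore(state, path):
        if is_goal(state):
            return path
        if len(path) > depth_limit:      # limit on how far to follow a line
            return None
        for nxt in successors(state):
            if nxt not in path:          # avoid revisiting states on this line
                result = explore(nxt, path + [nxt])
                if result is not None:   # this line of investigation succeeded
                    return result
        return None                      # dead end: backtrack to choice point
    return explore(start, [start])

# Hypothetical state graph, purely for illustration.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
path = depth_first_search(lambda s: GRAPH[s], "A", lambda s: s == "F")
# → ["A", "C", "E", "F"]  (the line through B dead-ends at D and is abandoned)
```

The `depth_limit` parameter corresponds to the slide's question about limiting how far any given line is followed.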
New on Search • We have encountered search in at least the following forms: • The overall operation of production systems • Planning, navigation • Move/action choice, path choice, etc. in games • Evolutionary computing • Intersection search in semantic networks • Automatically finding good weights for a neural network (& you will see analogous forms of search in other machine learning schemes) • Finding a model of the environment in robot/vision systems.
Search, contd. • Search is a matter of investigating a set of possibilities, of any sort, by going from one to another by means of certain specified operations. • Potentially important across all AI areas. • The “possibilities” are usually called “states”. They typically describe states of some world (real or artificial) outside the search, or some objects, or suggested solutions (partial or full) to a problem, or designs for something, and so on.
Search, contd. • A search problem consists of the following pre-defined things: • A set of states (possibly infinitely many). • A set of possible operations (usually finitely many) that can transform states into other states, and a specification of their effects on states. • When operations represent things the agent could do in some environment represented by the states: a way of determining the cost of actually applying any particular sequence of operations starting from a particular state. • Cost usually found by adding up the costs of individual operations along the sequence. The cost is often just the sequence length. • A particular initial state, and either a particular goal state (or listed set of alternative goal states), or a goal condition: a specification of what a goal state looks like. • In some cases, very many states, perhaps even all, count as goals: the question then is of getting to a best possible goal, according to some goal evaluation function. (This is the situation assumed by the “Hill Climbing” search strategy.)
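The ingredients listed above can be rendered as a minimal data structure. The names here (`SearchProblem`, `successors`, `step_cost`) are illustrative choices, not the slides' notation; the cost of a path is the sum of individual step costs, with unit cost as the default, as the slide describes:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Minimal sketch of a search problem's pre-defined parts:
# initial state, successor function (operations and their effects),
# goal condition, and a per-step cost function.

@dataclass
class SearchProblem:
    initial: object
    successors: Callable[[object], Iterable]          # effects of operations
    is_goal: Callable[[object], bool]                 # goal condition
    step_cost: Callable[[object, object], float] = lambda s, t: 1  # unit cost

    def path_cost(self, path):
        """Cost of a state sequence: sum of costs of its individual steps."""
        return sum(self.step_cost(a, b) for a, b in zip(path, path[1:]))

# Hypothetical instance: states are place names, operations are moves.
ROADS = {"A": ["B"], "B": ["C"], "C": []}
problem = SearchProblem(
    initial="A",
    successors=lambda s: ROADS[s],
    is_goal=lambda s: s == "C",
)
cost = problem.path_cost(["A", "B", "C"])   # two unit-cost steps → 2
```

With unit step costs, `path_cost` reduces to the sequence length, matching the slide's remark that cost is often just the length.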
Search, contd. • Possible aims in a search problem: • When only a goal condition is given: discover one or more specific goals (and perhaps a best possible goal). E.g.: a particular design for a building. • When either a specific goal state or a goal condition is given: discover one or more solution paths: each being a sequence of operations going from the initial state to a goal state. • This is the more typical case. Applies to planning, for example. • In addition: usually want reasonably good solution paths in terms of length or cost; and may even want the optimal solution path(s).
Summary of Bullinaria’s Slides • Mainly about different overall search strategies, concerned with what order to look at states in, giving different “shapes” to the search process. The slides concentrate on forwards search. • Uninformed search strategies: the strategy does not use any knowledge other than the successor function (the algorithm for working out what effect an operation has on a state), the initial state, the goals and the cost function. • Informed search strategies: also use some heuristic algorithm that estimates how promising an operation is at a given state, or how promising a given state is from the point of view of reaching a goal, or how good as a goal a state is. The algorithm typically uses additional knowledge about the domain. • The algorithm is often in the form of a heuristic function: this estimates the remaining least possible cost from the state to a goal.
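A heuristic function of the kind just described — an estimate of the least possible remaining cost from a state to a goal — might look like this for a hypothetical grid-navigation domain (the coordinates and the choice of Manhattan distance are illustrative assumptions):

```python
# A heuristic for grid navigation: Manhattan distance to the goal cell.
# If moves shift one cell at a time, this never overestimates the true
# remaining cost, which is the property A* relies on for optimality.

def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

h = manhattan((0, 0), (3, 4))   # estimated remaining cost: 7
```

The extra domain knowledge here is simply that states carry coordinates, which an uninformed strategy would not exploit.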
Bullinaria’s Slides, contd. • Uninformed strategies: centred on • Depth-first search • Breadth-first search (guarantees shortest solution paths) • Iterative deepening: compromise between depth-first and breadth-first. • Informed strategies: • Best-first search (uses heuristic function), & special case: • A* search (can guarantee optimal [=lowest cost] solution paths) • Hill Climbing.
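Of the informed strategies listed, A* can be sketched compactly: it is best-first search ordered by g(n) + h(n), the cost so far plus the heuristic estimate of remaining cost. The graph, step costs, and zero heuristic below are illustrative assumptions (h = 0 is trivially admissible, so the returned path is lowest-cost):

```python
import heapq

# Compact A* sketch: expand states in order of f = g + h,
# where g is cost so far and h estimates remaining cost to a goal.

def a_star(successors, start, is_goal, h):
    """successors(state) yields (next_state, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {}                                  # cheapest known g per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if g >= best_g.get(state, float("inf")):
            continue                             # already reached more cheaply
        best_g[state] = g
        for nxt, cost in successors(state):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical weighted graph for illustration.
GRAPH = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": [("G", 1)], "G": []}
path, cost = a_star(lambda s: GRAPH[s], "A", lambda s: s == "G", lambda s: 0)
# → (["A", "B", "C", "G"], 3): the direct A→C edge of cost 4 is avoided
```

With an admissible heuristic (one that never overestimates, like the zero heuristic here), the first goal popped from the frontier is guaranteed to lie on an optimal solution path, which is the guarantee the slide attributes to A*.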