CS 416 Artificial Intelligence Lecture 7 Informed Searches
Chess Match • Kasparov 1, Deep Junior 1, Draws 3
Chess Article • You can buy Deep Blue for $50 (maybe not tweaked to beat Kasparov) • Style and strategy vs. knowledge (depth vs. breadth) • 1,000 top-quality games are played each week and broadcast on the internet • "People don't experiment anymore" • Kasparov pays a team of grandmasters to scour the web daily looking for new opening strategies • "Computers are opening the game up" • people are more likely to follow a range of published strategies • http://www.nytimes.com/2003/02/06/nyregion/06CHES.html
Chess Article • Have we been down this road before? • The Road to Wigan Pier, 1937 (George Orwell) • "machines are moving in and polluting the spiritual landscape, not on purpose, but because they can't help it" • David Gelernter, Yale • "This is the machine age, and no one uses a pump when you can turn on the tap. But don't think that won't cost us."
New Horizons? • Hans Berliner, CMU • "You don't have to be really good anymore to get good results. Chess is winding down." • Bobby Fischer • randomize the pieces behind the row of pawns • Kasparov • computer/human teams vs. computer/human teams • Play the Asian game, Go • no human player would dare depend on a computer for advice
Another article • SongPro • The inventor, Ronald Jones, with venture capitalist Mark Bush • http://www.nytimes.com/2003/02/06/technology/circuits/06song.html
Tough Going • Grew up the son of a maid, one of six black students in a high school of 1,400 • love of math and engineering • dropped out of college • learned on the job • Encountered resistance when seeking funding opportunities • "Who does this technology really belong to?" • "Is this yours?"
Tough Going • A lucky contract for the Rainbow Coalition introduced him to Jesse Jackson, who could help out • 3 years ago, Ron had $800 left and an idea • one lucky break got him a contact and $11,000 for debts • 2 years ago, Ron was living with friends and family on $5 a day (he knew how to get free food at bars with happy hours) • Last year, the device was released • He still lives cheaply (Marc Hannah, a founder of Silicon Graphics and one of the richest black scientists in Silicon Valley)
Subproblems • Is the 4-piece subproblem an admissible heuristic? • it can never overestimate the true cost • Is it consistent? • h(n) <= c(n, a, n') + h(n')
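A minimal sketch of checking these two properties mechanically on a small explicit graph; the graph, heuristic values, and true costs-to-goal below are hypothetical, not from the lecture.

```python
# Check admissibility and consistency of a heuristic on a tiny explicit graph.
# `edges` maps a state to (successor, step_cost) pairs; `true_cost` holds the
# optimal cost-to-goal for each state; `h` is the heuristic being tested.

def is_admissible(h, true_cost):
    """h never overestimates the true cost from any state."""
    return all(h[n] <= true_cost[n] for n in true_cost)

def is_consistent(h, edges):
    """h(n) <= c(n, a, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= cost + h[m]
               for n, succs in edges.items()
               for m, cost in succs)

# Hypothetical three-state example: A -> B -> G (goal)
edges = {'A': [('B', 1)], 'B': [('G', 2)], 'G': []}
true_cost = {'A': 3, 'B': 2, 'G': 0}
h = {'A': 2, 'B': 1, 'G': 0}
print(is_admissible(h, true_cost), is_consistent(h, edges))   # True True
```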
Genetic Algorithms (GAs) • Another randomized search algorithm • Start with k initial guesses • they form a population • each individual from the population is a fixed-length string (gene) • each individual’s fitness is evaluated • successors are generated from individuals according to fitness function results
What’s good about evolution? • Think about mother nature…
Genetic Algorithms • Reproduction • Reuse • Crossover • Mutation
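A minimal sketch of this loop, assuming individuals are fixed-length bit strings and higher fitness is better; the toy fitness function, fitness-proportional selection, single-point crossover, and mutation rate are illustrative choices, not the lecture's.

```python
import random

def fitness(individual):
    return sum(individual)                      # toy objective: count the 1s

def select(population):
    # fitness-proportional ("roulette wheel") selection of two parents;
    # the small epsilon avoids an all-zero weight vector
    weights = [fitness(ind) + 1e-6 for ind in population]
    return random.choices(population, weights=weights, k=2)

def crossover(a, b):
    point = random.randrange(1, len(a))         # single-point crossover
    return a[:point] + b[point:]

def mutate(individual, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def genetic_algorithm(k=20, length=16, generations=100):
    # start with k random guesses (the population)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(k)]
    for _ in range(generations):
        # successors are generated from parents chosen according to fitness
        population = [mutate(crossover(*select(population))) for _ in range(k)]
    return max(population, key=fitness)

print(genetic_algorithm())                      # best individual found
```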
Crossover • Early stages are diverse • crossover explores the state space broadly • Later stages are more similar • crossover fine-tunes within a small region • Like simulated annealing
Mutation • Could screw up a good solution • Like the Metropolis step in simulated annealing • Could explore an untapped part of the search space
GA Analysis • Combines • uphill tendency • random exploration • exchange of information among multiple search threads • like stochastic beam search • Crossover is not needed – theoretically • if starting states are sufficiently random
GA Analysis • It's all in the representation • GA works best if the representation stores related pieces of the puzzle in neighboring cells of the string • Not all problems are amenable to crossover • TSP, for example
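A tiny illustration of the TSP point, using a hypothetical five-city tour: naive single-point crossover of two valid tours can produce a child that is not a permutation at all.

```python
# Two valid tours (permutations of the cities)
tour_a = ['A', 'B', 'C', 'D', 'E']
tour_b = ['C', 'E', 'A', 'B', 'D']

# Naive single-point crossover at position 2
child = tour_a[:2] + tour_b[2:]                 # ['A', 'B', 'A', 'B', 'D']

# The child repeats A and B and never visits C or E, so it is not a tour.
print(child, len(set(child)) == len(child))     # ... False
```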
Continuous Spaces • What does continuous mean to you? • A function is continuous if its graph can be drawn without lifting the pencil from the paper (Descartes)
In terms of searching? • Continuous search spaces have neighboring states arbitrarily close to every state • That means the objective function has derivatives • Can the derivative help out here?
Derivative directs future steps • One dimensional function • Left or right? • Two dimensional function • Direction in 3-space • N-dimensional function • Gradient
An Example • Place three airports • minimize sum of squared distances from each city to closest airport, f(x) • find (x1, y1, x2, y2, x3, y3)
Airport Example • Simulated annealing • trial-and-error experimentation with the six values • Genetic algorithms • create a gene with six values and crossover/mutate • Discretize some slight change in position, d • each of the six parameters has three values (+, -, same) • branching factor of 18 • then run A* search
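A sketch of the discretized successor function this implies, assuming a step size d and letting each of the six parameters move by +d, -d, or stay the same, one parameter at a time, which gives the branching factor of 18 mentioned above.

```python
# Successors of a state (x1, y1, x2, y2, x3, y3) under the discretization:
# each parameter may change by +d, -d, or stay put, one parameter at a time.
def successors(state, d=0.1):
    succs = []
    for i in range(len(state)):
        for delta in (+d, -d, 0.0):
            nxt = list(state)
            nxt[i] += delta
            succs.append(tuple(nxt))
    return succs

print(len(successors((0.0,) * 6)))   # 18
```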
Airport Example • Derivatives! • For a given f(x1, y1, x2, y2, x3, y3), compute the gradient • change in f in response to a small change in x1, then y1, … • Update the vector x • Beware of jumping too far if the step size is too large • Beware of local minima • each parameter may settle into its own local minimum
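A hedged sketch of that update for the airport objective, assuming a handful of made-up city coordinates and a small step size alpha; the gradient treats each city's nearest airport as fixed while differentiating the squared distance.

```python
import numpy as np

# Hypothetical city positions; x holds the three airport positions.
cities = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [4.0, 4.0], [5.0, 4.0]])

def f(airports):
    # sum over cities of the squared distance to the closest airport
    d2 = ((cities[:, None, :] - airports[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def gradient(airports):
    d2 = ((cities[:, None, :] - airports[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                  # which airport serves each city
    grad = np.zeros_like(airports)
    for i, city in enumerate(cities):
        a = nearest[i]
        grad[a] += 2.0 * (airports[a] - city)    # d/da of |a - city|^2
    return grad

airports = np.array([[0.5, 0.5], [3.0, 3.0], [2.0, 2.0]])
alpha = 0.05                                     # too large a step overshoots
for _ in range(200):
    airports = airports - alpha * gradient(airports)
print(f(airports), airports)
```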
Computing the Gradient • Difficult to solve in closed form • compute f'(x) = y that works for all x • We can usually compute locally • compute f'(x) = y that works only for x near a particular point z • We can also compute empirically • pick some small offset d to add to each element of x and compute the difference between f(x) and f(x + d)
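A minimal sketch of the empirical approach: perturb each coordinate by a small offset d and measure how f changes (forward differences); the quadratic test function is just an example.

```python
def numerical_gradient(f, x, d=1e-5):
    # Approximate each partial derivative of f at x with a forward difference.
    grad = [0.0] * len(x)
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += d
        grad[i] = (f(bumped) - f(x)) / d
    return grad

# Example: f(x) = x0^2 + 3*x1^2 at (1, 2) has gradient (2, 12)
print(numerical_gradient(lambda v: v[0]**2 + 3*v[1]**2, [1.0, 2.0]))
```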
Derivative = 0 at Max/Min • Newton-Raphson • Find the zero of an equation (where it crosses x-axis) • set p = p0 – f(p0) / f’(p0) • if p is close to p0 then return p • else set p0 = p and repeat • Draw picture
Newton-Raphson • Why set p = p0 – f(p0) / f'(p0)? • y-intercept equation of a line: y = mx + b • let the y-intercept b = f(p0), measuring x relative to p0 • let the slope m = f'(p0) • we want the x-value where the y-value = 0 • let y = 0 • 0 = f'(p0) * x + f(p0) • solve for x: x = -f(p0) / f'(p0), so the tangent line crosses zero at p = p0 - f(p0) / f'(p0)
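The update from the slide, transcribed directly as a sketch; the test function f(x) = x^2 - 2 is only an example.

```python
def newton_raphson(f, fprime, p0, tol=1e-8, max_iter=100):
    # Repeat p = p0 - f(p0) / f'(p0) until p stops moving.
    for _ in range(max_iter):
        p = p0 - f(p0) / fprime(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    return p0

# Example: the zero of f(x) = x^2 - 2 is sqrt(2)
print(newton_raphson(lambda x: x*x - 2, lambda x: 2*x, p0=1.0))
```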
But we're not finding the Zero of f(x) • We're finding the zero of the gradient of f(x) • So, replace f(x) with the gradient of f(x) • replace f'(x) with the second derivative of f(x) • the Hessian
Hessian • Second derivative of a multivariable function • Hf(x) is the matrix of second partial derivatives: entry (i, j) is ∂²f / ∂xi ∂xj
Newton-Raphson • Final equation: x ← x - Hf(x)^(-1) ∇f(x)
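A sketch of that update, assuming the problem supplies the gradient and Hessian as functions; the quadratic test function below is illustrative only.

```python
import numpy as np

def newton_optimize(grad, hess, x0, tol=1e-8, max_iter=50):
    # Repeat x <- x - Hf(x)^(-1) grad f(x) until the step is negligible.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), grad(x))   # solve Hf(x) * step = grad
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: minimize f(x, y) = x^2 + 3y^2 (gradient [2x, 6y], Hessian diag(2, 6))
grad = lambda v: np.array([2 * v[0], 6 * v[1]])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 6.0]])
print(newton_optimize(grad, hess, [5.0, -3.0]))    # converges to [0, 0]
```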
Online Searches • States and actions are unknown a priori (in advance) • A real robot finding its way through a maze • State is difficult to change • A real robot cannot jump across the state space at will to explore the best potential paths • State is difficult/impossible to reverse • Can you ride your bike backwards? Do you have space to turn around? • When spelunking, do you send the smallest person through the tunnel first?
Online Searches • Difficult to skip around when using A* • Potential of an irreversible dead end with depth-first search • Local search is a natural fit for online searches • stop when you cannot improve any further • chances are high of stopping at a local solution • add memory to permit continued exploration with the ability to return to the best solution (the Brady Bunch again)
Learning in Online Search • Online agents must resolve ignorance • explore the world • build a map: a mapping from (state, action) to the resulting state • also called a model
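A minimal sketch (not the textbook algorithm) of how such a model can be built: after every step the agent records which state the previous (state, action) pair actually led to, and prefers actions it has not yet tried from the current state. The `actions` callable is an assumed interface mapping a state to its non-empty list of legal actions.

```python
class OnlineExplorer:
    def __init__(self, actions):
        self.actions = actions        # callable: state -> list of legal actions
        self.result = {}              # learned model: (state, action) -> state
        self.prev_state = None
        self.prev_action = None

    def __call__(self, state):
        # Record what the last action actually did.
        if self.prev_state is not None:
            self.result[(self.prev_state, self.prev_action)] = state
        # Prefer an action we have not tried from this state yet.
        untried = [a for a in self.actions(state)
                   if (state, a) not in self.result]
        action = untried[0] if untried else self.actions(state)[0]
        self.prev_state, self.prev_action = state, action
        return action
```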