
Satisfiability Solvers

Discover the magic ratio 4.26 in random 3-SAT problems and its relation to phase transitions in problem solubility. Learn about local search algorithms for SAT problems and the importance of picking the right variables to optimize clause satisfaction.



Presentation Transcript


  1. Satisfiability Solvers Part 2: Stochastic Solvers 600.325/425 Declarative Methods - J. Eisner

  2. Structured vs. Random Problems
  • So far, we’ve been dealing with SAT problems that encode other problems
  • Most not as hard as # of variables & clauses suggests
  • Small crossword grid + medium-sized dictionary may turn into a big formula … but still a small puzzle at some level
  • Unit propagation does a lot of work for you
  • Clause learning picks up on the structure of the encoding
  • But some random SAT problems really are hard!
  • zChaff’s tricks don’t work so well here

  3. Random 3-SAT
  • sample uniformly from space of all possible 3-clauses
  • n variables, l clauses
  • Which are the hard instances?
  • around l/n = 4.26
  slide thanks to Henry Kautz (modified)
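
  As a concrete illustration (not from the slides), here is a minimal sketch of how such instances might be generated, assuming each clause draws 3 distinct variables uniformly and negates each with probability 1/2:

      import random

      def random_3sat(n, l, rng=random):
          """Sample l random 3-clauses over variables 1..n.
          A literal is +v (variable v) or -v (its negation)."""
          clauses = []
          for _ in range(l):
              vars3 = rng.sample(range(1, n + 1), 3)       # 3 distinct variables
              clauses.append([v if rng.random() < 0.5 else -v for v in vars3])
          return clauses

      # Instances near the hardness peak: l/n around 4.26
      hard_instance = random_3sat(n=200, l=int(4.26 * 200))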

  4. Complexity peak is very stable …
  • across problem sizes
  • across solver types
  • systematic (last lecture)
  • stochastic (this lecture)
  The magic ratio 4.26
  slide thanks to Henry Kautz (modified)

  5. Why 4.26?
  • Complexity peak coincides with solubility transition
  • l/n < 4.3: problems under-constrained and SAT
  • l/n > 4.3: problems over-constrained and UNSAT
  • l/n = 4.3: problems on “knife-edge” between SAT and UNSAT
  slide thanks to Henry Kautz (modified)

  6. That’s called a “phase transition”
  • Problems < 32˚F are like ice; > 32˚F are like water
  • Similar “phase transitions” for other NP-hard problems
  • job shop scheduling
  • traveling salesperson (instances from TSPlib)
  • exam timetables (instances from Edinburgh)
  • Boolean circuit synthesis
  • Latin squares (alias sports scheduling)
  • Hot research topic: predict hardness of a given instance, & use hardness to control search strategy (Horvitz, Kautz, Ruan 2001-3)
  slide thanks to Henry Kautz (modified)

  7. Methods in today’s lecture …
  • Can handle “big” random SAT problems
  • Can go up about 10x bigger than systematic solvers!
  • Rather smaller than structured problems (espec. if ratio ≈ 4.26)
  • Also handle big structured SAT problems
  • But lose here to best systematic solvers
  • Try hard to find a good solution
  • Very useful for approximating MAX-SAT
  • Not intended to find all solutions
  • Not intended to show that there are no solutions (UNSAT)

  8. GSAT vs. DP on Hard Random Instances
  [Plot comparing GSAT (“today”, though not today’s best algorithm) against DP (“yesterday”, though not yesterday’s best algorithm) on hard random instances]
  slide thanks to Russ Greiner and Dekang Lin

  9. Local search for SAT
  • Make a guess (smart or dumb) about values of the variables
  • Try flipping a variable to make things better
  • How’m I doin’? Repeat until satisfied
  • Algorithms differ on which variable to flip

  10. Local search for SAT
  Recap: make a guess (smart or dumb) about values of the variables, then try flipping a variable that makes things better; repeat until satisfied.
  • Flip a randomly chosen variable?
  • No, blundering around blindly takes exponential time
  • Ought to pick a variable that improves … what?
  • Increase the # of satisfied clauses as much as possible
  • Break ties randomly
  • Note: Flipping a var will repair some clauses and break others
  • This is the “GSAT” (Greedy SAT) algorithm (almost)

  11. Local search for SAT
  • Flip most-improving variable. Guaranteed to work?
  • Example formula: (A v C), (A v B), (~C v ~B), (~B v ~A)
  • What if first guess is A=1, B=1, C=1?
  • 2 clauses satisfied
  • Flip A to 0 → 3 clauses satisfied
  • Flip B to 0 → all 4 clauses satisfied (pick this!)
  • Flip C to 0 → 3 clauses satisfied
  example thanks to Monaldo Mastrolilli and Luca Maria Gambardella
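
  A quick sanity check (not from the slides) that recomputes these counts in code; variables A, B, C are numbered 1, 2, 3 and a clause is a list of signed variable numbers:

      clauses = [[1, 3], [1, 2], [-3, -2], [-2, -1]]    # (A v C), (A v B), (~C v ~B), (~B v ~A)

      def num_satisfied(assignment, clauses):
          return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses)

      assignment = {1: True, 2: True, 3: True}          # A=1, B=1, C=1
      print(num_satisfied(assignment, clauses))         # 2

      for var, name in [(1, "A"), (2, "B"), (3, "C")]:
          flipped = dict(assignment)
          flipped[var] = not flipped[var]
          print(name, num_satisfied(flipped, clauses))  # A: 3, B: 4, C: 3 (so GSAT flips B)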

  12. Local search for SAT
  • Flip most-improving variable. Guaranteed to work?
  • Example formula: (A v C), (A v B), (~C v ~B), (~B v ~A)
  • But what if first guess is A=0, B=1, C=1?
  • 3 clauses satisfied
  • Flip A to 1 → 3 clauses satisfied
  • Flip B to 0 → 3 clauses satisfied
  • Flip C to 0 → 3 clauses satisfied
  • Pick one anyway … (picking A wins on next step)
  example thanks to Monaldo Mastrolilli and Luca Maria Gambardella

  13. Local search for SAT
  • Flip most-improving variable. Guaranteed to work?
  • Yes for 2-SAT: probability → 1 within O(n²) steps
  • No in general: can & usually does get locally stuck
  • Therefore, GSAT just restarts periodically
  • New random initial guess
  Believe it or not, GSAT outperformed everything else when it was invented in 1992.
  example thanks to Monaldo Mastrolilli and Luca Maria Gambardella
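
  Putting the pieces together, here is a minimal GSAT sketch (an illustration under the slides' description, not the original implementation; max_flips and max_restarts are assumed tuning parameters):

      import random

      def num_satisfied(assignment, clauses):
          return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses)

      def gsat(clauses, n, max_flips=1000, max_restarts=50, rng=random):
          for _ in range(max_restarts):
              # New random initial guess
              assignment = {v: rng.random() < 0.5 for v in range(1, n + 1)}
              for _ in range(max_flips):
                  if num_satisfied(assignment, clauses) == len(clauses):
                      return assignment                   # satisfied!
                  # Score every possible flip; keep the most-improving variables
                  best_score, best_vars = -1, []
                  for v in range(1, n + 1):
                      assignment[v] = not assignment[v]
                      score = num_satisfied(assignment, clauses)
                      assignment[v] = not assignment[v]
                      if score > best_score:
                          best_score, best_vars = score, [v]
                      elif score == best_score:
                          best_vars.append(v)
                  v = rng.choice(best_vars)               # break ties randomly
                  assignment[v] = not assignment[v]
          return None   # gave up; this does NOT prove UNSAT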

  14. Problems with Hill Climbing
  slide thanks to Russ Greiner and Dekang Lin

  15. Problems with Hill Climbing
  • Local Optima (foothills): No neighbor is better, but not at global optimum.
  • (Maze: may have to move AWAY from goal to find (best) solution)
  • Plateaus: All neighbors look the same.
  • (15-puzzle: perhaps no action will change # of tiles out of place)
  • Ridge: going up only in a narrow direction.
  • Suppose no change going South, or going East, but big win going SE: have to flip 2 vars at once
  • Ignorance of the peak: Am I done?
  slide thanks to Russ Greiner and Dekang Lin

  16. How to escape local optima?
  • Restart every so often
  • Don’t be so greedy (more randomness)
  • Walk algorithms: With some probability, flip a randomly chosen variable instead of a best-improving variable
  • WalkSAT algorithms: Confine selection to variables in a single, randomly chosen unsatisfied clause (also faster!)
  • Simulated annealing (general technique): Probability of flipping is related to how much it helps/hurts
  • Force the algorithm to move away from current solution
  • Tabu algs: Refuse to flip back a var flipped in past t steps (see the sketch after this list)
  • Novelty algs for WalkSAT: With some probability, refuse to flip back the most recently flipped var in a clause
  • Dynamic local search algorithms: Gradually increase weight of unsatisfied clauses
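
  A minimal sketch of just the tabu idea from the list above, written as a reusable filter that any flip-based solver could call (the helper name, the deque-based history, and the parameter t=10 are illustrative assumptions):

      from collections import deque

      def make_tabu_filter(t):
          recent = deque(maxlen=t)        # variables flipped in the last t steps
          def allowed(candidates):
              ok = [v for v in candidates if v not in recent]
              return ok if ok else list(candidates)   # if everything is tabu, relax
          def record(v):
              recent.append(v)
          return allowed, record

      # Usage inside a solver loop (best_vars = most-improving candidates):
      #   allowed_vars, record_flip = make_tabu_filter(t=10)
      #   v = random.choice(allowed_vars(best_vars)); flip v; record_flip(v)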

  17. How to escape local optima?
  Lots of algorithms have been tried … Simulated annealing, genetic or evolutionary algorithms, GSAT, GWSAT, GSAT/Tabu, HSAT, HWSAT, WalkSAT/SKC, WalkSAT/Tabu, Novelty, Novelty+, R-Novelty(+), Adaptive Novelty(+), …

  18. Adaptive Novelty+ (current winner)
  • With probability 1% (the “+” part):
  • Choose randomly among all variables that appear in at least one unsatisfied clause
  • Else be greedier (other 99%):
  • Randomly choose an unsatisfied clause C
  • Flipping any variable in C will at least fix C
  • Choose the most-improving variable in C … except …
  • with probability p, ignore the most recently flipped variable in C (the “novelty” part)
  • The “adaptive” part:
  • If we improved # of satisfied clauses, decrease p slightly
  • If we haven’t gotten an improvement for a while, increase p
  • If we’ve been searching too long, restart the whole algorithm
  How do we generalize to MAX-SAT?
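
  A rough sketch of the per-step choice rule as described on this slide (the published Adaptive Novelty+ differs in details; the 1% walk probability comes from the slide, but everything else here, including leaving out the adaptive p-update and the restart logic, is a simplification for illustration):

      import random

      def satisfied(assignment, clause):
          return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

      def novelty_plus_step(assignment, clauses, last_flipped, p, rng=random):
          unsat = [c for c in clauses if not satisfied(assignment, c)]
          if not unsat:
              return None                                        # already a solution
          if rng.random() < 0.01:                                # the "+" part
              candidates = sorted({abs(lit) for c in unsat for lit in c})
              v = rng.choice(candidates)
          else:                                                  # be greedier (other 99%)
              clause = rng.choice(unsat)                         # random unsatisfied clause C
              candidates = [abs(lit) for lit in clause]
              if rng.random() < p and last_flipped in candidates and len(candidates) > 1:
                  candidates = [u for u in candidates if u != last_flipped]   # "novelty"
              def score(u):                                      # most-improving variable in C
                  assignment[u] = not assignment[u]
                  s = sum(satisfied(assignment, c) for c in clauses)
                  assignment[u] = not assignment[u]
                  return s
              v = max(candidates, key=score)
          assignment[v] = not assignment[v]
          return v                                               # remember as last_flipped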

  19. WalkSAT/SKC (first good local search algorithm for SAT, very influential)
  • Choose an unsatisfied clause C
  • Flipping any variable in C will at least fix C
  • Compute a “break score” for each var in C: flipping it would break how many other clauses?
  • If C has any vars with break score 0: pick at random among the vars with break score 0
  • else with probability p: pick at random among the vars with min break score
  • else: pick at random among all vars in C
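
  A minimal sketch of one WalkSAT/SKC step, following the rule above (p is the noise parameter; the clause representation matches the earlier sketches):

      import random

      def satisfied(assignment, clause):
          return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

      def walksat_skc_step(assignment, clauses, p, rng=random):
          unsat = [c for c in clauses if not satisfied(assignment, c)]
          if not unsat:
              return assignment                        # already a solution
          clause = rng.choice(unsat)                   # an unsatisfied clause C
          sat_before = [satisfied(assignment, c) for c in clauses]

          def break_score(v):
              # How many currently-satisfied clauses would flipping v break?
              assignment[v] = not assignment[v]
              broken = sum(was_sat and not satisfied(assignment, c)
                           for c, was_sat in zip(clauses, sat_before))
              assignment[v] = not assignment[v]
              return broken

          scores = {abs(lit): break_score(abs(lit)) for lit in clause}
          freebies = [v for v, s in scores.items() if s == 0]
          if freebies:
              v = rng.choice(freebies)                 # breaks nothing
          elif rng.random() < p:
              best = min(scores.values())
              v = rng.choice([v for v, s in scores.items() if s == best])
          else:
              v = rng.choice(list(scores))             # any var in C
          assignment[v] = not assignment[v]
          return assignment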

  20. Simulated Annealing (popular general local search technique – often very effective, rather slow)
  • Pick a variable at random. If flipping it improves the assignment: do it.
  • Else flip anyway with probability p = e^(-Δ/T), where Δ = damage to score
  • What is p for Δ = 0? For large Δ?
  • T = “temperature”
  • What is p as T tends to infinity?
  • Higher T = more random, non-greedy exploration
  • As we run, decrease T from high temperature to near 0.
  slide thanks to Russ Greiner and Dekang Lin (modified)
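
  A minimal annealing sketch for SAT along these lines (the geometric cooling schedule and the starting/ending temperatures are illustrative assumptions, not values from the slide):

      import math
      import random

      def num_satisfied(assignment, clauses):
          return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses)

      def anneal(clauses, n, steps=100000, t_start=2.0, t_end=0.01, rng=random):
          assignment = {v: rng.random() < 0.5 for v in range(1, n + 1)}
          for i in range(steps):
              if num_satisfied(assignment, clauses) == len(clauses):
                  return assignment
              T = t_start * (t_end / t_start) ** (i / steps)        # cool down over time
              v = rng.randint(1, n)                                 # pick a variable at random
              before = num_satisfied(assignment, clauses)
              assignment[v] = not assignment[v]
              delta = before - num_satisfied(assignment, clauses)   # damage to score
              if delta > 0 and rng.random() >= math.exp(-delta / T):
                  assignment[v] = not assignment[v]                 # reject the worsening flip
          return None   # gave up; does NOT prove UNSAT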

  21. Evolutionary algorithms (another popular general technique)
  • Many local searches at once
  • Consider 20 random initial assignments
  • Each one gets to try flipping 3 different variables (“reproduction with mutation”)
  • Now we have 60 assignments
  • Keep the best 20 (“natural selection”), and continue
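
  A minimal sketch of this scheme (reading “flipping 3 different variables” as three separate one-flip offspring per parent, which is one plausible interpretation; the generation count is an assumed parameter):

      import random

      def num_satisfied(assignment, clauses):
          return sum(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses)

      def evolve(clauses, n, generations=1000, pop_size=20, kids_per_parent=3, rng=random):
          population = [{v: rng.random() < 0.5 for v in range(1, n + 1)}
                        for _ in range(pop_size)]
          for _ in range(generations):
              offspring = []
              for parent in population:
                  for v in rng.sample(range(1, n + 1), kids_per_parent):
                      child = dict(parent)
                      child[v] = not child[v]          # reproduction with mutation
                      offspring.append(child)
              # Natural selection: keep the best pop_size of the 60 candidates
              population = sorted(offspring, key=lambda a: num_satisfied(a, clauses),
                                  reverse=True)[:pop_size]
              if num_satisfied(population[0], clauses) == len(clauses):
                  return population[0]
          return None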

  22. Sexual reproduction (another popular general technique – at least for evolutionary algorithms)
  • Derive each new assignment by somehow combining two old assignments, not just modifying one (“sexual reproduction” or “crossover”)
  • Good idea?
  Parent 1: 1 0 1 0 1 1 1
  Parent 2: 1 1 0 0 0 1 1
  Child 1:  1 0 1 0 0 1 1
  Child 2:  1 1 0 0 1 1 0  (mutation)
  slide thanks to Russ Greiner and Dekang Lin (modified)
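
  A minimal crossover-with-mutation sketch over bit-vector assignments (the single crossover point and the mutation rate are illustrative assumptions, not values from the slide):

      import random

      def crossover(parent1, parent2, mutation_rate=0.05, rng=random):
          cut = rng.randint(1, len(parent1) - 1)        # single crossover point
          child1 = parent1[:cut] + parent2[cut:]
          child2 = parent2[:cut] + parent1[cut:]
          for child in (child1, child2):
              for i in range(len(child)):
                  if rng.random() < mutation_rate:
                      child[i] = 1 - child[i]           # occasional mutation
          return child1, child2

      # The parents shown on the slide:
      p1 = [1, 0, 1, 0, 1, 1, 1]
      p2 = [1, 1, 0, 0, 0, 1, 1]
      c1, c2 = crossover(p1, p2)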
