SAT-solving
An old AI technique that has become very popular in modern AI
Overview:
• Basics on SAT
• Unit propagation SAT-solving
• Local Search SAT-solving
What is SAT-solving?
• Given KR: a set of propositional formulas.
• Find a model for KR. More specifically:
• Let X1, X2, …, Xn be all the variables in KR,
• Find an assignment I: Xi -> {T,F}, for i = 1..n, such that all formulas in KR become true.
Useful?
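Checking whether an assignment is a model, and finding one by brute force, can be sketched in a few lines. This is a minimal illustration (my own encoding: DIMACS-style signed integers, where 3 means X3 and -3 means ~X3), not an efficient solver:

```python
from itertools import product

def satisfies(clauses, assignment):
    """True iff every clause has at least one literal made true by `assignment`.
    `assignment` maps variable number -> bool; literal -3 is true when X3 is False."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def brute_force_sat(clauses):
    """Enumerate all 2^n interpretations; return the first model, or None."""
    variables = sorted({abs(l) for c in clauses for l in c})
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(clauses, assignment):
            return assignment
    return None

# (X1 v ~X2) ^ (X2 v X3) ^ (~X1 v ~X3)
model = brute_force_sat([[1, -2], [2, 3], [-1, -3]])
```

The exponential enumeration is exactly what the solvers in the rest of these slides try to avoid.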
Example: 3-queens
Representation: Xi,j represents: there is a queen on square (i,j), for the squares X1,1 … X3,3.
At least one queen on each row: Xi,1 ∨ Xi,2 ∨ Xi,3 (i = 1..3)
At most one queen on each row: Xi,1 → ~Xi,2 ∧ ~Xi,3, Xi,2 → ~Xi,1 ∧ ~Xi,3, Xi,3 → ~Xi,1 ∧ ~Xi,2 (i = 1..3)
Plus: similar formulas for columns and diagonals. 33 formulas!
Generalisation to q-queens? Very many formulas!
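The "very many formulas" can be generated mechanically rather than written by hand. A sketch for the row constraints only (my own encoding: variable (i-1)*n + j stands for a queen on square (i,j); the "at most one" implications become pairwise binary clauses):

```python
def queens_row_clauses(n):
    """CNF clauses (signed-integer literals) for the row constraints of n-queens."""
    var = lambda i, j: (i - 1) * n + j          # queen on square (i, j)
    clauses = []
    for i in range(1, n + 1):
        # at least one queen on row i: Xi,1 v ... v Xi,n
        clauses.append([var(i, j) for j in range(1, n + 1)])
        # at most one queen on row i: ~Xi,j v ~Xi,k for each pair j < k
        for j in range(1, n + 1):
            for k in range(j + 1, n + 1):
                clauses.append([-var(i, j), -var(i, k)])
    return clauses

rows = queens_row_clauses(3)   # 3 "at least one" + 9 "at most one" clauses
```

Column and diagonal constraints would be generated the same way, which is how the encoding scales to q-queens.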
Example: personnel rostering
Representation:
• For every employee i, for every shift in the day j, for every day in the month k:
Xi,j,k represents: i works on shift j of day k.
At least one person works on every shift of every day:
X1,j,k ∨ X2,j,k ∨ … ∨ X35,j,k (j = 1..5, k = 1..30)
An interpretation assigns true or false to each Xi,j,k: it is a personnel assignment!
Often this is more elegantly done with assignments to {0,1} and sums instead of ∨.
Generalise to higher numbers than just 1.
SAT-solving and NP-completeness
• SAT was one of the first problems identified as NP-complete.
• Means: all other NP problems reduce to this problem.
• If you find a P-algorithm for SAT, you get a P-algorithm for all the others.
• Also means: most such problems can be encoded as SAT problems and solved using SAT techniques.
Although NP(-complete)…
• This has led to: many areas in CS and AI convert their problems to SAT, then use SAT-solvers.
• But why is this practically useful? Very efficient heuristic approaches exist that work well on certain classes of problems.
Example: Automated Reasoning
Compute a finite grounding of the predicate logic theory.
• Marcus example: 2 constants: Marcus and Caesar.
1. man(Marcus) is ground
2. Pompeian(Marcus) is ground
3. ∀x Pompeian(x) → Roman(x) becomes:
Pompeian(Marcus) → Roman(Marcus)
Pompeian(Caesar) → Roman(Caesar)
4. ruler(Caesar) is ground
5. ∀x Roman(x) → loyal_to(x,Caesar) ∨ hates(x,Caesar) becomes:
Roman(Marcus) → loyal_to(Marcus,Caesar) ∨ hates(Marcus,Caesar)
Roman(Caesar) → loyal_to(Caesar,Caesar) ∨ hates(Caesar,Caesar)
Etc. …
Example: continued
• Ground predicate logic is equivalent to propositional logic!
Example: Pompeian(Marcus) converts to Pompeian_Marcus
loyal_to(Marcus,Caesar) converts to loyal_to_Marcus_Caesar
• Add the propositional version of the negation of the theorem:
the theorem ~loyal_to(Marcus,Caesar) negates to loyal_to(Marcus,Caesar), which converts to loyal_to_Marcus_Caesar
• The theorem follows if and only if this propositional KB is unsatisfiable! SAT-solving
Conjunctive Normal Form
• Every formula is equivalent to a formula of the form:
(A1 ∨ ... ∨ An) ∧ (B1 ∨ … ∨ Bm) ∧ … ∧ (C1 ∨ … ∨ Ck)
• where all Ai, Bi, …, Ci are either atomic or ~atomic.
Idea:
• eliminate implications: p → q becomes ~p ∨ q
• push all ~ as deep as possible (De Morgan, double negation)
• apply distributivity of ∨ over ∧
• SAT-solving will work on a collection of disjunctions:
X1 ∨ … ∨ Xn ∨ ~Y1 ∨ … ∨ ~Ym
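The three rewriting steps can be turned directly into a small converter. A sketch under my own formula encoding (nested tuples with operator tags; atoms are strings), not an optimized CNF transformation:

```python
def to_cnf(f):
    """Convert a propositional formula to CNF. Formulas are nested tuples:
    ('imp', p, q), ('and', p, q), ('or', p, q), ('not', p); atoms are strings."""
    return distribute(nnf(f))

def nnf(f):
    """Eliminate -> and push ~ inwards (negation normal form)."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'imp':                               # p -> q  ==  ~p v q
        return nnf(('or', ('not', f[1]), f[2]))
    if op == 'not':
        g = f[1]
        if isinstance(g, str):
            return f
        if g[0] == 'not':                         # ~~p == p
            return nnf(g[1])
        if g[0] == 'and':                         # De Morgan: ~(p ^ q) == ~p v ~q
            return nnf(('or', ('not', g[1]), ('not', g[2])))
        if g[0] == 'or':                          # De Morgan: ~(p v q) == ~p ^ ~q
            return nnf(('and', ('not', g[1]), ('not', g[2])))
        if g[0] == 'imp':                         # ~(p -> q) == p ^ ~q
            return nnf(('and', g[1], ('not', g[2])))
    return (op, nnf(f[1]), nnf(f[2]))             # op is 'and' or 'or'

def distribute(f):
    """Distribute v over ^ : p v (q ^ r) == (p v q) ^ (p v r)."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    a, b = distribute(f[1]), distribute(f[2])
    if f[0] == 'and':
        return ('and', a, b)
    if isinstance(a, tuple) and a[0] == 'and':
        return distribute(('and', ('or', a[1], b), ('or', a[2], b)))
    if isinstance(b, tuple) and b[0] == 'and':
        return distribute(('and', ('or', a, b[1]), ('or', a, b[2])))
    return ('or', a, b)

cnf = to_cnf(('imp', 'p', ('and', 'q', 'r')))     # (~p v q) ^ (~p v r)
```

The distribution step is where the exponential blow-up discussed later can occur.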
Naive SAT-solving Depth-first left-to-right enumeration of all interpretations
SAT – standard backtracking
[Flattened search-tree figure. The clauses are X ∨ ~Y ∨ ~W, ~X ∨ Y ∨ Z, ~X ∨ W; variables are tried depth-first in the fixed order X, Y, then Z and W, each with T before F. A branch fails as soon as some clause reduces to F and succeeds when all clauses reduce to T; success is only reached after several failing branches.]
Naïve SAT-solving algorithm
Form is a CNF formula with variables X1, X2, ..., Xn
S := { Dj | Dj is a disjunction in Form }

NaiveSAT(i):
  For Truth = T, F do
    Xi := Truth;
    Remove all Dj containing T from S;
    Remove all F's from all Dj;
    If no Dj in S is equal to F then
      If S = {} then return (X1, X2, …, Xn);
      Else i := i + 1; NaiveSAT(i); i := i - 1;
  End-For

Call: NaiveSAT(1)
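The pseudocode above can be written as a runnable recursive function. A sketch using signed-integer literals (my own representation); it returns a satisfying assignment over the variables still occurring in the clauses, or None:

```python
def naive_sat(clauses, assignment=None):
    """Depth-first enumeration, simplifying the clause set at each assignment.
    Clauses are lists of signed-int literals: 2 means X2, -2 means ~X2."""
    if assignment is None:
        assignment = {}
    if not clauses:
        return assignment                       # S = {}: every clause satisfied
    x = min(abs(l) for c in clauses for l in c)  # next variable to branch on
    for value in (True, False):
        simplified, consistent = [], True
        for clause in clauses:
            if (x if value else -x) in clause:  # clause contains T: drop it
                continue
            rest = [l for l in clause if abs(l) != x]  # remove F literals
            if not rest:                        # clause reduced to F
                consistent = False
                break
            simplified.append(rest)
        if consistent:
            result = naive_sat(simplified, {**assignment, x: value})
            if result is not None:
                return result
    return None                                 # both truth values failed

# X v ~Y v ~W, ~X v Y v Z, ~X v W   (X=1, Y=2, Z=3, W=4)
model = naive_sat([[1, -2, -4], [-1, 2, 3], [-1, 4]])
```

Unlike the slide's pseudocode, the recursion passes the simplified clause set down instead of mutating a global S, so no undo step is needed on backtracking.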
Davis-Putnam (1960): unit propagation
The basis for VERY efficient SAT-solvers
An early form of Dynamic Search Rearrangement
1. Dealing with pure symbols
• A variable Xi is pure if it only appears with one sign in all disjunctions.
• Assign a pure variable the value that makes it true everywhere (don't consider the other assignment).
Example: S = { X ∨ ~Y, ~Y ∨ ~Z, Z ∨ X }
X is pure: only occurs positive. Y is pure: only occurs negative. Z is not pure.
SAT + treating pure symbols
Clauses: X ∨ ~Y ∨ ~W, ~X ∨ Y ∨ Z, ~X ∨ W
Z is pure: Z = T → X ∨ ~Y ∨ ~W, ~X ∨ Y ∨ T, ~X ∨ W (second clause satisfied)
Y is pure: Y = F → X ∨ T ∨ ~W, ~X ∨ W (first clause satisfied)
X is pure: X = F → T ∨ W: Success
Effect of dealing with pure symbols:
• We get a _much_ smaller search space!
• The order of choosing variables has become dynamic.
• But we are no longer considering all assignments.
• Yet: we do not lose completeness:
• If another solution exists, then the current assignment is OK too.
• If unsatisfiable: this will fail too. But not first-fail based.
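Pure-symbol elimination is easy to implement on the signed-integer clause representation; this is an illustrative sketch, not the exact Davis-Putnam routine:

```python
def assign_pure_literals(clauses):
    """Repeatedly find pure variables (one sign everywhere), give them the value
    that makes them true, and drop every clause they appear in.
    Returns (remaining clauses, partial assignment)."""
    assignment = {}
    while True:
        literals = {l for c in clauses for l in c}
        pure = [l for l in literals if -l not in literals]
        if not pure:
            return clauses, assignment
        for l in pure:
            assignment[abs(l)] = l > 0          # the value making the literal true
        clauses = [c for c in clauses if not any(l in pure for l in c)]

# X v ~Y v ~W, ~X v Y v Z, ~X v W  (X=1, Y=2, Z=3, W=4):
# Z is pure first, then Y becomes pure, then X and W
remaining, assigned = assign_pure_literals([[1, -2, -4], [-1, 2, 3], [-1, 4]])
```

On this example every clause is eliminated by pure literals alone, matching the search-free trace on the previous slide.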
2. Unit propagation
• A disjunction is unit if it only contains one literal.
• Assign a unit the value that makes it true (don't consider the other assignment).
Example: S = { X ∨ ~Y, ~Y, X }
~Y and X are units
SAT – with unit propagation
Clauses: X ∨ ~Y ∨ ~W, ~X ∨ Y ∨ Z, ~X ∨ W
X = T: remaining clauses Y ∨ Z and W. W is unit! W = T.
Y = T: Success. Y = F: Z is unit! Z = T: Success.
X = F: remaining clause ~Y ∨ ~W.
Y = T: ~W is unit! W = F: Success. Y = F: Success.
Effect of unit propagation
• Smaller search space again (but this example is not well suited for unit propagation).
• Obviously: combine both optimizations!
• Many more optimizations in real SAT-solvers:
• - Component analysis
• - More dynamic search rearrangement, e.g. take the most frequent variable first
• - Intelligent backtracking, indexing, …
Marcus as SAT-solving:
• Part of the grounding of the Marcus example, plus the negation of the theorem:
man_Marcus,
ruler_Caesar,
try_assassinate_Marcus_Caesar,
loyal_to_Marcus_Caesar, (the negated theorem)
~man_Marcus ∨ ~ruler_Caesar ∨ ~try_assassinate_Marcus_Caesar ∨ ~loyal_to_Marcus_Caesar,
...
4 unit propagations: man_Marcus = T, ruler_Caesar = T, try_assassinate_Marcus_Caesar = T, loyal_to_Marcus_Caesar = T
The last clause becomes F ∨ F ∨ F ∨ F = F: Fails! Theorem proved.
SAT-solving by Local Search
WalkSAT algorithm
Local Search representation:
Xi, i = 1..n: the propositional variables
Dj, j = 1..m: the disjunctions in the CNF
States: n-tuples (T,F,F,T, …,F) = an interpretation for the Xi's
Neighbors: flip one truth value of an Xi in a failing Dj
Objective function: the number of Dj's that evaluate to F; find the global minimum
To avoid local minima: probabilistically do NOT take the best flip.
The WalkSAT algorithm:

Max_flips := some number; P := some probability;
S := { Dj | Dj is a disjunction };
State := some interpretation for X1, X2, ..., Xn;
For i = 1 to Max_flips do
  If all Dj's are true in State then return State;
  Else
    Disj := a random Dj that is false under State;
    With probability P: flip a random Xi in Disj;
    Else: flip the Xi of Disj that minimizes the number of false Dj's;
End-For
Report Failure
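The pseudocode translates almost line for line into Python. A sketch on signed-integer literals (parameter defaults are my own illustrative choices):

```python
import random

def walksat(clauses, p=0.5, max_flips=10000):
    """WalkSAT: from a random interpretation, repeatedly pick a false clause and
    flip one of its variables: randomly with probability p, greedily otherwise."""
    variables = list({abs(l) for c in clauses for l in c})
    state = {v: random.choice([True, False]) for v in variables}

    def num_false(s):
        return sum(not any(s[abs(l)] == (l > 0) for l in c) for c in clauses)

    for _ in range(max_flips):
        false_clauses = [c for c in clauses
                         if not any(state[abs(l)] == (l > 0) for l in c)]
        if not false_clauses:
            return state                        # all Dj true in State: a model
        clause = random.choice(false_clauses)   # a random false Dj
        if random.random() < p:
            var = abs(random.choice(clause))    # random-walk step
        else:                                   # greedy step: best flip in clause
            def cost(v):
                state[v] = not state[v]         # tentatively flip...
                n = num_false(state)
                state[v] = not state[v]         # ...and undo
                return n
            var = min((abs(l) for l in clause), key=cost)
        state[var] = not state[var]
    return None                                 # give up: report failure

# X v ~Y v ~W, ~X v Y v Z, ~X v W  (X=1, Y=2, Z=3, W=4)
model = walksat([[1, -2, -4], [-1, 2, 3], [-1, 4]])
```

Note that returning None does not prove unsatisfiability: unlike backtracking search, WalkSAT is incomplete.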
Evaluation:
• Unclear whether optimized systematic SAT-solving or Local Search is better,
• but today more people are using local search.
• Efficiency of the methods depends on whether problems are under-constrained or over-constrained:
• see Russell & Norvig for a discussion.
What about Disjunctive Normal Form?
• DNF: a dual representation for propositional formulas:
(A1 ∧ ... ∧ An) ∨ (B1 ∧ … ∧ Bm) ∨ … ∨ (C1 ∧ … ∧ Ck)
• Satisfiability of DNF can be checked in linear time:
• find one conjunction that does not contain both a variable and the negation of that variable.
But: conversion to DNF requires exponential time and exponential space!
• CNF and DNF are dual: so conversion to CNF is also exponential in the worst case!!!
So why do we still prefer CNF?? Think about it.
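The linear-time check is a one-liner on the signed-integer representation; a conjunction is consistent exactly when it contains no complementary pair of literals:

```python
def dnf_satisfiable(dnf):
    """A DNF formula (a list of conjunctions, each a list of signed-int literals)
    is satisfiable iff some conjunction contains no pair X, ~X."""
    return any(all(-l not in term for l in term) for term in dnf)

sat = dnf_satisfiable([[1, -1], [2, -3]])    # (p ^ ~p) v (q ^ ~r): second term OK
unsat = dnf_satisfiable([[1, -1], [2, -2]])  # every term is contradictory
```

The easy check on DNF is no contradiction with NP-completeness, because getting a formula into DNF is where the exponential cost hides.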