Exact Model Counting: Limitations of SAT-Solver Based Methods
Paul Beame, Jerry Li, Sudeepa Roy, Dan Suciu (University of Washington) [UAI '13], [ICDT '14]
Model Counting
• Model Counting Problem: Given a Boolean formula/circuit F, compute #F = number of models (satisfying assignments) of F
• Traditional cases of interest: F is CNF or DNF
• Recent: F is given by a small circuit from a class of simple circuits
• Probability Computation Problem: Given F and independent probabilities Pr(x), Pr(y), Pr(z), … for its variables, compute Pr(F)
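As a baseline, both problems can be solved by brute-force enumeration of all 2^n assignments. A minimal Python sketch (the formula and probabilities here are illustrative examples, not taken from the talk):

```python
from itertools import product

def count_and_prob(f, probs):
    """Brute-force #F and Pr(F), given independent variable probabilities."""
    count, prob = 0, 0.0
    for a in product([False, True], repeat=len(probs)):
        if f(a):
            count += 1
            w = 1.0
            for val, p in zip(a, probs):
                w *= p if val else 1 - p  # independence: multiply marginals
            prob += w
    return count, prob

# Example: F = (x ∨ y) ∧ (¬x ∨ z), with uniform Pr = 1/2 per variable
F = lambda a: (a[0] or a[1]) and (not a[0] or a[2])
print(count_and_prob(F, [0.5, 0.5, 0.5]))  # (4, 0.5): 4 models out of 8
```

Enumeration takes time exponential in the number of variables; the DPLL-based techniques in this deck prune and reuse work instead.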
Model Counting
• #P-hard, even for formulas where satisfiability is easy to check
• Practical model counters can nonetheless compute #F or Pr(F) for many CNF formulas with hundreds to tens of thousands of variables
Exact Model Counters for CNF
Search-based / DPLL-based (explore the assignment space and count the satisfying assignments):
• CDP [Birnbaum et al. '99]
• Relsat [Bayardo Jr. et al. '97, '00]
• Cachet [Sang et al. '05]
• SharpSAT [Thurley '06]
Knowledge-compilation-based (compile F into a "computation-friendly" form):
• c2d [Darwiche '04]
• Dsharp [Muise et al. '12]
• … [survey by Gomes et al. '09]
Both techniques explicitly or implicitly:
• use DPLL-based algorithms
• produce FBDD or Decision-DNNF compiled forms [Huang-Darwiche '05, '07]
Compiled Size vs Search Time
Desiderata:
• Compiled format makes model counting simple
• Compiled format is concise
• Compiled format is easy to find
Compiled size ≤ search time, even if construction of the compiled form is only implicit. The gap can be exponential in the number of variables: e.g. an UNSAT formula has constant compiled size, while search may take exponential time to discover unsatisfiability.
Model Counters Use Extensions to DPLL
• Caching subformulas: Cachet, SharpSAT, c2d, Dsharp
• Component analysis: Relsat, c2d, Cachet, SharpSAT, Dsharp
• Conflict-directed clause learning: Cachet, SharpSAT, c2d, Dsharp
Traces:
• DPLL + caching (+ clause learning) → FBDD
• DPLL + caching + components (+ clause learning) → Decision-DNNF
How much does component analysis help? I.e., how much smaller are Decision-DNNFs than FBDDs?
Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems
DPLL Algorithms
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
// basic DPLL:
Function Pr(F):
  if F = false then return 0
  if F = true then return 1
  select a variable x, return ½ Pr(F_{x=0}) + ½ Pr(F_{x=1})
[Figure: recursion tree rooted at x; the x=1 branch leaves u ∨ w ∨ z (Pr 7/8), the x=0 branch leaves y ∧ (u ∨ w) (Pr 3/8); overall Pr(F) = 5/8]
Assume the uniform distribution for simplicity
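The recursion on this slide can be sketched directly in Python. This is an illustrative implementation, not the pseudocode's official companion code: the clause-set representation and helper names are mine, and the example formula restores the negations that were stripped from the slide text, so that Pr(F) = 5/8 matches the slide's numbers.

```python
from fractions import Fraction

def condition(cnf, var, val):
    """Residual CNF after setting var=val; None means F became false.
    A CNF is a list of clauses; a clause is a set of signed ints."""
    lit = var if val else -var
    out = []
    for clause in cnf:
        if lit in clause:
            continue                  # clause satisfied: drop it
        rest = clause - {-lit}        # falsified literal removed
        if not rest:
            return None               # empty clause: unsatisfiable
        out.append(rest)
    return out

def pr(cnf):
    """Basic DPLL: Pr(F) under the uniform distribution."""
    if cnf is None:
        return Fraction(0)            # F = false
    if not cnf:
        return Fraction(1)            # F = true
    x = abs(next(iter(cnf[0])))       # select a variable
    return (pr(condition(cnf, x, False)) + pr(condition(cnf, x, True))) / 2

# F = (x ∨ y)(x ∨ u ∨ w)(¬x ∨ u ∨ w ∨ z); variables 1..5 = x, y, u, w, z
F = [{1, 2}, {1, 3, 4}, {-1, 3, 4, 5}]
print(pr(F))  # → 5/8
```

The trace of the recursion is exactly the decision tree on the next slide.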
DPLL Algorithms
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
The trace of the algorithm is a decision tree for F: each internal node is a decision node testing one variable, with a 0-branch and a 1-branch.
[Figure: the recursion tree from the previous slide, viewed as a decision tree]
Caching
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
// DPLL with caching:
Cache F and Pr(F); look it up before computing
[Figure: the two occurrences of the subformula u ∨ w are shared via the cache, turning the decision tree into a DAG]
Caching & FBDDs
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
• The trace is a decision-DAG for F
• Every variable is tested at most once on any path
• This is an FBDD (Free Binary Decision Diagram), also known as a 1-BP (read-once branching program)
Component Analysis
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
// DPLL with component analysis (and caching):
if F = G ∧ H where G and H have disjoint sets of variables:
  Pr(F) = Pr(G) × Pr(H)
[Figure: under x = 0 the residual formula y ∧ (u ∨ w) splits into the independent components y and u ∨ w]
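Caching and component analysis can be layered on the same recursion. A hedged sketch of how the two extensions combine; the representation, the greedy component-finding code, and the example formula (with its stripped negations restored) are mine, not the actual implementation of any of the tools named above:

```python
from fractions import Fraction

def condition(cnf, var, val):
    """Residual CNF after setting var=val; None means F became false."""
    lit = var if val else -var
    out = []
    for clause in cnf:
        if lit in clause:
            continue
        rest = clause - {-lit}
        if not rest:
            return None
        out.append(rest)
    return out

def components(cnf):
    """Greedily merge clauses that share variables into components."""
    comps = []  # list of (variable set, clause list)
    for clause in cnf:
        vs = {abs(l) for l in clause}
        cls = [clause]
        for comp in comps[:]:
            if vs & comp[0]:
                comps.remove(comp)
                vs |= comp[0]
                cls += comp[1]
        comps.append((vs, cls))
    return [cls for _, cls in comps]

def pr(cnf, cache=None):
    """DPLL + caching + component analysis, uniform distribution."""
    if cache is None:
        cache = {}
    if cnf is None:
        return Fraction(0)
    if not cnf:
        return Fraction(1)
    key = frozenset(frozenset(c) for c in cnf)
    if key in cache:                   # caching: reuse Pr of subformulas
        return cache[key]
    comps = components(cnf)
    if len(comps) > 1:                 # component analysis:
        res = Fraction(1)              # Pr(G ∧ H) = Pr(G) · Pr(H)
        for g in comps:
            res *= pr(g, cache)
    else:
        x = abs(next(iter(cnf[0])))
        res = (pr(condition(cnf, x, False), cache)
               + pr(condition(cnf, x, True), cache)) / 2
    cache[key] = res
    return res

F = [{1, 2}, {1, 3, 4}, {-1, 3, 4, 5}]
print(pr(F))  # → 5/8
```

Under x = 0 the residual formula [{2}, {3, 4}] splits into two components, so the recursion multiplies Pr(y) · Pr(u ∨ w) = ½ · ¾ = ⅜, exactly the decomposable AND-node of the next slide.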
Components & Decision-DNNF
F: (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z)
• The trace is a Decision-DNNF [Huang-Darwiche '05, '07]
• FBDD + "decomposable" AND-nodes (the two sub-DAGs of an AND-node do not share variables)
[Figure: the decision node for x; the x = 0 branch leads to an AND-node whose children are y and u ∨ w]
How much power do the AND-nodes add?
New Conversion Theorem
• Theorem: A Decision-DNNF for F of size N can be converted into an FBDD for F of size N^(log N + 1)
• If F is a k-DNF or k-CNF, the FBDD is of size N^k
• The conversion algorithm runs in linear time in the size of its output
Decomposable Logic Decision Diagrams (DLDDs)
Generalization of Decision-DNNFs:
• not just decomposable AND-nodes
• also NOT-nodes and decomposable binary OR, XOR, etc.
• the sub-DAGs of each such node are labelled by disjoint sets of variables
Theorem: The conversion works even for DLDDs
Implications
• Many previous exponential lower bounds for 1-BPs/FBDDs
• 2^Ω(n) lower bounds for certain 2-DNF formulas based on combinatorial designs [Bollig-Wegener '00], [Wegener '02]
• Our conversion theorem implies 2^Ω(n) bounds on Decision-DNNF size, and hence on SAT-solver based exact model counters
Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems
Applications of Exact Model Counters
• Finite model theory: first-order formulas over finite domains
• Bayesian inference
• Statistical relational models: combinations of logic and probability
• Probabilistic databases: monotone restrictions of statistical relational models
Relational Databases
Boolean query Q: ∃x ∃y AsthmaPatient(x) ∧ Friend(x, y) ∧ Smoker(y)
[Figure: a database instance D with relations AsthmaPatient, Friend, and Smoker]
Probabilistic Databases
• Tuples are probabilistic (and independent): "Ann" is present with probability 0.3
• What is the probability that Q is true on D?
• Assign a unique Boolean variable to each tuple, e.g. Pr(x1) = 0.3
Boolean query Q: ∃x ∃y AsthmaPatient(x) ∧ Friend(x, y) ∧ Smoker(y)
Boolean formula F_{Q,D} = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)
Q is true on D ⟺ F_{Q,D} is true
[Figure: tables with tuple variables x1, x2 (AsthmaPatient), y1, y2, y3 (Friend), z1, z2 (Smoker) and their probabilities]
Probabilistic Databases
• Query probability computation = model counting: compute Pr(F_{Q,D}) given Pr(x1), Pr(x2), …
• Monotone database query Q ⟹ monotone k-DNF F_{Q,D}
Boolean formula F_{Q,D} = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)
Q is true on D ⟺ F_{Q,D} is true
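Concretely, Pr(F_{Q,D}) is the total weight of the satisfying assignments of the lineage formula under the tuple probabilities. A small sketch for the slide's formula; the probability values below are made-up stand-ins, since the deck's exact table did not survive extraction:

```python
from itertools import product

# Hypothetical tuple probabilities (illustrative, not the deck's table)
p = {'x1': 0.3, 'x2': 0.5, 'y1': 0.5, 'y2': 1.0, 'y3': 0.7, 'z1': 0.9, 'z2': 0.1}

def F(a):  # F_{Q,D} = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)
    return ((a['x1'] and a['y1'] and a['z1'])
            or (a['x1'] and a['y2'] and a['z2'])
            or (a['x2'] and a['y3'] and a['z2']))

names = list(p)
prob = 0.0
for vals in product([False, True], repeat=len(names)):
    a = dict(zip(names, vals))
    if F(a):
        w = 1.0
        for n in names:
            w *= p[n] if a[n] else 1 - p[n]   # tuples are independent
        prob += w
print(round(prob, 6))  # → 0.176 with these illustrative probabilities
```

This brute force is exponential in the number of tuples; the point of the next slides is that for some queries no DPLL-style method can do fundamentally better, while lifted inference can.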
A Class of DB Examples
H1(x,y) = (R(x) ∨ S(x,y)) ∧ (S(x,y) ∨ T(y))
Hk(x,y) = (R(x) ∨ S1(x,y)) ∧ … ∧ (Si(x,y) ∨ Si+1(x,y)) ∧ … ∧ (Sk(x,y) ∨ T(y)), with the conjuncts denoted hk0, …, hki, …, hkk
Dichotomy Theorem [Dalvi, Suciu '12]: Model counting for a Boolean combination of hk0, …, hkk is either
• #P-hard, e.g. Hk, or
• polynomial-time computable using "lifted inference" (inclusion-exclusion), e.g. (h30 ∨ h32) ∧ (h30 ∨ h33) ∧ (h31 ∨ h33)
and there is a simple condition to tell which case holds
New Lower Bounds
Theorem: Any Boolean function f of hk0, …, hkk that depends on all of them requires FBDD(f) = 2^Ω(n), which implies Decision-DNNF(f) = 2^Ω(√n), and Decision-DNNF(f) = 2^Ω(n/k) if f is monotone.
Corollary: SAT-solver based exact model counting requires 2^Ω(√n) time even on probabilistic DB instances that have polynomial-time algorithms using "lifted inference".
"Lifted" vs "Grounded" Inference
• "Grounded" inference: work with propositional groundings of the first-order expressions given by the model
• "Lifted" inference: work with the first-order formulas and do higher-level calculations
Folklore sentiment: lifted inference is strictly stronger than grounded inference. Our examples give the first clear proof of this.
Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems
Proof of Simulation
Efficient construction: Decision-DNNF of size N → FBDD of size N^(log N + 1), or size N^k when the Decision-DNNF represents a k-DNF
Decision-DNNF → FBDD
Convert decomposable AND-nodes to decision nodes while representing the same formula F
First Attempt
Replace an AND-node with sub-DAGs G and H by rewiring: redirect the 1-sink of G to the root of H.
G and H do not share variables, so every variable is still tested at most once on any path.
But What If Sub-DAGs Are Shared?
If the sub-DAG H is shared by two AND-nodes, paired once with G and once with G′, the rewired exits of H would have to lead to different places depending on which AND-node the path entered through: conflict!
Obvious Solution: Replicate Nodes
Give each AND-node its own copy of the shared sub-DAG; with no sharing there is no conflict, and the original idea applies.
But replication may need to be applied recursively, and can cause an exponential blowup!
Main Idea: Replicate the Smaller Sub-DAG
Each AND-node creates a private copy of its smaller sub-DAG; edges coming from other nodes in the Decision-DNNF continue to point to the shared larger sub-DAG.
Light and Heavy Edges
Call the AND-edge into the smaller sub-DAG light, and the edge into the larger sub-DAG heavy.
• Each AND-node creates a private copy of its smaller sub-DAG
• Recursively, each node u is replicated once for every time it lies inside a smaller sub-DAG
• #copies of u = #sequences of light edges leading to u
Quasipolynomial Conversion
Let L = max #light edges on any path. Each light edge enters a sub-DAG of at most half the remaining size (N = N_small + N_big ≥ 2·N_small ≥ … ≥ 2^L), so L ≤ log N.
#copies of each node ≤ N^L ≤ N^(log N)
#nodes in the FBDD ≤ N · N^(log N) = N^(log N + 1)
We also show that this analysis is tight.
Polynomial Conversion for k-DNFs
L = max #light edges on any path ≤ k − 1
#nodes in the FBDD ≤ N · N^L ≤ N^k
Summary
• Quasipolynomial conversion of any Decision-DNNF or DLDD into an FBDD (polynomial for k-DNFs and k-CNFs)
• Exponential lower bounds on SAT-solver based exact model counting algorithms
• Applications in probabilistic databases: simple monotone 2-DNF formulas where lifted inference is exponentially faster than propositional model counting
Separation Results
• FBDD: decision-DAG in which each variable is tested at most once along any path
• Decision-DNNF: FBDD + decomposable AND-nodes (disjoint sub-DAGs)
• AND-FBDD: FBDD + AND-nodes, not necessarily decomposable [Wegener '00]
• d-DNNF: decomposable AND-nodes + OR-nodes whose sub-DAGs are not simultaneously satisfiable [Darwiche '01, Darwiche-Marquis '02]
Exponential separation: there are functions with poly-size AND-FBDDs or d-DNNFs for which our results give an exponential lower bound on Decision-DNNF size.
Open Problems
• A polynomial conversion of Decision-DNNFs to FBDDs? We have examples that we believe require quasipolynomial blowup.
• What about SDDs [Darwiche '11]? Other syntactic subclasses of d-DNNFs?
• Approximate model counting?
Thank You Questions?