Quadratic Solvability: The PCP Starting Point
Overview • In this lecture we’ll present the Quadratic Solvability problem. • We’ll see this problem is closely related to PCP. • And even use it to prove a (very very weak...) PCP characterization of NP.
Quadratic Solvability Def (QS[D, F]): Instance: a set of n quadratic equations over a field F with at most D variables each — or equally, a set of D-variable total-degree-2 polynomials. Problem: to find if there is a common solution. Example (F = Z2, D = 1): y = 0, x² + x = 0, x² + 1 = 0; the assignment x = 1, y = 0 satisfies all three equations.
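The definition above can be made concrete with a brute-force checker; this is a toy sketch (the function name and equation encoding are ours, not the lecture's), using the slides' Z2 example:

```python
from itertools import product

# A hypothetical brute-force checker for QS[D, F] with F = Z_p:
# try every assignment in Z_p^n and test all equations.
def has_common_solution(equations, variables, p=2):
    """Return a common solution over Z_p, or None if none exists."""
    for values in product(range(p), repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(eq(assignment) % p == 0 for eq in equations):
            return assignment
    return None

# The slides' example over Z_2 with D = 1:
#   y = 0,  x^2 + x = 0,  x^2 + 1 = 0
eqs = [
    lambda a: a["y"],
    lambda a: a["x"] ** 2 + a["x"],
    lambda a: a["x"] ** 2 + 1,
]
sol = has_common_solution(eqs, ["x", "y"])
print(sol)  # x = 1, y = 0 satisfies all three equations over Z_2
```

Of course, brute force takes time p^n; the point of the lecture is the complexity of this problem, not how to solve it.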
Solvability A generalization of this problem: Def (Solvability[D, F]): Instance: a set of n polynomials over F with at most D variables each, where each polynomial has degree-bound n in each one of its variables. Problem: to find if there is a common root.
Solvability is Reducible to QS: For example, the equation y²x² + x²t + t^l·z + z + 1 = 0 becomes w1·w2 + w2·t + w3·z + z + 1 = 0 after introducing auxiliary variables w1 = y², w2 = x², w3 = t^l (each definition is itself added as an equation; a high power such as t^l is defined through a chain of such substitutions). The parameters (D, F) don't change (assuming D>2)! Could we use the same "trick" to show Solvability is reducible to Linear Solvability?
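The substitution step can be sketched in code. This is a minimal illustration (the representation of monomials as tuples of variable names, and the names w1, w2, ..., are our assumptions, not the lecture's notation):

```python
from itertools import count

# Sketch of the Solvability -> QS reduction step: repeatedly name a
# product of two variables with a fresh variable w, adding the defining
# quadratic equation w = u*v, until the monomial has degree <= 2.
# A power t^l is just the tuple ('t',) * l, so it is handled by the
# same chain of substitutions.
fresh = count(1)

def reduce_monomial(monomial, defs):
    """monomial: tuple of variable names, e.g. ('y','y','x','x') = y^2*x^2."""
    m = list(monomial)
    while len(m) > 2:
        u, v = m.pop(), m.pop()
        w = f"w{next(fresh)}"
        defs.append((w, u, v))   # records the quadratic equation w - u*v = 0
        m.append(w)
    return tuple(m), defs

defs = []
reduced, defs = reduce_monomial(("y", "y", "x", "x"), defs)
print(reduced, defs)
# The reduced monomial has degree <= 2, and every auxiliary equation
# w = u*v is itself quadratic, so D and F are unchanged (for D > 2).
```

This also answers the rhetorical question: the trick cannot reach Linear Solvability, because the defining equation w = u·v is itself quadratic, not linear.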
QS is NP-hard Let us prove that QS is NP-hard by reducing 3-SAT to it: Def (3-SAT): Instance: a 3CNF formula (l1 ∨ l2 ∨ l3) ∧ ... ∧ (l[m-2] ∨ l[m-1] ∨ l[m]), where each literal li ∈ {xj, ¬xj} for some 1 ≤ j ≤ n. Problem: to decide if this formula is satisfiable.
QS is NP-hard Given an instance of 3-SAT, use the following transformation on each literal: Tr[xi] = 1 - xi and Tr[¬xi] = xi. Each clause (li ∨ li+1 ∨ li+2) becomes the equation Tr[li] * Tr[li+1] * Tr[li+2] = 0. The corresponding instance of Solvability is the set of all resulting polynomials (which, assuming the variables are only assigned Boolean values, is equivalent to the formula).
QS is NP-hard In order to remove the Boolean assumption we need to add, for every variable xi, the equation xi * (1 - xi) = 0. This concludes the description of a reduction from 3-SAT to Solvability[O(1), F] for any field F. What is the maximal degree of the resulting equations?
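The whole clause translation can be written out as a short sketch (the string-based encoding and signed-literal representation are illustrative choices of ours):

```python
# Sketch of the 3-SAT -> Solvability translation from the slides:
# literal x_i maps to 1 - x_i, literal NOT x_i maps to x_i, and a clause
# becomes the product of its translated literals, required to equal 0.
# Literals are encoded as (name, positive?) pairs, e.g. ("x1", True).

def tr(literal):
    var, positive = literal
    return f"(1 - {var})" if positive else var

def clause_to_equation(clause):
    """(l1 OR l2 OR l3)  ->  Tr[l1] * Tr[l2] * Tr[l3] = 0."""
    return " * ".join(tr(l) for l in clause) + " = 0"

def booleanity_equation(var):
    """Restricts var to {0,1}: x * (1 - x) = 0."""
    return f"{var} * (1 - {var}) = 0"

clause = [("x1", True), ("x2", False), ("x3", True)]
print(clause_to_equation(clause))   # (1 - x1) * x2 * (1 - x3) = 0
print(booleanity_equation("x1"))    # x1 * (1 - x1) = 0
```

Note that a clause equation is a product of three degree-1 factors, so the maximal degree of the resulting equations is 3.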
QS is NP-hard According to the two previous reductions: 3-SAT ≤p Solvability[O(1), F] ≤p QS[O(1), F].
Gap-QS Def (Gap-QS[D, F, ε]): Instance: a set of n quadratic equations over F with at most D variables each. Problem: to distinguish between the following two cases: YES: there is a common solution. NO: no more than an ε-fraction of the equations can be satisfied simultaneously.
Gap-QS and PCP Def: L ∈ PCP[D, V, ε] if there is a polynomial-time algorithm which, for any input x, produces a set of efficient Boolean functions over variables of range 2^V, each depending on at most D variables, so that: x ∈ L iff there exists an assignment to the variables which satisfies all the functions; x ∉ L iff no assignment can satisfy more than an ε-fraction of the functions. Claim: Gap-QS[D, F, ε] ∈ PCP[D, log|F|, ε]. Given a quadratic equations system, take its variables (with values in F) as the proof variables, and for each quadratic polynomial pi(x1, ..., xD) add the Boolean function φi(a1, ..., aD) ≡ (pi(a1, ..., aD) = 0).
Gap-QS and PCP • Therefore, every language which is efficiently reducible to Gap-QS[D, F, ε] is also in PCP[D, log|F|, ε]. • Thus, proving Gap-QS[D, F, ε] NP-hard also proves the PCP[D, log|F|, ε] characterization of NP. • And indeed our goal henceforth will be proving Gap-QS[D, F, ε] NP-hard for the best (smallest) D and ε we can.
Gap-QS[n, F, 2/|F|] is NP-hard Proof: by reduction from QS[O(1), F]. Given an instance of QS[O(1), F], i.e. degree-2 polynomials p1, p2, p3, ..., pn, view each assignment ρ through its vector of values (p1(ρ), ..., pn(ρ)): a satisfying assignment yields the all-zero vector 0 0 0 . . . 0, while a non-satisfying assignment yields some non-zero vector, e.g. 0 3 7 . . . 0, which may nevertheless still have almost all of its entries zero.
Gap-QS[n, F, 2/|F|] is NP-hard Transformation: in order to have a gap we need an efficient degree-preserving transformation taking the polynomials p1, p2, p3, ..., pn to new polynomials p1', p2', p3', ..., pm', so that any non-satisfying assignment satisfies only few of the new polynomials; its new value vector, e.g. 0 2 4 . . . 3, must have few zero entries.
Gap-QS[n, F, 2/|F|] is NP-hard For such an efficient degree-preserving transformation E it must hold that every non-zero value vector is mapped to a vector with only a small fraction of zero entries; equivalently, any two distinct value vectors are mapped far apart. Thus E is an error-correcting code! We shall now see examples of degree-preserving transformations which are also error-correcting codes:
The linear transformation: multiplication by a matrix. The vector of polynomials p = (p1, p2, ..., pn) is multiplied by an n×m matrix of scalars (cij), with columns c1, ..., cm, giving (p·c1, ..., p·cm). Each entry p·cj is an inner product, i.e. a linear combination of the polynomials, hence degree-preserving; the transformation is poly-time if m = n^c.
The linear transformation: multiplication by a matrix. The same picture for values: if e = (e1, e2, ..., en) is the vector of values of the polynomials under some assignment, then (e·c1, ..., e·cm) = eA is the vector of values of the new polynomials under the same assignment; in particular it is the zero vector if e = 0^n.
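A tiny sketch of this value-side picture (the field size q = 7, the dimensions, and the specific matrix are arbitrary choices for illustration):

```python
import random

# Multiplying the value vector v = (p_1(a), ..., p_n(a)) by a matrix A
# over F = Z_q: a satisfying assignment (v = 0^n) still maps to the
# all-zero vector, since the transformation is linear.
q = 7                      # assumed field size
n, m = 4, 16               # assumed dimensions
random.seed(0)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def transform(v):
    """Compute vA over Z_q: entry j is the inner product of v with column j."""
    return [sum(v[i] * A[i][j] for i in range(n)) % q for j in range(m)]

print(transform([0, 0, 0, 0]))     # the zero vector: all m entries are 0
nonzero = transform([1, 0, 3, 0])  # a non-zero v generally hits few zeros
print(sum(1 for e in nonzero if e == 0) / m)
```

How few zeros a non-zero v can be guaranteed to hit is exactly the error-correcting-code question the next slides answer.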
What’s Ahead • We proceed with several examples of linear error-correcting codes: • the Reed-Solomon code • a random matrix • and finally even a code which suits our needs...
Using Reed-Solomon Codes • Define the matrix A with columns indexed by the field elements (so m = |F|), such that for any 0 ≤ i ≤ |F|-1, (vA)i = P(i), where P is the unique degree-(n-1) univariate polynomial with P(j) = vj for all 0 ≤ j ≤ n-1. (The matrix entries are really Lagrange's interpolation formula in disguise...) • Therefore, for any v ≠ 0^n, the fraction of zeroes in vA is bounded by (n-1)/|F|, since a non-zero polynomial of degree at most n-1 has at most n-1 roots. • Using multivariate polynomials we can even get ε = O(log n / |F|).
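This bound can be checked directly on a toy field (the parameters q = 17, n = 4 and the interpolate-then-evaluate formulation are our illustrative choices; multiplying by the matrix A is the same computation):

```python
# Reed-Solomon transformation over a small prime field: encode
# v = (v_0, ..., v_{n-1}) as the evaluations, at *every* field element,
# of the unique degree <= n-1 polynomial P with P(j) = v_j.
# A non-zero P has at most n-1 roots, so the zero fraction of the
# codeword is at most (n-1)/|F|.
q, n = 17, 4               # assumed parameters: |F| = 17, n = 4

def interpolate_and_evaluate(v):
    """Lagrange-interpolate P through (j, v_j), j = 0..n-1, then
    evaluate P at all field elements (this equals v times the RS matrix)."""
    def P(x):
        total = 0
        for j in range(n):
            term = v[j]
            for k in range(n):
                if k != j:
                    # divide by (j - k) via Fermat inverse mod q
                    term = term * (x - k) * pow(j - k, q - 2, q) % q
            total = (total + term) % q
        return total
    return [P(x) for x in range(q)]

for v in [(1, 0, 0, 0), (3, 1, 4, 1), (0, 0, 0, 2)]:
    code = interpolate_and_evaluate(v)
    zeros = sum(1 for e in code if e == 0)
    assert zeros <= n - 1           # at most n-1 roots out of q positions
    print(v, zeros / q)
```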
Using a Random Matrix Lemma: a random n×m matrix A over F satisfies with high probability: for every v ≠ 0^n, the fraction of zeros in the output vector vA is at most 2/|F|.
Using a Random Matrix Proof (by the probabilistic method): Let v ≠ 0^n, v ∈ F^n. Because the inner product of v and a uniformly random vector is uniformly distributed over F, each entry (vA)i is 0 with probability |F|^-1, independently across columns. Hence Xv = |{i : (vA)i = 0}| is a binomial random variable with parameters m and |F|^-1. For this random variable we can bound the probability Pr[Xv ≥ 2m|F|^-1] (the probability that the fraction of zeros exceeds 2|F|^-1).
Using a Random Matrix The Chernoff bound: for a binomial random variable X with parameters m and |F|^-1 (mean μ = m|F|^-1), Pr[X ≥ 2μ] ≤ e^(-μ/3). Hence: Pr[Xv ≥ 2m|F|^-1] ≤ e^(-m/(3|F|)).
Using a Random Matrix The union bound: the probability of a union of events is smaller than or equal to the sum of their probabilities. Overall, the number of different vectors v is |F|^n. Hence, according to the union bound, the probability that some v ≠ 0^n violates the lemma is at most |F|^n · e^(-m/(3|F|)), and this probability is smaller than 1 for m = O(n·|F|·log|F|). Hence, for such m, a random matrix satisfies the lemma with positive probability.
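An empirical check of the lemma on toy parameters (a sketch, not the proof; q = 5, n = 3 and the seed are arbitrary, and m is taken generously large for this tiny instance):

```python
import random
from itertools import product

# Check: over Z_q, a random n x m matrix w.h.p. maps every nonzero v
# to a vector whose fraction of zero entries is at most 2/q, once
# m = O(n q log q).  Here we just enumerate all nonzero v.
q, n = 5, 3
m = 8 * n * q              # generous m for this toy size
random.seed(1)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

worst = 0.0
for v in product(range(q), repeat=n):
    if all(x == 0 for x in v):
        continue                       # the lemma only concerns v != 0^n
    vA = [sum(v[i] * A[i][j] for i in range(n)) % q for j in range(m)]
    worst = max(worst, sum(1 for e in vA if e == 0) / m)
print(worst)                           # expected to be at most 2/q = 0.4
```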
Deterministic Construction Define the matrix A (n rows, m columns) deterministically: Assume F = Zp. Let k = log_p(n) + 1 (assume w.l.o.g. k ∈ N). Let GF(p^k) be the dimension-k extension field of F. Associate each row with a power i, 1 ≤ i ≤ p^(k-1); hence n = p^(k-1). Associate each column with a pair (x, y) ∈ GF(p^k) × GF(p^k); hence m = p^(2k).
Deterministic Construction And define A(i, (x, y)) = ⟨x^i, y⟩, the inner product over Zp of x^i and y viewed as coordinate vectors in Zp^k. (So A has n = p^(k-1) rows and m = p^(2k) columns.)
Analysis • For any vector v ∈ F^n, v ≠ 0^n, and any column (x, y) ∈ GF(p^k) × GF(p^k): (vA)(x,y) = Σi vi⟨x^i, y⟩ = ⟨G(x), y⟩, where G(x) = Σi vi·x^i is a non-zero polynomial of degree at most n = p^(k-1). • The number of zeroes in vA where v ≠ 0^n is |{(x,y) : G(x) = 0}| + |{(x,y) : G(x) ≠ 0, ⟨G(x), y⟩ = 0}| ≤ n·p^k + p^k·p^(k-1) ≤ 2p^(2k-1). • And thus the fraction of zeroes is at most 2p^(2k-1)/p^(2k) = 2/p = 2/|F|.
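The analysis can be verified exhaustively on the smallest interesting instance. This is a toy instantiation under our own modeling assumptions: p = 3, k = 2, and GF(9) represented as Z3[t]/(t² + 1):

```python
from itertools import product

# Deterministic construction with p = 3, k = 2: rows indexed by
# i = 1..p^(k-1) = 3, columns by pairs (x, y) in GF(9) x GF(9), and
# A(i, (x, y)) = <x^i, y>, the Z_3 inner product of coefficient vectors.
p, k = 3, 2
n = p ** (k - 1)                        # 3 rows
F9 = list(product(range(p), repeat=k))  # GF(9) elements as (a, b) = a + b*t

def mul(u, w):
    # (a + bt)(c + dt) = (ac - bd) + (ad + bc)t, since t^2 = -1 mod 3
    a, b = u
    c, d = w
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(x, i):
    r = (1, 0)
    for _ in range(i):
        r = mul(r, x)
    return r

def ip(u, w):                           # inner product over Z_3
    return (u[0] * w[0] + u[1] * w[1]) % p

worst = 0.0
for v in product(range(p), repeat=n):   # all v in F^n
    if all(c == 0 for c in v):
        continue
    zeros = sum(1 for x in F9 for y in F9
                if sum(v[i - 1] * ip(power(x, i), y)
                       for i in range(1, n + 1)) % p == 0)
    worst = max(worst, zeros / (p ** (2 * k)))
print(worst)
```

By the analysis above, the printed worst-case zero fraction must be at most 2/p = 2/3.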
Summary of the Reduction Given an instance {p1, ..., pn} of QS[O(1), F], we found a matrix A which satisfies: for every v ≠ 0^n, |{i : (vA)i = 0}| / m < 2|F|^-1. Hence {p1, ..., pn} ∈ QS[O(1), F] if and only if the transformed system pA (the m linear combinations of p1, ..., pn given by A's columns) is a YES instance of Gap-QS[O(n), F, 2|F|^-1]; note each new polynomial combines all n original ones, hence depends on O(n) variables. This proves Gap-QS[O(n), F, 2|F|^-1] is NP-hard!
Hitting the Road This proves a PCP characterization with D = O(n) (hardly a “local” test...). Eventually we’ll prove a characterization with D = O(1) ([DFKRS]), using the results presented here as our starting point.