
The PCP starting point




  1. Quadratic Solvability: The PCP starting point

  2. Overview • In this lecture we’ll present the Quadratic Solvability problem. • We’ll see this problem is closely related to PCP. • And even use it to prove a (very very weak...) PCP characterization of NP.

  3. Quadratic Solvability Def (QS[D, F]): Instance: a set of n quadratic equations over a field F, each with at most D variables (or equally: a set of D-variate, total-degree-2 polynomials). Problem: to decide if there is a common solution. Example: F = Z2, D = 1: y = 0, x² + x = 0, x² + 1 = 0; a common solution is x = 1, y = 0.
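The tiny example above can be checked by brute force; a minimal sketch, assuming F = Z2 and the three equations as listed:

```python
from itertools import product

# Brute-force check of the slide's QS example over F = Z_2:
#   y = 0,  x^2 + x = 0,  x^2 + 1 = 0
# Each equation is represented as a function that must evaluate to 0.
equations = [
    lambda x, y: y % 2,            # y = 0
    lambda x, y: (x * x + x) % 2,  # x^2 + x = 0 (holds for every x in Z_2)
    lambda x, y: (x * x + 1) % 2,  # x^2 + 1 = 0 (forces x = 1)
]

solutions = [(x, y) for x, y in product(range(2), repeat=2)
             if all(eq(x, y) == 0 for eq in equations)]
print(solutions)  # [(1, 0)]
```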

  4. Solvability A generalization of this problem: Def (Solvability[D, F]): Instance: a set of n polynomials over F, each with at most D variables and with degree at most n in each one of the variables. Problem: to decide if there is a common root.

  5. Solvability is Reducible to QS: replace high-degree monomials by fresh variables with quadratic defining equations. For example, y²x² + x²t + tˡz + z + 1 = 0 becomes w1w2 + w2t + w3z + z + 1 = 0 with the defining equations w1 = y², w2 = x², w3 = tˡ (iterating the substitution for higher powers such as tˡ). The parameters (D, F) don't change (assuming D > 2)! Could we use the same “trick” to show Solvability is reducible to Linear Solvability?
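The substitution step can be sketched as follows; this is a simplified greedy variant in which coefficients and signs are omitted, monomials are tuples of variable names, and the fresh names w0, w1, ... are illustrative:

```python
import itertools

def reduce_to_quadratic(equations):
    """Repeatedly replace a product of two variables inside a monomial of
    degree > 2 by a fresh variable w, adding the quadratic defining
    equation w = a*b (encoded as an equation with monomials (w,) and
    (a, b); coefficients and signs are omitted in this sketch)."""
    fresh = itertools.count()
    defs = {}   # (a, b) -> w: reuse one fresh variable per product
    out = []
    for eq in [list(e) for e in equations]:
        i = 0
        while i < len(eq):
            mono = eq[i]
            if len(mono) > 2:
                a, b = mono[0], mono[1]
                if (a, b) not in defs:
                    defs[(a, b)] = f"w{next(fresh)}"
                    out.append([(defs[(a, b)],), (a, b)])  # w = a*b
                eq[i] = (defs[(a, b)],) + mono[2:]
            else:
                i += 1
        out.append(eq)
    return out

# y^2*x^2 + x^2*t + z + 1 = 0  ->  every monomial now has degree <= 2
system = [[("y", "y", "x", "x"), ("x", "x", "t"), ("z",), ()]]
quad = reduce_to_quadratic(system)
assert all(len(m) <= 2 for eq in quad for m in eq)
```

Each defining equation has degree 2 and a constant number of variables, which is why D and F are unaffected.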

  6. QS is NP-hard Let us prove that QS is NP-hard by reducing 3-SAT to it: Def (3-SAT): Instance: a 3CNF formula (ℓ1 ∨ ℓ2 ∨ ℓ3) ∧ ... — a conjunction of m/3 clauses of three literals each, where each literal ℓi ∈ {xj, ¬xj} for some 1 ≤ j ≤ n. Problem: to decide if this formula is satisfiable.

  7. QS is NP-hard Given an instance of 3-SAT, translate each literal by Tr[xi] = 1 − xi and Tr[¬xi] = xi, and turn each clause (ℓi ∨ ℓi+1 ∨ ℓi+2) into the equation Tr[ℓi] · Tr[ℓi+1] · Tr[ℓi+2] = 0. The corresponding instance of Solvability is the set of all resulting polynomials (which, assuming the variables are only assigned Boolean values, is equivalent).

  8. QS is NP-hard In order to remove the assumption we need to add, for every variable xi, the equation xi · (1 − xi) = 0. This concludes the description of a reduction from 3-SAT to Solvability[O(1), F] for any field F. What is the maximal degree of the resulting equations?
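The clause translation can be checked exhaustively; a sketch with literals encoded as signed 1-based indices (an encoding chosen here for illustration):

```python
from itertools import product

def clause_poly(clause, a):
    """Value of Tr[l1]*Tr[l2]*Tr[l3] under assignment a: a literal is a
    signed 1-based index (+i for x_i, -i for NOT x_i), Tr[x_i] = 1 - x_i
    and Tr[NOT x_i] = x_i, so the product vanishes iff some literal holds."""
    val = 1
    for lit in clause:
        x = a[abs(lit) - 1]
        val *= (1 - x) if lit > 0 else x
    return val

# Check (x1 or x2 or not x3): the polynomial is 0 exactly on satisfying rows.
clause = [1, 2, -3]
for a in product(range(2), repeat=3):
    sat = any(a[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)
    assert (clause_poly(clause, a) == 0) == sat
```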

  9. QS is NP-hard According to the two previous reductions: 3-SAT ≤ Solvability ≤ QS, hence QS is NP-hard.

  10. Gap-QS Def (Gap-QS[D, F, ε]): Instance: a set of n quadratic equations over F with at most D variables each. Problem: to distinguish between the following two cases: YES: there is a common solution. NO: no more than an ε fraction of the equations can be satisfied simultaneously.

  11. Gap-QS and PCP Def: L ∈ PCP[D, V, ε] if there is a polynomial-time algorithm which, for any input x, produces a set of efficient Boolean functions over variables of range 2^V, each depending on at most D variables, so that: x ∈ L iff there exists an assignment to the variables which satisfies all the functions; x ∉ L iff no assignment can satisfy more than an ε-fraction of the functions. Then Gap-QS[D, F, ε] ∈ PCP[D, log|F|, ε]: for each quadratic polynomial pi(x1,...,xD) of a quadratic-equations system, add the Boolean function φi(a1,...,aD) ≡ (pi(a1,...,aD) = 0), where the variables of the input system take values in F.

  12. Gap-QS and PCP • Therefore, every language which is efficiently reducible to Gap-QS[D, F, ε] is also in PCP[D, log|F|, ε]. • Thus, proving Gap-QS[D, F, ε] is NP-hard also proves the PCP[D, log|F|, ε] characterization of NP. • And indeed our goal henceforth will be proving Gap-QS[D, F, ε] is NP-hard for the best D, ε and F we can.

  13. Gap-QS[n, F, 2/|F|] is NP-hard Proof: by reduction from QS[O(1), F]. An instance of QS[O(1), F] is a list of degree-2 polynomials p1, p2, p3, ..., pn. Under a satisfying assignment the vector of their values is 0 0 0 ... 0; under a non-satisfying assignment some pj takes a nonzero value (e.g. the values vector 0 3 7 ... 0), yet possibly almost all equations are still satisfied.

  14. Gap-QS[O(1), F] is NP-hard Transformation: in order to have a gap we need an efficient degree-preserving transformation of the polynomials p1, ..., pn into polynomials p1', p2', p3', ..., pm', so that any non-satisfying assignment satisfies only few of the new polynomials (e.g. values vector 0 2 4 ... 3, mostly nonzero).

  15. Gap-QS[O(1), F] is NP-hard For such an efficient degree-preserving transformation E it must hold that any values vector with at least one nonzero entry is mapped to a values vector with only a small fraction of zero entries. Thus E is an error correcting code! We shall now see examples of degree-preserving transformations which are also error correcting codes:

  16. The linear transformation: multiplication by a matrix. The row vector of polynomials p = (p1, p2, ..., pn) is multiplied by a matrix of scalars (cij) with columns c1, ..., cm, yielding the polynomials p·c1, ..., p·cm: each new polynomial is an inner product, i.e. a linear combination of the pi with scalar coefficients. This is poly-time if m = n^c.

  17. The linear transformation: multiplication by a matrix. On the values side: if e = (e1, e2, ..., en) is the vector of values of the polynomials under some assignment, then (e·c1, ..., e·cm) is the vector of values of the new polynomials under the same assignment, and it is a zero vector if e = 0^n.

  18. What’s Ahead • We proceed with several examples of linear error correcting codes: • Reed-Solomon code • Random matrix • And finally even a code which suits our needs...

  19. Using Reed-Solomon Codes • Define the matrix A so that for any 0 ≤ i ≤ |F|−1, (vA)i is P(i), where P is the unique degree-(n−1) univariate polynomial for which P(i) = vi for all 0 ≤ i ≤ n−1. That's really Lagrange's formula in disguise... • Therefore, for any nonzero v, the fraction of zeroes in vA is bounded by (n−1)/|F|, since a nonzero degree-(n−1) polynomial has at most n−1 roots. Using multivariate polynomials we can even get ε = O(log n/|F|).
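The Reed-Solomon matrix can be built explicitly from the Lagrange basis; a sketch over Z_p for a small prime p (the parameters p = 13, n = 4 and the vector v are illustrative):

```python
# Reed-Solomon as a matrix: column x of A holds the Lagrange basis values
# L_0(x), ..., L_{n-1}(x), so that (vA)_x = P(x) where P(i) = v_i, 0 <= i < n.
p, n = 13, 4   # assumed parameters: p prime, n <= p

def lagrange_basis(i, x):
    """L_i(x) over Z_p with nodes 0..n-1: L_i(i) = 1 and L_i(j) = 0 for j != i."""
    num, den = 1, 1
    for j in range(n):
        if j != i:
            num = num * (x - j) % p
            den = den * (i - j) % p
    return num * pow(den, p - 2, p) % p   # division via Fermat's little theorem

A = [[lagrange_basis(i, x) for x in range(p)] for i in range(n)]

v = [3, 0, 7, 1]   # an arbitrary nonzero vector
code = [sum(v[i] * A[i][x] for i in range(n)) % p for x in range(p)]
assert code[:n] == v                       # vA extends v: (vA)_i = P(i) = v_i
assert sum(c == 0 for c in code) <= n - 1  # nonzero degree-(n-1) P has < n roots
```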

  20. Using a Random Matrix Lem: A random matrix A (n×m) satisfies w.h.p.: for every v ≠ 0^n, the fraction of zeros in the output vector vA is at most 2|F|^−1.

  21. Using a Random Matrix Proof: (by the probabilistic method) Let v ≠ 0^n. Because the inner product of v and a random vector is uniformly random in F, Pr[(vA)i = 0] = |F|^−1 for each column independently. Hence, |{i : (vA)i = 0}| (denoted Xv) is a binomial random variable with parameters m and |F|^−1. For this random variable, we can bound the probability Pr[Xv ≥ 2m|F|^−1] (the probability that the fraction of zeros exceeds 2|F|^−1).

  22. Using a Random Matrix The Chernoff bound: for a binomial random variable Xv with parameters m and |F|^−1, Pr[Xv ≥ 2m|F|^−1] ≤ e^−Ω(m/|F|).

  23. Using a Random Matrix Overall, the number of different vectors v is ≤ |F|^n. Hence, according to the union bound (the probability of a union of events is smaller than or equal to the sum of their probabilities), we can multiply the previous probability by the number of different vectors v to obtain a bound on the probability that the matrix fails for some v: |F|^n · e^−Ω(m/|F|). And this probability is smaller than 1 for m = O(n|F|log|F|). Hence, for such m, a random matrix satisfies the lemma with positive probability. ∎
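The final arithmetic can be sanity-checked numerically; a sketch in which the Chernoff constant 3 and the sample values of q and n are assumptions, not the slides' exact parameters:

```python
import math

# For X_v ~ Bin(m, 1/q), a standard multiplicative Chernoff bound gives
#   Pr[X_v >= 2m/q] <= exp(-m/(3q)),
# so by the union bound over the at most q^n vectors v it suffices that
#   q**n * exp(-m/(3q)) < 1,  which holds once m > 3*q*n*ln(q).
q, n = 16, 10
m = 3 * q * n * math.ceil(math.log(q)) + 1   # m = O(n*q*log q)
bound = q**n * math.exp(-m / (3 * q))
print(m, bound)
assert bound < 1   # so a good matrix exists with positive probability
```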

  24. Deterministic Construction Define the matrix A (n×m) as follows: Assume F = Zp. Let k = logp n + 1 (assume w.l.o.g. k ∈ N). Let F_{p^k} be the dimension-k extension field of F. Associate each row with an index 1 ≤ i ≤ p^{k−1}; hence n = p^{k−1}. Associate each column with a pair (x, y) ∈ F_{p^k} × F_{p^k}; hence m = p^{2k}.

  25. Deterministic Construction (A has n = p^{k−1} rows and m = p^{2k} columns.) Define A(i, (x, y)) = <x^i, y>, the inner product over Zp of x^i and y viewed as dimension-k vectors.

  26. Analysis • For any vector v ∈ F^n and any column (x, y), (vA)_(x,y) = Σi vi<x^i, y> = <G(x), y>, where G(x) = Σi vi x^i. • The number of zeroes in vA, where v ≠ 0^n, is |{(x,y) : G(x) = 0}| + |{(x,y) : G(x) ≠ 0 ∧ <G(x), y> = 0}| ≤ p^{k−1}·p^k + p^k·p^{k−1}, since the nonzero polynomial G has degree ≤ p^{k−1} and hence at most p^{k−1} roots, and for each x with G(x) ≠ 0 exactly p^{k−1} of the p^k vectors y satisfy <G(x), y> = 0. • And thus the fraction of zeroes ≤ 2p^{2k−1}/p^{2k} = 2|F|^−1.
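The key identity (vA)_(x,y) = &lt;G(x), y&gt; can be verified on a tiny instance; a sketch with p = 2, k = 2, where the 2-bit encoding of GF(4) is an implementation choice of this example:

```python
# Tiny instance of the deterministic construction: p = 2, k = 2, so F = Z_2,
# the extension field is GF(4) = Z_2[t]/(t^2 + t + 1), rows are indexed by
# 1 <= i <= p^(k-1) = 2 and columns by pairs (x, y) in GF(4) x GF(4).
def gf4_mul(a, b):
    """Multiply two GF(4) elements encoded as 2-bit ints (b1*t + b0)."""
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    if r & 4:              # reduce t^2 -> t + 1
        r ^= 0b111
    return r & 3

def gf4_pow(x, i):
    r = 1
    for _ in range(i):
        r = gf4_mul(r, x)
    return r

def ip(a, b):
    """Inner product over Z_2 of the coordinate vectors of two GF(4) elements."""
    return bin(a & b).count("1") % 2

n = 2                                                  # n = p^(k-1)
pairs = [(x, y) for x in range(4) for y in range(4)]   # m = p^(2k) = 16
A = [[ip(gf4_pow(x, i), y) for (x, y) in pairs] for i in range(1, n + 1)]

# Check (vA)_(x,y) = <G(x), y> with G(x) = x + x^2 for v = (1, 1):
v = [1, 1]
for c, (x, y) in enumerate(pairs):
    Gx = gf4_pow(x, 1) ^ gf4_pow(x, 2)                 # G(x) in GF(4)
    vA = (v[0] * A[0][c] + v[1] * A[1][c]) % 2
    assert vA == ip(Gx, y)
```

The identity holds because the inner product is linear in its first argument, so summing rows of A corresponds to summing the powers x^i inside the inner product.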

  27. Summary of the Reduction Given an instance {p1, ..., pn} of QS[O(1), F], we found a matrix A which satisfies: ∀v ≠ 0^n: |{i : (vA)i = 0}|/m < 2|F|^−1. Hence: {p1, ..., pn} is a YES instance of QS[O(1), F] if and only if the transformed system pA is a YES instance of Gap-QS[O(n), F, 2|F|^−1] (each new polynomial is a linear combination of all n input polynomials, hence depends on O(n) variables). This proves Gap-QS[O(n), F, 2|F|^−1] is NP-hard!!

  28. Hitting the Road This proves a PCP characterization with D=O(n) (hardly a “local” test...). Eventually we’ll prove a characterization with D=O(1) ([DFKRS]) using the results presented here as our starting point.
