Hardness Amplification within NP against Deterministic Algorithms
Why Hardness Amplification • Goal: Show there are hard problems in NP. • Lower bounds out of reach. • Cryptography and derandomization require average-case hardness. • Revised Goal: Relate various kinds of hardness assumptions. • Hardness Amplification: Start with mild hardness, amplify.
Hardness Amplification • Generic Amplification Theorem: If there are problems in class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z. • Examples: A ∈ {NP, EXP, PSPACE}; Z ∈ {P/poly, BPP, P}.
PSPACE versus P/poly, BPP • Long line of work: Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (BPP), then there are problems that are ½ + ε hard for P/poly (BPP). Yao, Nisan-Wigderson, Babai-Fortnow-Nisan-Wigderson, Impagliazzo, Impagliazzo-Wigderson1, Impagliazzo-Wigderson2, Sudan-Trevisan-Vadhan, Trevisan-Vadhan, Impagliazzo-Jaiswal-Kabanets, Impagliazzo-Jaiswal-Kabanets-Wigderson.
NP versus P/poly • O’Donnell. Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems that are ½ + ε hard. • Starts from an average-case assumption. • Healy-Vadhan-Viola.
NP versus BPP • Trevisan’03. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ¾ + ε hard.
NP versus BPP • Trevisan’05. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ½ + ε hard. • Buresh-Oppenheim-Kabanets-Santhanam: alternate proof via monotone codes. • Optimal up to ε.
Our results: Amplification against P. • Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard. • Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard. • Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE. • Goldreich-Wigderson: Unconditional hardness for EXP against P. • Here δ = 1/(log n)^100 and ε = 1/n^100.
Outline of This Talk: • Amplification via Decoding. • Deterministic Local Decoding. • Amplification within NP.
Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan] • Diagram: f (mildly hard) → Encode → g (wildly hard); approx. to g → Decode → f.
Amplification via Decoding. Case Study: PSPACE versus BPP. • Diagram: f (mildly hard) → Encode, in PSPACE → g (wildly hard). • f’s table has size 2^n. • g’s table has size 2^(n^2). • Encoding in space n^100.
Amplification via Decoding. Case Study: PSPACE versus BPP. • Diagram: approx. to g → Decode, in BPP → f. • Randomized local decoder. • List-decoding beyond ¼ error.
Amplification via Decoding. Case Study: NP versus BPP. • Diagram: f (mildly hard) → Encode, in NP → g (wildly hard). • g is a monotone function M of f. • M is computable in NTIME(n^100). • M needs to be noise-sensitive.
Amplification via Decoding. Case Study: NP versus BPP. • Diagram: approx. to g → Decode, in BPP → approx. to f. • Randomized local decoder. • Monotone codes are bad codes. • Can only approximate f.
Outline of This Talk: • Amplification via Decoding. • Deterministic Local Decoding. • Amplification within NP.
Deterministic Amplification. • Deterministic local decoding? • Diagram: approx. to g → Decode, in P → f.
Deterministic Amplification. Deterministic local decoding? • Can force an error on any bit. • Need near-linear length encoding: the received word has length 2^n · n^100, the message has length 2^n. • Monotone codes for NP.
Deterministic Local Decoding … up to the unique decoding radius. • Deterministic local decoding to 1 - δ agreement with f from ¾ + ε agreement with g. • Monotone code construction with similar parameters. • Main tool: ABNNR codes + GMD decoding. [Guruswami-Indyk, Akavia-Venkatesan] • Open Problem: Go beyond unique decoding.
The ABNNR Construction. • Expander graph. • 2^n vertices. • Degree n^100. • Start with a binary code with small distance. • Gives a code of large distance over a large alphabet.
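For concreteness, a minimal Python sketch of the ABNNR encoding map: each right vertex of the bipartite graph reads off the bits at its left neighbors and outputs them as one symbol over the alphabet {0,1}^d. The graph below is an arbitrary toy bipartite graph rather than an actual expander, and the function and variable names are illustrative, not from the paper.

```python
from typing import List

def abnnr_encode(bits: List[int], neighbors: List[List[int]]) -> List[tuple]:
    """The j-th output symbol is the tuple of message bits sitting at the
    left neighbors of right vertex j, i.e. a symbol in {0,1}^d."""
    return [tuple(bits[v] for v in nbrs) for nbrs in neighbors]

# Toy usage: 4 left vertices, 3 right vertices of degree 2.
bits = [1, 0, 0, 1]
neighbors = [[0, 1], [1, 2], [2, 3]]
print(abnnr_encode(bits, neighbors))  # [(1, 0), (0, 0), (0, 1)]
```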
Concatenated ABNNR Codes. Inner code of distance ½. • Binary code of distance ½. • [GI]: ¼ error, not local. • [T]: 1/8 error, local.
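A sketch of the concatenation step. The slides only ask for some binary inner code of relative distance ½; the Hadamard code used below is my stand-in choice, and the names are illustrative.

```python
from typing import List, Tuple

def hadamard_encode(symbol: Tuple[int, ...]) -> List[int]:
    """Toy inner code of relative distance 1/2: the Hadamard code, listing the
    inner product <symbol, a> mod 2 over all a in {0,1}^d."""
    d = len(symbol)
    return [sum(s * ((a >> i) & 1) for i, s in enumerate(symbol)) % 2
            for a in range(2 ** d)]

def concatenated_encode(abnnr_symbols: List[Tuple[int, ...]]) -> List[List[int]]:
    """Re-encode each d-bit ABNNR symbol as a binary block."""
    return [hadamard_encode(sym) for sym in abnnr_symbols]

print(concatenated_encode([(1, 0), (0, 0), (0, 1)]))
```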
Decoding ABNNR Codes. • Decode the inner codes. • Works if error < ¼. • Fails if error > ¼.
Decoding ABNNR Codes. • Majority vote on the LHS. • [Trevisan]: Corrects a 1/8 fraction of errors.
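A toy sketch of this two-step decoder (not the paper's exact algorithm): decode each received inner block to a d-bit symbol, then set every left bit by a plain majority vote over the right vertices adjacent to it. `inner_decode` is a placeholder for whichever inner decoder is used.

```python
from collections import defaultdict
from typing import Callable, List, Tuple

def decode_majority(received_blocks: List[List[int]],
                    neighbors: List[List[int]],
                    inner_decode: Callable[[List[int]], Tuple[int, ...]],
                    num_left: int) -> List[int]:
    """Step 1: decode each inner block to a d-bit symbol.
    Step 2: plain majority vote on the left-hand side."""
    votes = defaultdict(list)
    for j, block in enumerate(received_blocks):
        symbol = inner_decode(block)            # decoded symbol at right vertex j
        for pos, v in enumerate(neighbors[j]):  # coordinate `pos` is a vote for left bit v
            votes[v].append(symbol[pos])
    return [1 if 2 * sum(votes[v]) > len(votes[v]) else 0 for v in range(num_left)]
```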
GMD decoding [Forney’67]. Each inner decoding also returns a confidence c ∈ [0,1]. • If decoding succeeds, the fraction of errors in the block is in [0, ¼]. • If 0 error, confidence is 1. • If ¼ error, confidence is 0. • c = 1 – 4·(error fraction). • Could return a wrong answer with high confidence… … but this requires error close to ½.
GMD Decoding for ABNNR Codes. • GMD decoding: Pick threshold, erase, decode. Non-local. • Our approach: Weighted Majority. • Thm: Corrects a ¼ fraction of errors locally.
GMD Decoding for ABNNR Codes. • Thm: GMD decoding corrects a ¼ fraction of errors. • Proof Sketch: • Globally, good nodes have more confidence than bad nodes. • Locally, this holds for most neighborhoods of vertices on the LHS. • Proof similar to the Expander Mixing Lemma.
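A sketch of the weighted-majority variant described on these slides, with illustrative names: each right vertex re-encodes its decoded symbol, measures the disagreement with its received block to get a confidence c = max(0, 1 − 4·(error fraction)), and every left bit is set by a confidence-weighted vote in place of Forney's threshold-erase-decode loop.

```python
from typing import Callable, List, Tuple

def decode_weighted_majority(received_blocks: List[List[int]],
                             neighbors: List[List[int]],
                             inner_decode: Callable[[List[int]], Tuple[int, ...]],
                             inner_encode: Callable[[Tuple[int, ...]], List[int]],
                             num_left: int) -> List[int]:
    """Confidence-weighted majority vote on the left-hand side."""
    weight = [0.0] * num_left                    # signed, weighted vote per left bit
    for j, block in enumerate(received_blocks):
        symbol = inner_decode(block)             # decoded d-bit symbol
        codeword = inner_encode(symbol)          # re-encode to measure agreement
        rho = sum(a != b for a, b in zip(block, codeword)) / len(block)
        c = max(0.0, 1.0 - 4.0 * rho)            # confidence of right vertex j
        for pos, v in enumerate(neighbors[j]):
            weight[v] += c if symbol[pos] == 1 else -c
    return [1 if w > 0 else 0 for w in weight]
```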
Outline of This Talk: • Amplification via Decoding. • Deterministic Local Decoding. • Amplification within NP. • Finding an inner monotone code [BOKS]. • Implementing GMD decoding.
The BOKS construction. • T(x): Sample an r-tuple from x, apply the Tribes function. • If x, y are balanced and Δ(x, y) > δ, then Δ(T(x), T(y)) ≈ ½. • If x, y are very close, so are T(x), T(y). • Decoding: brute force. • Diagram: x, of length k, maps to T(x), of length k^r.
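A toy illustration of one output coordinate of a Tribes-based monotone map in the spirit of the [BOKS] construction: apply the monotone Tribes function (an OR of ANDs over consecutive blocks) to an r-tuple sampled from x. The sampling rule and the block size below are placeholders; the actual construction fixes both carefully.

```python
import random
from typing import List

def tribes(bits: List[int], block_size: int) -> int:
    """Monotone Tribes function: OR of ANDs over consecutive blocks."""
    return int(any(all(bits[i:i + block_size])
                   for i in range(0, len(bits), block_size)))

def monotone_output_bit(x: List[int], r: int, block_size: int, rng=random) -> int:
    """One coordinate of T(x): Tribes applied to a sampled r-tuple of bits of x."""
    sample = [x[rng.randrange(len(x))] for _ in range(r)]
    return tribes(sample, block_size)

x = [1, 0, 1, 1, 0, 0, 1, 0, 1]
print(monotone_output_bit(x, r=8, block_size=2))
```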
GMD Decoding for Monotone codes. • Start with a balanced f, apply concatenated ABNNR. • Inner decoder returns the closest balanced message. • Apply GMD decoding. • Thm: Decoder corrects a ¼ fraction of errors, approximately. • Analysis becomes harder.
GMD Decoding for Monotone codes. • Inner decoder finds the closest balanced message. • Even with 0 errors, the decoder need not return the original message. • Good nodes have few errors, bad nodes have many. • Thm: Decoder corrects a ¼ fraction of errors, approximately.
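A brute-force sketch of the balanced inner decoder mentioned above, with illustrative names: search over all balanced k-bit messages for the one whose encoding lies closest in Hamming distance to the received block ("Decoding: brute force", as on the BOKS slide).

```python
from itertools import combinations
from typing import Callable, List

def closest_balanced_message(block: List[int], k: int,
                             encode: Callable[[List[int]], List[int]]) -> List[int]:
    """Return the balanced k-bit message (exactly k/2 ones) whose encoding
    under `encode` is nearest in Hamming distance to the received block."""
    best, best_dist = None, None
    for ones in combinations(range(k), k // 2):
        ones = set(ones)
        msg = [1 if i in ones else 0 for i in range(k)]
        dist = sum(a != b for a, b in zip(encode(msg), block))
        if best_dist is None or dist < best_dist:
            best, best_dist = msg, dist
    return best
```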
Beyond Unique Decoding… • Deterministic local list-decoder: a set L of machines such that, for any received word, every nearby codeword is computed by some M ∈ L. • Is this possible? Thank You!