Some Thoughts regarding Unconditional Derandomization
Oded Goldreich, Weizmann Institute of Science
RANDOM 2010
Replacing Luca Trevisan: A different perspective (although I don’t know what Luca planned to say…). Focus on what I know (although I know little…). Speculations rather than open problems. I will focus on a new result by which “BPP = P if and only if there exist suitable pseudorandom generators”. I will clarify what I mean by all these terms, but don’t be too alert: I mean the standard notions. Warning: I’ll be using BPP and P to denote classes of promise problems.
BPP = P iff there exist “suitable” pseudorandom generators. The term “pseudorandom generator” refers to a general paradigm with numerous incarnations (ranging from general-purpose PRGs (i.e., fooling any efficient observer) to special-purpose PRGs (e.g., pairwise-independence PRGs)). The common themes (and differences) relate to (1) the amount of stretching, (2) the notion of “looking random”, and (3) the complexity of (deterministic) generation (or stretching). For the purpose of derandomizing (e.g., BPP) it suffices to use PRGs that run in exponential time (i.e., exponential in the length of their input seeds), whose output looks random to linear-time observers (i.e., linear in the length of the PRG’s output). These are called canonical derandomizers.
Canonical derandomizers (recap, in more detail). Canonical derandomizers (PRGs) also come in several flavors. In all of them, the generation time is exponential (in the seed’s length); the small variations refer to the exact formulation of the pseudorandomness condition and to the stretch function. The most standard formulation refers to all (non-uniform) linear-size circuits. Also standard is a uniform formulation: for any fixed polynomial p, no probabilistic p-time algorithm can distinguish the PRG’s output from a truly random string with gap greater than 1/p. We refer to this notion. Indeed, we shall focus on exponential stretch… (The PRG’s running time, in terms of its output length, may be larger than p.)
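To pin down the uniform pseudorandomness condition above, here is one way to write it in symbols (my formalization, not verbatim from the slides): a generator G: {0,1}^k → {0,1}^{ℓ(k)}, computable in time exponential in k, is p-robust if every probabilistic p-time distinguisher D satisfies, for all sufficiently large k,

```latex
\Bigl|\;\Pr\bigl[D(G(U_k)) = 1\bigr] \;-\; \Pr\bigl[D(U_{\ell(k)}) = 1\bigr]\;\Bigr| \;\le\; \frac{1}{p(\ell(k))}
```

where U_m denotes the uniform distribution over m-bit strings, D’s running time is measured in terms of the output length ℓ(k), and exponential stretch means ℓ(k) = 2^{Ω(k)}.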
Canonical derandomizers (a sanity check). Well-known: Using canonical derandomizers of exponential stretch we can effectively put BPP in P; that is, for every problem in BPP and every polynomial p, we obtain a deterministic poly-time algorithm such that no probabilistic p-time algorithm can find (except with probability 1/p) an input on which the deterministic algorithm errs. First, combine the randomized algorithm with the PRG to obtain an effectively equivalent randomized algorithm of logarithmic randomness complexity. Then, use straightforward derandomization. NEW: We “reverse” the foregoing connection, showing that if BPP is effectively in P, then one can construct canonical derandomizers of exponential stretch.
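The “straightforward derandomization” step can be sketched in a few lines (my toy code, not from the talk): once the randomized algorithm A uses only k = O(log n) random bits, thanks to the PRG G, we enumerate all 2^k seeds deterministically and take a majority vote. The particular G and A below are hypothetical stand-ins for illustration only.

```python
# Sketch of derandomization-by-seed-enumeration, assuming logarithmic
# randomness complexity (so the 2^k enumeration is polynomial in n).

from itertools import product

def derandomize(A, G, seed_len, x):
    """Deterministic simulation: run A(x, G(s)) over every seed s and vote."""
    votes = sum(A(x, G(seed)) for seed in product([0, 1], repeat=seed_len))
    return votes > 2 ** (seed_len - 1)  # strict majority accepts

# Hypothetical stand-ins (not a real canonical derandomizer):
def G(seed):
    return list(seed) * 4  # a fake "PRG" that merely repeats its 3-bit seed

def A(x, r):
    # decides "x is even", but errs when its random string is all-ones
    answer = (x % 2 == 0)
    return (not answer) if all(r) else answer

print(derandomize(A, G, 3, 4))  # majority over 8 seeds; 7 of them vote correctly
```

Since A errs on only one of the eight seeds, the majority vote recovers the correct answer, mirroring how a randomized algorithm with error below 1/2 over the PRG’s seeds is decided deterministically.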
Reversing the PRG-to-derandomization connection. Assume (for simplicity) that BPP = P (rather than only effectively so). We construct canonical derandomizers of exponential stretch. Note that a random function of exponential stretch has the desired pseudorandomness feature (w.r.t. gap 1/p, we use a seed of length O(log p)). But we need an explicit (deterministic) construction. Idea: Just derandomize the above construction by using BPP = P. Problem: BPP = P refers to decision problems, whereas we have at hand a construction problem (or a search problem). Solution: Reduce “BPP-search” problems to BPP, via a deterministic poly-time reduction that carefully implements the standard bit-by-bit process. (BPP as a promise-problem class is used here!)
A closer look at the construction (search) problem. Recall: We assume that BPP = P, and construct canonical derandomizers of exponential stretch. The search problem at hand: Given 1^n, find a set S_n of n-bit long strings such that any p(n)-time observer cannot distinguish a string selected uniformly in S_n from a totally random string. (W.r.t. gap 1/p(n), where S_n has size poly(p(n)) = poly(n).) Note: the validity of solutions can be checked in BPP. BPP-search = finding solutions in probabilistic poly-time + checking them in BPP. Reduce “BPP-search” problems to BPP by extending the (current) solution prefix according to an estimate of the probability that a random extension of this prefix yields a valid solution. (The estimate is obtained via a query to a BPP oracle (of a promise type).)
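The bit-by-bit process can be rendered as a toy sketch (my own code, not the paper’s): extend the current prefix by the bit under which random completions are more often valid solutions. Here an exact density computation stands in for the promise-BPP oracle, which in the real reduction only returns an additive approximation of this probability.

```python
# Toy search-to-decision reduction via prefix extension, with an exact
# density count playing the role of the (approximate) promise-BPP oracle.

from itertools import product

def density(prefix, n, valid):
    """Fraction of valid n-bit completions of `prefix` (oracle stand-in)."""
    tails = list(product([0, 1], repeat=n - len(prefix)))
    return sum(valid(prefix + list(t)) for t in tails) / len(tails)

def bit_by_bit_search(n, valid):
    """Deterministic search for an n-bit solution via density queries."""
    prefix = []
    for _ in range(n):
        # keep the bit whose completions are (estimated to be) denser in solutions
        if density(prefix + [0], n, valid) >= density(prefix + [1], n, valid):
            prefix.append(0)
        else:
            prefix.append(1)
    return prefix if valid(prefix) else None

# Example (hypothetical validity predicate): a 4-bit string that starts
# with 1 and has exactly two ones.
solution = bit_by_bit_search(4, lambda s: s[0] == 1 and sum(s) == 2)
```

As long as the density of valid solutions is positive, one of the two extensions always preserves positive density, so the process terminates with a valid solution; in the real reduction the approximation error of the oracle is what forces the “careful” implementation mentioned above.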
Summary: canonical derandomizers are necessary (not merely sufficient) for placing BPP in P. • THM (1st version of equivalence): The following are equivalent. • For every polynomial p, BPP is p-effectively in P. • For every polynomial p, there exists a p-robust canonical derandomizer of exponential stretch. THM (2nd version of equivalence): BPP = P iff there exists a targeted canonical derandomizer of exponential stretch. A problem is p-effectively solved by a function F if no probabilistic p-time algorithm can find an input on which F errs. A PRG is p-robust if no probabilistic p-time algorithm can distinguish its output from a truly random one with gap greater than 1/p. Targeted = auxiliary-input PRG (the same auxiliary input is given to the PRG and to its tester).
Reflections on our construction of canonical derandomizers. Recall: We assumed that BPP = P, and constructed canonical derandomizers of exponential stretch. The construction of a canonical derandomizer may amount to a fancy diagonalization argument, where the “fancy” aspect refers to the need to estimate the average behavior of machines. Indeed, we saw that the construction of a suitable set S_n reduces to obtaining such estimates, which are easy to get from a BPP oracle. One lesson is that BPP = P is equivalent to the existence of canonical derandomizers of exponential stretch. Another lesson is that derandomization may be more related to diagonalization than to “hard” lower bounds…
Time of speculations. Derandomization may be more related to diagonalization than to “hard” lower bounds… The common wisdom (for a decade) has been that derandomization requires proving lower bounds [IW98, IKW01, KI03]. IW98: BPP contained in i.o.-AvSubEXP implies BPP ≠ EXP. Of course, BPP ⊆ SubEXP implies BPP ≠ EXP (by the DTime Hierarchy). IKW01: BPP ⊆ NSubEXP (or even less…) implies NEXP ⊄ P/poly, and ditto for MA ⊆ NSubEXP. (Actually, the focus is on the latter.) But this follows by “MA ⊆ NSubEXP implies NEXP ⊄ P/poly”, which in turn follows from “NEXP ⊆ P/poly implies NEXP ⊆ MA”. So this is a matter of “Karp-Lipton/[BFL]” + the NTime Hierarchy. KI03: derandomizing PIT yields Boolean/arithmetic circuit lower bounds for NEXP. Ditto re whether such lower bounds are so much out of reach.
Additional thoughts (or controversies). Shall we see BPP = P proven in our lifetime? The (only) negative evidence we have is that this would imply circuit lower bounds in NEXP [IKW01, KI03]. But recall that we do know that NEXP ⊆ P/poly if and only if NEXP = MA, so is this negative evidence not similar to saying that derandomizing MA [or BPP] implies a “lower bound” on computing EXP [or NEXP] by MA [or BPP]? Some researchers attribute great importance to the difference between promise problems and “pure” decision problems. I have blurred this difference, and believe that whenever it exists we should consider the (general) promise-problem version.
The End The slides of this talk are available at http://www.wisdom.weizmann.ac.il/~oded/T/bpp.ppt The paper (w.o. the bolder speculations) is available at http://www.wisdom.weizmann.ac.il/~oded/p_bpp.html