Randomness Extractors: Motivation, Applications and Constructions
Ronen Shaltiel, University of Haifa
Outline of talk • Extractors as graphs with expansion properties • Extractors as functions which extract randomness • Applications • Explicit Constructions
Extractor graphs: Definition [NZ]
An extractor is an (unbalanced) bipartite graph with left side N≈{0,1}^n and right side M≈{0,1}^m, where M << N (e.g. M=N^δ, or M=exp((log N)^δ)). Every vertex x on the left has D neighbors E(x,1),…,E(x,D); the extractor is better when D is small (e.g. D=polylog N).
Convention: N=2^n, M=2^m, D=2^d, and {1,…,N}≈{0,1}^n.
[Figure: vertex x on the left with D edges to E(x,1),…,E(x,D) on the right.]
Extractor graphs: expansion properties
(K,ε)-extractor: ∀ set X of size K, the distribution E(X,U) is ε-close* to uniform. This implies an "expansion" property: ∀ set X of size K, |Γ(X)| ≥ (1-ε)M.
Distribution versus set size: identify X with the uniform distribution on X.
[Figure: a set X of size K on the left side N≈{0,1}^n expands to Γ(X) of size ≥ (1-ε)M on the right side M≈{0,1}^m.]
*A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; this implies the support of P contains at least a (1-ε) fraction of the elements.
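The footnote's closeness measure is plain L1 (statistical) distance, which is easy to compute directly on small domains. A minimal sketch, with helper names of our own choosing:

```python
from itertools import product

def l1_distance_from_uniform(P, m):
    """||P - U||_1 over {0,1}^m; P maps m-bit tuples to probabilities."""
    u = 1.0 / (2 ** m)
    return sum(abs(P.get(x, 0.0) - u) for x in product((0, 1), repeat=m))

def is_eps_close(P, m, eps):
    # The talk's convention: eps-close means ||P - U||_1 <= 2 * eps.
    return l1_distance_from_uniform(P, m) <= 2 * eps

# A slightly biased bit: ||P - U||_1 = 0.1, so P is 0.05-close to uniform.
P = {(0,): 0.55, (1,): 0.45}
print(is_eps_close(P, 1, 0.05))  # True
```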
Extractors and Expander graphs
[Figure: side by side. (1+δ)-Expander: balanced graph on N≈{0,1}^n vertices; a set X of size K expands to Γ(X) of size (1+δ)K. Extractor: unbalanced bipartite graph from N≈{0,1}^n to M≈{0,1}^m with D=2^d edges per vertex; a set X of size K has Γ(X) of size (1-ε)M.]
Extractors and Expander graphs
(1+δ)-Expander:
• Balanced graph
• Allows constant degree
• Absolute expansion: K → (1+δ)K
• Expands sets smaller than threshold K
Extractor:
• Unbalanced graph
• Requires degree log N
• Relative expansion: K → (1-ε)M, i.e. density K/N → (1-ε)
• Expands sets larger than threshold K
Outline of talk • Extractors as graphs with expansion properties • Extractors as functions which extract randomness • Applications • Explicit Constructions
Randomness Extractors: the initial motivation
A successful paradigm in CS: probabilistic algorithms. Probabilistic algorithms/protocols use an additional input stream of independent coin tosses, helpful in solving computational problems. Where can we get random bits?
We have access to distributions in nature:
• Electric noise
• Key strokes of user
• Timing of past events
These distributions are "somewhat random" but not "truly random". Paradigm [SV,V,VV,CG,V,CW,Z]: run probabilistic algorithms with "real-life" sources, using a randomness extractor to turn a somewhat-random distribution into random coins.
Assumption for this talk: somewhat random = uniform over a subset of size K.
[Figure: somewhat-random source → Randomness Extractor → random coins → probabilistic algorithm (input → output).]
Extractors as functions that use few bits to extract randomness
• We allow an extractor to also receive an additional seed of (very few) random bits.
• Extractors use few random bits to extract many random bits from arbitrary distributions which "contain" sufficient randomness.
Definition: A (K,ε)-extractor is a function E(x,y) s.t. for every set X of size K, E(X,U) is ε-close* to uniform.
Parameters (function view):
• Source length: n (= log N)
• Seed length: d ~ O(log n)
• Entropy threshold: k ~ n/100
• Output length: m ~ k
• Required error: ε ~ 1/100
Lower bounds [NZ,RT]: seed length (in bits) ≥ log n.
Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts all k=log K random bits in the source distribution.
Explicit constructions: E(x,y) can be computed in poly-time.
[Figure: source distribution X and seed Y fed to the Extractor, producing a random output.]
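The probabilistic-method claim on this slide is easy to probe empirically at toy scale: take E to be a uniformly random function table and measure how far E(X,U) is from uniform over random flat sources X. A hedged sketch (all names and parameters are ours; with such small n the distance is only moderately small):

```python
import random
from collections import Counter

n, d, m, K = 8, 3, 4, 32      # toy parameters; real extractors aim for d ~ O(log n)

random.seed(0)
# Probabilistic-method "extractor": a uniformly random function table.
E = {(x, y): random.randrange(2 ** m)
     for x in range(2 ** n) for y in range(2 ** d)}

def dist_from_uniform(X):
    """Exact ||E(X, U_d) - U_m||_1 for a flat source X (a set of source elements)."""
    counts = Counter(E[(x, y)] for x in X for y in range(2 ** d))
    total = len(X) * 2 ** d
    u = 1.0 / (2 ** m)
    return sum(abs(counts.get(z, 0) / total - u) for z in range(2 ** m))

# Random flat sources of size K should give output distributions close to uniform.
for _ in range(3):
    print(dist_from_uniform(set(random.sample(range(2 ** n), K))))
```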
Simulating probabilistic algorithms using weak random sources
Goal: run a probabilistic algorithm using a somewhat-random distribution. Where can we get a seed? Idea: go over all seeds.
• Given a source element x:
• ∀y compute z_y = E(x,y)
• Compute Alg(input, z_y)
• Answer with the majority vote.
Seed length O(log n) ⇒ only poly(n) seeds, so with an explicit construction the simulation runs in poly-time.
[Figure: somewhat-random source → Randomness Extractor (seeded) → random coins → probabilistic algorithm.]
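A minimal sketch of this majority-vote simulation, with a toy probabilistic algorithm (a Fermat primality test) and a placeholder mixing function standing in for a real extractor:

```python
from collections import Counter

def simulate(alg, E, x, inp, D):
    """Run alg on every pseudo-random string E(x, y) for y = 0..D-1 and
    return the majority answer; correct whenever the source element x is
    not one of the few 'bad' x's for this input."""
    answers = Counter(alg(inp, E(x, y)) for y in range(D))
    return answers.most_common(1)[0][0]

# Toy probabilistic algorithm: Fermat primality test with a random base.
def fermat(n_to_test, coins):
    a = coins % (n_to_test - 2) + 2                # base in [2, n_to_test - 1]
    return pow(a, n_to_test - 1, n_to_test) == 1   # True = "probably prime"

E = lambda x, y: (x ^ (2654435761 * y)) & 0xFFFFFFFF  # placeholder mixing, NOT a real extractor
print(simulate(fermat, E, x=0xDEADBEEF, inp=221, D=64))  # 221 = 13 * 17 -> False
```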
Outline of talk • Extractors as graphs with expansion properties • Extractors as functions which extract randomness • Applications • Explicit Constructions
Applications • Simulating probabilistic algorithms using weak sources of randomness [vN,SV,V,VV,CG,V,CW,Z]. • Constructing Graphs (Expanders, Super-concentrators) [WZ]. • Oblivious sampling [S,Z]. • Constructions of various pseudorandom generators [NZ,RR,STV,GW,MV]. • Distributed algorithms [WZ,Z,RZ]. • Cryptography [CDHK,L,V,DS,MST]. • Hardness of approximations [Z,U,MU]. • Error correcting codes [TZ].
Expanders that beat the eigenvalue bound [WZ]
Goal: construct low-degree expanders with huge expansion.
Idea: line up two low-degree extractors back to back.
• ∀ set X of size K, |Γ(X)| ≥ (1-ε)M > M/2.
• ∀ sets X,X' of size K, X and X' have a common neighbour.
• Contract the middle layer (as sketched below).
• Result: a low-degree (ND²/K) bipartite graph in which every set of size K sees N-K vertices.
• Better constructions for large K [CRVW].
[Figure: two copies of N≈{0,1}^n joined through a middle layer; sets X and X' share a common neighbour.]
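A sketch of the contraction step under our own interfaces: given the neighbor function of the lined-up graph, connect two left vertices whenever they share a right neighbor (brute force, only sensible at toy sizes):

```python
from collections import defaultdict

def contract(neighbors, N):
    """Given a left vertex -> right vertices map, connect x ~ x' whenever
    Gamma(x) and Gamma(x') intersect (the contracted middle layer)."""
    by_right = defaultdict(set)
    for x in range(N):
        for v in neighbors(x):
            by_right[v].add(x)
    adj = defaultdict(set)
    for xs in by_right.values():
        for x in xs:
            adj[x] |= xs - {x}
    return adj

# Toy usage (a made-up 2-regular neighbor map, purely to show the shape):
print(dict(contract(lambda x: [(3 * x) % 4, (5 * x + 1) % 4], N=8)))
```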
Randomness-efficient (oblivious) sampling using expanders
Take a random walk v_1,…,v_D on a constant-degree expander with vertex set M≈{0,1}^m. The walk variables behave like i.i.d. samples: ∀A of size ½M,
• Hitting property: Pr[∀i: v_i ∈ A] ≤ δ = 2^{-Ω(D)}.
• Chernoff-style property: Pr[#{i : v_i ∈ A} far from expectation] ≤ 2^{-Ω(D)}.
# of random bits used for the walk: m+O(D) = m+O(log(1/δ)).
# of random bits for i.i.d. samples: m·D = m·O(log(1/δ)).
[Figure: walk v_1 → v_2 → v_3 → … → v_D on the expander.]
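A sketch of the walk sampler; the 3-regular graph below is a toy stand-in, not a certified expander, so the concentration bounds are only heuristic here:

```python
import random

def walk_sample(neighbors, start, D, rng):
    """Take a D-step random walk; return the visited vertices v_1..v_D."""
    v, out = start, []
    for _ in range(D):
        v = rng.choice(neighbors(v))
        out.append(v)
    return out

# Toy 3-regular graph on Z_101 (cycle plus the v -> -v chord); illustrative only.
M = 101
nbrs = lambda v: [(v + 1) % M, (v - 1) % M, (-v) % M]
rng = random.Random(1)
walk = walk_sample(nbrs, rng.randrange(M), D=200, rng=rng)
A = set(range(M // 2))                          # a target set of measure ~1/2
print(sum(v in A for v in walk) / len(walk))    # should land near 0.5
```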
Randomness-efficient (oblivious) sampling using extractors [S]
Given parameters m,δ: use an extractor E with K=M=2^m, N=M/δ, and small D.
• Choose a random x: m+log(1/δ) random bits.
• Set v_i = E(x,i).
Extractor property ⇒ hitting property: ∀A of size ½M, call x bad if all of E(x,1),…,E(x,D) land inside A. If there were K bad x's, they would form a set of size K whose neighborhood sits inside A, contradicting |Γ(X)| ≥ (1-ε)M > ½M. Hence # of bad x's < K, and Pr[x is bad] < K/N = δ. A code sketch follows below.
[Figure: left side N≈{0,1}^n with the bad x's marked; right side M≈{0,1}^m with the set A; x has D edges E(x,1),…,E(x,D).]
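The sampling scheme itself is a one-liner once an extractor E is given (E is left abstract here; the comments record the property the slide proves):

```python
def extractor_samples(E, x, D):
    """All D sample points v_i = E(x, i) are derived from one n-bit source element x."""
    return [E(x, i) for i in range(D)]

def is_bad(E, x, D, A):
    # x is "bad" for A when every sample lands inside A; the expansion property
    # guarantees fewer than K of the N source elements are bad, so Pr[bad] < K/N = delta.
    return all(v in A for v in extractor_samples(E, x, D))
```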
Every (oblivious) sampling scheme yields an extractor
An (oblivious) sampling scheme uses a random n-bit string x to generate D random variables with the Chernoff-style property.
Thm [Z]: The derived graph is an extractor. Hence extractors ⇔ oblivious sampling.
[Figure: x on the left with D=2^d edges to v_1,…,v_D.]
Outline of talk • Extractors as graphs with expansion properties • Extractors as functions which extract randomness • Applications • Explicit Constructions
Extractors from error-correcting codes
• Can construct extractors from error-correcting codes [ILL,SZ,T]:
• Short seed.
• Extracts one additional bit.
• Extractors that extract one additional bit ⇔ list-decodable error-correcting codes.
• Extractors that extract many bits ⇔ codes with strong list-recovering properties [TZ].
List-decodable error-correcting codes [S]
• EC(x) is 20%-decodable if for every w there is a unique x s.t. EC(x) differs from w in at most 20% of positions.
• EC(x) is (49%,t)-list-decodable if for every w there are at most t x's s.t. EC(x) differs from w in at most 49% of positions.
• There are explicit constructions of such codes.
[Figure: unique decoding over a noisy channel (20% errors) recovers x; list decoding over an extremely noisy channel (49% errors) recovers a short list x_1, x_2, x_3.]
Extractors from list-decodable error-correcting codes [ILL,T]
• Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y)=(y, EC(x)_y) is a (K,2ε)-extractor.
• Note: E outputs its seed y. Such an extractor is called "strong".
• E outputs only one additional bit, EC(x)_y.
• There are constructions of list-decodable error-correcting codes with |y|=O(log n).
• Strong extractors with one additional bit ⇔ list-decodable error-correcting codes.
• Strong extractors with many additional bits translate into very strong error-correcting codes [TZ].
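For concreteness, instantiating the theorem with the Hadamard code EC(x)_y = ⟨x,y⟩ mod 2 (list-decodable by Goldreich-Levin, though its seed length is |y| = n rather than the O(log n) the slide cites for better codes) gives a classic one-bit strong extractor. A sketch with a quick empirical bias check:

```python
import random

def hadamard_bit(x, y):
    """EC(x)_y = <x, y> mod 2: the parity of the bitwise AND of x and y."""
    return bin(x & y).count("1") % 2

def E(x, y):
    """Strong one-bit extractor: output the seed y together with one extracted bit."""
    return (y, hadamard_bit(x, y))

# Sanity check (toy parameters): over a random flat source X of size K,
# the extracted bit is nearly unbiased.
random.seed(0)
n, K = 10, 2 ** 7
X = random.sample(range(2 ** n), K)
bias = sum(hadamard_bit(x, y) for x in X for y in range(2 ** n)) / (K * 2 ** n)
print(bias)  # close to 1/2
```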
Extractors from list-decodable error-correcting codes: proof
Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y)=(y, EC(x)_y) is a (K,2ε)-extractor.
Proof: by contradiction. Let X be a distribution/set of size K s.t. E(X,Y)=(Y, EC(X)_Y) is far from uniform.
Observation: Y and EC(X)_Y are each uniform on their own, so the distance must come from correlation between them:
• They are correlated.
• There exists a predictor P s.t. P(Y)=EC(X)_Y with probability > ½+2ε.
Extractors from list-decodable error-correcting codes: proof II
Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y)=(y, EC(x)_y) is a (K,2ε)-extractor.
There exists P s.t. Pr_{X,Y}[P(Y)=EC(X)_Y] > ½+2ε.
By a Markov argument: for εK x's in X, Pr_Y[P(Y)=EC(x)_Y] > ½+ε.
Think of P as a string with P_y = P(y). For each such x, P and EC(x) differ in at most a ½-ε fraction of the coordinates.
Story so far: if E is bad then there is a string P s.t. for εK x's, P and EC(x) differ in few coordinates.
Extractors from list-decodable error-correcting codes: proof III
Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y)=(y, EC(x)_y) is a (K,2ε)-extractor.
Story so far: if E is bad then there is a string P s.t. for εK x's, P and EC(x) differ in at most a ½-ε fraction of the coordinates.
View P as a received word EC(x)' and apply the list-decoding property of the code: # of such x's < εK. Contradiction!
[Figure: x encoded to EC(x), sent over a noisy channel, received as P=EC(x)'; list decoding from 49% errors yields x_1, x_2, x_3.]
Roadmap
• Can construct extractors from error-correcting codes.
• Short seed.
• Output = seed + 1 bit.
• Next: how to extract more bits.
• General paradigm: once you construct one extractor you can try to boost its quality.
Extracting more bits [WZ]
Starting point: an extractor E that extracts only a few bits.
Idea: the conditioned source (X|E(X,Y)) still contains randomness, so we can apply E again to extract it. This requires a "fresh" seed:
E'(x;(y,y')) = E(x,y), E(x,y')
• Extracts more randomness.
• Uses a larger seed.
[Figure: the source X fed to the extractor twice, with seeds Y and Y'; the new extractor concatenates the two outputs Z.]
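A sketch of E' in the function view (E is abstract, outputs are strings, and we gloss over the conditioning argument that justifies reusing X):

```python
def extract_more(E, x, y, y2):
    """E'(x; (y, y')) = E(x, y) || E(x, y'): more output bits for a larger seed.
    (Soundness rests on (X | E(X,Y)) still containing randomness; elided here.)"""
    return E(x, y) + E(x, y2)   # '+' concatenates the two output strings
```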
Trevisan's extractor: reducing the seed length
Idea: use few random bits to generate (correlated) seeds Y_1, Y_2, Y_3, …
• Walk on an expander? An extractor? Works, but gives only small savings.
• Trevisan: use the Nisan-Wigderson pseudorandom generator (based on combinatorial designs).
• [TZS,SU]: use Y, Y+1, Y+2, … (based on the [STV] algorithm for list-decoding the Reed-Muller code).
[Figure: a single seed Y expanded into seeds Y_1, Y_2, … feeding the extractor on source X.]
The extractor designer tool kit
• Many ways to "compose" extractors with themselves and related objects.
• The arguments use "entropy manipulations" and depend on the "function view" of extractors.
• Impact on other graph construction problems:
• Expander graphs (zig-zag product) [RVW,CRVW].
• Ramsey graphs that beat the Frankl-Wilson construction [BKSSW,BRSW].
Entropy manipulations: composing two extractors [Z,NZ]
Observation: given two independent sources X_1, X_2, one can compose a small extractor and a large extractor and obtain an extractor which inherits the small seed and the large output: apply the small extractor to X_2 with seed Y' to get Z, then use Z as the seed of the large extractor on X_1 (sketched below).
Paradigm: if given only one source, try to convert it into two sources that are "sufficiently independent".
[Figure: Y' → Small Extractor(X_2) → Z → Large Extractor(X_1) → output.]
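A sketch of the composition under hypothetical interfaces:

```python
def compose(E_small, E_large, x1, x2, y_short):
    """Seed the large extractor with the output of the small one: the composed
    extractor inherits the small seed (y_short) and the large output."""
    z = E_small(x2, y_short)    # few truly random bits -> a long seed Z
    return E_large(x1, z)       # extract from the independent source X1
```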
Summary: extractors are graphs / functions
[Figure: graph view, a set X of size K=2^k expands to Γ(X) of size ≥ (1-ε)M inside M≈{0,1}^m; function view, a source distribution X plus a seed Y fed to the Extractor yields a random output.]
Conclusion • Unifying role of extractors: • Expanders, Oblivious samplers, Error correcting codes, Pseudorandom generators, hash functions… • Open problems: • More applications/connections. • The quest for explicitly constructing the optimal extractor. (Current record [LRVW]). • Direct and simple constructions. • Things I didn’t talk about: • Seedless extractors for special families of sources.
Extractor graphs
[Figure: bipartite graph; vertex x on the left side N≈{0,1}^n has D=2^d edges to E(x)_1,…,E(x)_D on the right side M≈{0,1}^m.]
Extractor graphs: expansion
[Figure: a set X of size K=2^k on the left side N≈{0,1}^n maps to Γ(X) of size ≥ (1-ε)M on the right side M≈{0,1}^m.]
Issues in a formal definition: 2. One extractor for all sources
• Goal: design one extractor function E(x) that works on all sufficiently high-entropy distributions.
• Problem: impossible to extract even 1 bit from distributions with n-1 bits of entropy: for any E, one of the sets {x : E(x)=0}, {x : E(x)=1} has size ≥ 2^{n-1}, and the uniform distribution X on it has entropy n-1 while E(X) is fixed.
• Have to settle for less!
[Figure: {0,1}^n split into {x : E(x)=1} and {x : E(x)=0}; a distribution X with entropy n-1 on which E(X) is fixed.]
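The impossibility argument is constructive and fits in a few lines: whatever one-bit E you pick, the larger preimage is a flat source with min-entropy at least n-1 on which E is constant. A sketch (the candidate E is an arbitrary example):

```python
def bad_source(E, n):
    """A flat source of size >= 2^(n-1) (min-entropy >= n-1) on which E is constant."""
    pre = {0: [], 1: []}
    for x in range(2 ** n):
        pre[E(x)].append(x)
    return max(pre.values(), key=len)

E = lambda x: bin(x).count("1") % 2   # an arbitrary candidate 1-bit extractor
X = bad_source(E, 10)
print(len(X) >= 2 ** 9, {E(x) for x in X})  # True, and a single output value
```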
Definition of extractors [NZ]
• We allow an extractor to also receive an additional seed of (very few) random bits.
• Extractors use few random bits to extract many random bits from arbitrary distributions with sufficiently high entropy.
Definition: A (k,ε)-extractor is a function E(x,y) s.t. for every distribution X with min-entropy k, E(X,Y) is ε-close* to uniform.
Parameters:
• Source length: n
• Seed length: d ~ O(log n)
• Entropy threshold: k ~ n/100
• Output length: m ~ k
• Required error: ε ~ 1/100
Lower bounds [NZ,RT]: seed length ≥ log n + 2log(1/ε).
Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts k+d-2log(1/ε) bits.
*A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; this implies the support of P contains at least a (1-ε) fraction of the elements.
[Figure: source distribution X and seed Y fed to the Extractor, producing a random output.]
Extractor graphs: Definition [NZ]
An extractor is an (unbalanced) bipartite graph with M << N (e.g. M=N^δ, or M=exp((log N)^δ)). Every vertex x on the left has D neighbors E(x)=(E(x)_1,…,E(x)_D); the extractor is better when D is small (e.g. D=polylog N).
Convention: E(x,y) = E(x)_y.
[Figure: vertex x on the left side N≈{0,1}^n with D edges E(x)_1,…,E(x)_D into M≈{0,1}^m.]
Issues in a formal definition: 1. Notion of entropy
• The source distribution X must "contain randomness".
• Necessary condition for extracting k bits: ∀x, Pr[X=x] ≤ 2^{-k}.
• Dfn: X has min-entropy k if ∀x, Pr[X=x] ≤ 2^{-k}.
• Example: flat distributions: X is uniformly distributed on a subset S of size 2^k.
• Every X with min-entropy k is a convex combination of flat distributions.
[Figure: {0,1}^n containing a subset S with |S| = 2^k.]
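A minimal sketch of the min-entropy definition and the flat-distribution example (helper names ours):

```python
import math

def min_entropy(P):
    """H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(P.values()))

# A flat distribution on a set S of size 2^k has min-entropy exactly k.
k = 5
S = range(2 ** k)
flat = {x: 1.0 / len(S) for x in S}
print(min_entropy(flat))   # 5.0
```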
Noisy channels and error correction
Goal: transmit messages over a noisy channel that introduces errors.
Guarantee: the received word x' differs from x in at most (say) 20% of positions.
Coding theory: encode x prior to transmission.
[Figure: x sent over a noisy channel and received as x'.]