
New Coins from old: Computing with unknown bias

An overview of computing with coins of unknown bias: von Neumann extractors, simulating functions of the bias via finite and pushdown automata, the theory of exact simulation and the computability of real functions, the connection between rationality and simulation by finite automata, block simulation, and the question of whether pushdown automata can simulate all algebraic functions.


Presentation Transcript


  1. New Coins from old: Computing with unknown bias Elchanan Mossel, U.C. Berkeley mossel@stat.berkeley.edu, http://www.cs.berkeley.edu/~mossel/ Joint work with Yuval Peres, U.C. Berkeley peres@stat.berkeley.edu, http://www.stat.berkeley.edu/~peres/ Supported by Microsoft Research and the Miller Institute

  2. von Neumann extractor (1951) Given a sequence of i.i.d. (p, 1-p) coins, where 0 < p < 1 is unknown, we want to toss a fair coin. We want a procedure that is • efficient in randomness (preserves entropy), • computationally simple (a finite automaton), • efficient in time (small expected running time).

  3. von Neumann extractor (1951) P[01] = p(1-p) = P[10]. Map 01 → 0, 10 → 1 and delete 00, 11. Properties: • Linear time. • Rate p(1-p) (compared to H(p)). • Easy to generalize to Markov chains. • Can implement via finite automata.
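To make the procedure concrete, here is a minimal Python sketch (added here, not part of the original slides); the function names are made up for the illustration.

```python
import random

def biased_bits(p, n):
    """n i.i.d. bits, each equal to 1 with the unknown probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def von_neumann(bits):
    """Pair up the bits; map 01 -> 0 and 10 -> 1, discard 00 and 11."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:          # unequal pair: keep the first bit (01 -> 0, 10 -> 1)
            out.append(a)
        # equal pairs (00 and 11) are discarded
    return out

fair = von_neumann(biased_bits(0.8, 100_000))
print(len(fair) / 100_000, sum(fair) / len(fair))
# about p*(1-p) = 0.16 output bits per input bit; empirical mean close to 0.5
```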

  4. Since von Neumann In information theory: extracting a (1 - ε) fraction of the entropy. Elias (1972): block construction. Peres (1992): iterative construction. In computer science, such procedures are called extractors: general distributions with a bound on the (min-)entropy, extra randomness needed, nearly optimal constructions in recent years.
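Below is a sketch of the iterative idea behind Peres (1992), as commonly described (not taken from the slides): output the von Neumann bits of the pairs, then recurse on two derived sequences, the pairwise XORs and one representative bit from each equal pair; the extracted rate approaches the entropy H(p).

```python
import random

def peres(bits):
    """Iterated von Neumann extraction (sketch of the Peres 1992 construction)."""
    if len(bits) < 2:
        return []
    vn, xors, equal = [], [], []
    for a, b in zip(bits[0::2], bits[1::2]):
        xors.append(a ^ b)          # records which pairs were unequal
        if a != b:
            vn.append(a)            # the plain von Neumann output bit
        else:
            equal.append(a)         # one representative of each 00 / 11 pair
    return vn + peres(xors) + peres(equal)

bits = [1 if random.random() < 0.8 else 0 for _ in range(1 << 16)]
print(len(peres(bits)) / (1 << 16))
# approaches H(0.8) ~ 0.72, compared with p(1-p) = 0.16 for plain von Neumann
```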

  5. Our model Input is a sequence of i.i.d. (p, 1-p) coins, where 0 < p < 1 is unknown. We want to output an (f(p), 1 - f(p)) coin. What is the simulation power of: unbounded computation (infinite memory), Turing machines, pushdown automata, finite automata? Asked by S. Asmussen and J. Propp.

  6. Our model Keane & O'Brien: if there are no computational restrictions, one can simulate any continuous function f : (0,1) → (0,1) satisfying min(f(x), 1-f(x)) ≥ min(x, 1-x)^n for some n. Ergodic theory techniques.

  7. Coins via finite and pushdown automata Which functions can be simulated via finite automata? Which can be simulated via pushdown automata? Examples: f(p) = p² ? f(p) = p/2 ? f(p) = 2p for 0 < p < 1/4 ? f(p) = √p ? f(p) = p² / (p² + (1 - p)²) ? f(p) = π/6 ?

  8. Exact simulation, computability, etc. Theory of exact simulation: simulating complicated distributions from simple ones. Here both are simple, but some examples: • Simulating a percolation configuration on the triangular lattice from a configuration on the square lattice with the same distance-2 connectivity functions (p unknown!). • Given a sequence {y_i AND z_i} where y_i, z_i are i.i.d. with unknown mean p, find {x_i} i.i.d. with mean p. • Theory of computability: the computation of real functions. Trivial: for every computable constant 0 < q < 1, the function f(p) = q can be simulated via a Turing machine.

  9. Coins via finite automata

  10. Coins via finite automata • f(p) = p² ✓ • f(p) = p/2 ✓ • f(p) = 2p for 0 < p < 1/4 ✗ • f(p) = √p ✗ • f(p) = p² / (p² + (1 - p)²) ✓ • f(p) = π/6 ✗

  11. Coins via pushdown automata • f(p) = 2p for 0 < p < 1/4 ?? • f(p) = √p ✓ • f(p) = π/6 ✗ Open problem: does a converse hold? If f : (0,1) → (0,1) is algebraic over Q, does there exist a pushdown automaton simulating f?

  12. Pushdown automaton simulating √p • Take a random walk on the ladder with P[up] = P[down] = (1 - p)/2 and P[left/right] = p (cross to the other rail). • Let t be the probability, starting at (0,1), that the first hitting of level 0 is at (0,0). • t = (1 - p)/2 + p(1 - t) + (1 - p)(t² + (1 - t)²)/2, which gives t = (1 - √p)/(1 - p). • The random walk can be simulated by a pushdown automaton (the stack keeps track of the current level). • With another coin toss we can get √p: output 1 unless the walk ends at (0,0), in which case output one more coin flip, so P[1] = (1 - t) + t·p = √p. [Figure: ladder graph with crossing edges labeled p and up/down edges labeled ½ - p/2.]
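A Monte Carlo sketch of this construction (an illustration added here, not the authors' automaton): the ladder level plays the role of the stack, the fair up/down choice is obtained from the p-coin by von Neumann's trick, and the final step uses the combination P[1] = (1 - t) + t·p = √p, one way to spend the extra coin toss that is consistent with t = (1 - √p)/(1 - p). The walk returns to level 0 with probability one but has infinite expected return time, so individual runs can occasionally be long.

```python
import random

def coin(p):
    """One toss of the unknown-bias coin."""
    return 1 if random.random() < p else 0

def fair_bit(p):
    """A fair bit extracted from the p-coin by von Neumann's trick."""
    while True:
        a, b = coin(p), coin(p)
        if a != b:
            return a

def walk_hits_0_side(p):
    """Walk on the ladder from (side 0, level 1); return the side at which level 0 is first hit."""
    side, level = 0, 1
    while level > 0:
        if coin(p):                 # probability p: cross to the other rail
            side = 1 - side
        elif fair_bit(p):           # probability (1 - p)/2: step up
            level += 1
        else:                       # probability (1 - p)/2: step down
            level -= 1
    return side

def sqrt_p_coin(p):
    """Output 1 with probability sqrt(p)."""
    if walk_hits_0_side(p) == 0:    # happens with probability t = (1 - sqrt(p)) / (1 - p)
        return coin(p)              # contributes t * p
    return 1                        # contributes 1 - t; total is sqrt(p)

samples = [sqrt_p_coin(0.49) for _ in range(2000)]
print(sum(samples) / len(samples))  # roughly sqrt(0.49) = 0.7
```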

  13. Finite automaton implies rationality

  14. Finite automaton implies rationality • Proof 1: • Let F(p; s) be the probability of stopping at 1 given that the current state is s. • F(p; s) = p F(p; δ(s,1)) + (1-p) F(p; δ(s,0)), and f(p) = F(p; s₀). • These equations determine F, by the maximum principle for harmonic functions on directed graphs. • By Cramer's rule, f(p) = g(p) / h(p) for polynomials g, h.
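To make Proof 1 concrete, here is a small symbolic check (an added illustration with made-up state names): solving the harmonic equations for the automaton that reads bits in pairs, accepts on 11, rejects on 00, and restarts on 01/10 recovers the rational function p²/(p² + (1 - p)²) from slide 10.

```python
import sympy as sp

p = sp.symbols('p')
F_start, F_saw1, F_saw0 = sp.symbols('F_start F_saw1 F_saw0')

# F(p; s) = p*F(p; delta(s,1)) + (1-p)*F(p; delta(s,0)),
# with F = 1 at the accepting state and F = 0 at the rejecting state.
eqs = [
    sp.Eq(F_start, p * F_saw1 + (1 - p) * F_saw0),   # remember the first bit of the pair
    sp.Eq(F_saw1, p * 1 + (1 - p) * F_start),        # 11 -> output 1, 10 -> restart
    sp.Eq(F_saw0, p * F_start + (1 - p) * 0),        # 01 -> restart, 00 -> output 0
]
sol = sp.solve(eqs, [F_start, F_saw1, F_saw0], dict=True)[0]
f = sp.simplify(sol[F_start])
print(f)                                              # p**2/(2*p**2 - 2*p + 1)
print(sp.simplify(f - p**2 / (p**2 + (1 - p)**2)))    # 0, i.e. f(p) = p^2/(p^2+(1-p)^2)
```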

  15. Finite automaton implies rationality • Proof 2 (Chomsky-Schützenberger): • If L is a regular language, then the generating series ∑_{w ∈ L} w (each word written as a product of the variables x₀, x₁) is a rational function in the non-commuting variables x₀ and x₁. • Let L be the language of strings on which the automaton stops at 1. • Applying the homomorphism 0 → 1-p, 1 → p, we see that f(p) is a rational function.

  16. Algebraic properties of pushdown automata • Our results for pushdown automata do not follow from the Chomsky-Schützenberger theorem. • Instead, we prove that f(p) is determined by a set of polynomial equations. • The proof uses the fact that bounded harmonic functions on recurrent infinite graphs are determined by their boundary values. • We then invoke the following result due to Hillar (2002).

  17. Hillar 2002:

  18. Rationality implies simulation by finite automata Definition: a block simulation of f is given by • disjoint subsets A₀, A₁ of {0,1}^k, with A' = {0,1}^k \ (A₀ ∪ A₁), and the following procedure: • Read a k-bit string w. • For i = 0, 1: if w ∈ A_i, output i. • Otherwise, discard w and read a new k-bit string. Block simulation ⇒ simulation by a finite automaton, and the block procedure outputs 1 with probability f(p) = π₁(p) / (π₀(p) + π₁(p)), where π_i(p) = ∑_{w ∈ A_i} p^{#1(w)} (1 - p)^{#0(w)}.
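As an added sketch (not from the deck), here is the block procedure in code, instantiated for f(p) = p² with k = 2, A₁ = {11} and A₀ = {00, 01, 10}, so no block is ever discarded. Taking instead A₁ = {11}, A₀ = {00} and discarding the mixed blocks gives the earlier example p²/(p² + (1 - p)²).

```python
import random

def coin(p):
    return 1 if random.random() < p else 0

def block_simulate(p, k, A1, A0):
    """Read k-bit blocks; output 1 on A1, 0 on A0, otherwise discard and retry."""
    while True:
        w = tuple(coin(p) for _ in range(k))
        if w in A1:
            return 1
        if w in A0:
            return 0
        # w lies in A' = {0,1}^k \ (A0 union A1): discard and read a new block

# f(p) = p^2 via blocks of length 2.
A1 = {(1, 1)}
A0 = {(0, 0), (0, 1), (1, 0)}
samples = [block_simulate(0.6, 2, A1, A0) for _ in range(50_000)]
print(sum(samples) / len(samples))   # close to 0.6**2 = 0.36
```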

  19. Rationality implies simulation by finite automata Already seen: I ⇒ II ⇒ III

  20. Rationality implies simulation by finite automata Need to show: if f(p) = g(p)/h(p), where g, h ∈ Z[p], and 0 < f(p) < 1 for all 0 < p < 1, then f(p) is block simulated. Easier to show: the case where f(p) = d(p, 1-p)/e(p, 1-p) with d, e homogeneous polynomials whose coefficients (and the coefficients of e - d) are nonnegative integers. Remark: the claim doesn't cover (p² - 2p(1-p) + 2(1-p)²)/2.

  21. Rationality implies simulation by finite automata Proof: consider blocks of the form 1…1 0…0 y₀…y_r (i ones, then k - i zeros, then r + 1 further bits): • if 0 ≤ y₀…y_r < d_i, output 1; • if d_i ≤ y₀…y_r < e_i, output 0; • if e_i ≤ y₀…y_r, discard (✗).

  22. The general case

  23. The general case • Remark: the lemma reduces the general case to the easy case. • Proof of the Lemma: • f(p) = D(p)/E(p), where D, E ∈ Z[p] have positive values. • Write D and E as homogeneous polynomials in p, 1 - p: D(p) = δ(p, 1-p) and E(p) = ε(p, 1-p). • δ(p, 1-p), ε(p, 1-p) and (ε - δ)(p, 1-p) are positive for all 0 < p < 1, so for large enough n all the coefficients of d(p,q) = (p+q)^n δ(p,q), e(p,q) = (p+q)^n ε(p,q), and (e - d)(p,q) = (p+q)^n (ε - δ)(p,q) are positive (Pólya's theorem). • Now write f(p) = d(p, 1-p)/e(p, 1-p), which is of the easy form.
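A quick symbolic check of the homogenization step (an added illustration), using the homogenized numerator of the remark's example from slide 20, δ(p, q) = p² - 2pq + 2q²: multiplying by (p + q)^n removes the negative coefficient once n ≥ 4, and all coefficients become strictly positive by n = 6.

```python
import sympy as sp

p, q = sp.symbols('p q')
delta = p**2 - 2*p*q + 2*q**2     # numerator of the remark's example, written homogeneously

for n in range(7):
    poly = sp.Poly(sp.expand((p + q)**n * delta), p, q)       # homogeneous of degree n + 2
    terms = dict(poly.terms())                                 # {(deg_p, deg_q): coefficient}
    coeffs = [terms.get((i, n + 2 - i), 0) for i in range(n + 3)]
    print(n, coeffs)
# n = 0 prints [2, -2, 1]; the negative coefficient disappears at n = 4,
# and by n = 6 every coefficient is strictly positive.
```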

  24. Open problems • Can every algebraic function f : (0,1) → (0,1) be simulated via a pushdown automaton? • For rational functions f, what is the minimal size of an automaton simulating f? • The best known bound on the exponent n in Pólya's theorem for f is due to Powers and Reznick (2002). • For a rational function f, does there exist a finite automaton which extracts almost all of the entropy?
