
Data Streams and Applications in Computer Science


Presentation Transcript


  1. Data Streams and Applications in Computer Science David Woodruff IBM Almaden Presburger lecture, ICALP, 2014

  2. Thanks to my advisors Prof. Ron Rivest Prof. Piotr Indyk Prof. Andy Yao Thanks for your mentorship and research advice, and early guidance on a path in theoretical computer science

  3. and my amazing summer interns Yi Li Arnab Bhattacharyya Jelani Nelson Huy Nguyen Marco Molinaro Eric Price Grigory Yaroslavtsev

  4. and my awesome collaborators! in the theory group at IBM and throughout the world…

  5. My current research interests • Communication Complexity • Data Stream Algorithms and Lower Bounds • Graph Algorithms • Machine learning • Numerical Linear Algebra • Sketching • Sparse Recovery

  6. Talk Outline • Data Stream Model and Sample Results • Distinct Elements • Frequency Moments • Characterization of Algorithms • Connections to Other Areas • Compressed Sensing • Linear Algebra • Machine Learning

  7. Data Streams • A data stream is a sequence of data that is too large to be stored in available memory • Examples • Internet search logs • Network Traffic • Financial transactions • Sensor networks • Scientific data streams (astronomical, genomics, physical simulations)…

  8. Streaming Model • Stream of elements a_1, …, a_m, each in {1, …, n} • Single or small number of passes over the data • Algorithms should work for any ordering of elements • Almost all algorithms are randomized and approximate • Usually necessary to achieve efficiency • Randomness is in the algorithm, not the input • Goals: minimize space complexity (in bits) and processing time [Figure: example stream 4 3 7 3 1 1 0]

  9. Vector Interpretation [Figure: example stream 8 2 1 9 1 9 2 4 4 9 4 2 5 4 2 5 8 5 2 5 and the resulting count vector x indexed by 1, …, 9] • Think of x as an n-dimensional vector • Initially, x = 0^n • Insertion of i is interpreted as x_i ← x_i + 1 • Output an approximation to f(x) with high probability
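
(A minimal Python illustration of this vector interpretation, not from the slides; the stream below is the one pictured on the slide. A real streaming algorithm would of course not store x explicitly, which is the whole point of the sketches that follow.)

```python
import numpy as np

def stream_to_vector(stream, n):
    """Interpret a stream of items from {1, ..., n} as the frequency vector x."""
    x = np.zeros(n, dtype=int)      # initially x = 0^n
    for i in stream:
        x[i - 1] += 1               # insertion of i means x_i <- x_i + 1
    return x

stream = [8, 2, 1, 9, 1, 9, 2, 4, 4, 9, 4, 2, 5, 4, 2, 5, 8, 5, 2, 5]
print(stream_to_vector(stream, 9))  # frequency of each item 1..9
```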

  10. (1) Distinct Elements • Streaming model originated in work of Flajolet and Martin, ‘85 • Studied the distinct elements question • # of distinct elements, denoted F_0, is |{i | x_i > 0}| • Output a number Z with F_0 ≤ Z ≤ (1+ε) F_0 with 99% probability • Can we do better than just storing all the coordinates of x? • Yes, and tight bounds are known [Indyk, W], [W], [Kane, Nelson, W]: Θ(1/ε^2 + log n) bits of space, O(1) processing time • Connections: to prove the tight lower bound, the Gap-Hamming communication problem was introduced
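
(To make the distinct-elements question concrete, here is a minimal k-minimum-values estimator in Python, a simple relative of the Flajolet–Martin idea. It is only illustrative and is not the optimal Θ(1/ε^2 + log n)-bit algorithm of [Kane, Nelson, W]; the function name and the parameter k are mine.)

```python
import hashlib

def kmv_estimate_f0(stream, k=64):
    """Illustrative k-minimum-values (KMV) estimator for the number of distinct
    elements F0: hash each item pseudo-uniformly to [0, 1) and keep the k
    smallest distinct hash values; if the k-th smallest is v, F0 is roughly
    (k - 1) / v."""
    kept = set()                              # at most k smallest distinct hash values
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        v = h / float(1 << 160)               # pseudo-uniform value in [0, 1)
        if v in kept:
            continue
        if len(kept) < k:
            kept.add(v)
        elif v < max(kept):
            kept.discard(max(kept))
            kept.add(v)
    if len(kept) < k:                         # fewer than k distinct items: exact count
        return len(kept)
    return (k - 1) / max(kept)

stream = [8, 2, 1, 9, 1, 9, 2, 4, 4, 9, 4, 2, 5] * 100
print(kmv_estimate_f0(stream))                # true F0 is 6
```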

  11. Gap-Hamming Problem • Inputs: x ∈ {0,1}^n and y ∈ {0,1}^n • Promise: Hamming distance satisfies Δ(x,y) > n/2 + εn or Δ(x,y) < n/2 − εn • Lower bound of Ω(1/ε^2) for randomized 1-way communication [Indyk, W], [W], [Jayram, Kumar, Sivakumar] • Same for 2-way communication [Chakrabarti, Regev] • Applications: in information complexity, functional monitoring, embeddings, linear algebra, differential privacy, sparsifiers, … • (Andoni, Brody, Clarkson, de Wolf, Jayram, Krauthgamer, McGregor, Mironov, Pitassi, Reingold, Sherstov, Talwar, Vadhan, Vidick, W, Zhang…)

  12. (2) Frequency Moments • Streaming model revived in work of Alon, Matias, and Szegedy, ’96 [AMS] • Consider the more general turnstile streaming model [coined by Muthukrishnan] • positive and negative updates, so x_i ← x_i + 1 or x_i ← x_i − 1 • allows summarizing statistics of the difference x − y of two insertion streams

  13. Frequency Moments • [AMS] study the frequency moments F_p = Σ_{i=1}^n |x_i|^p, or equivalently the ℓ_p-norms • Summarize the skewness of an empirical distribution • F_2 used in computing self-join sizes, geometry and linear algebra • F_1 used for measuring distance between distributions, and in “robust” algorithms (regression, subspace approximation) [Figure: a flat vs. a skewed empirical distribution]
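
(The classic [AMS] sign-sketch for F_2, sketched below in Python to make the definition concrete. For readability the random signs are stored as an explicit matrix; a true streaming algorithm would instead generate them from 4-wise independent hash functions to keep the space small. The function name and the trials parameter are mine.)

```python
import numpy as np

def ams_f2_estimate(stream, n, trials=200, seed=0):
    """Minimal version of the [AMS] estimator for F2 = sum_i x_i^2.
    Each trial keeps one counter z = sum_i sigma(i) * x_i for random signs
    sigma(i) in {-1, +1}; then E[z^2] = F2, and averaging over independent
    trials reduces the variance."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1, 1], size=(trials, n))   # sigma for each trial
    z = np.zeros(trials)
    for i in stream:                                # one pass over the stream
        z += signs[:, i - 1]                        # insertion of item i
    return np.mean(z ** 2)                          # unbiased estimate of F2

stream = [8, 2, 1, 9, 1, 9, 2, 4, 4, 9, 4, 2, 5, 4, 2, 5, 8, 5, 2, 5]
x = np.bincount(stream, minlength=10)[1:]           # the underlying vector x
print(ams_f2_estimate(stream, n=9), np.sum(x ** 2)) # estimate vs. exact F2
```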

  14. Frequency Moments • Output a number Z with F_p ≤ Z ≤ (1+ε) F_p with 99% probability • Near-tight bounds known (Andoni, Bar-Yossef, Braverman, Chakrabarti, Coppersmith, Cormode, Ganguly, Gronemeier, Indyk, Jayram, Kane, Krauthgamer, Kumar, Li, Nelson, Porat, Sivakumar, Sun, W, …) Any guesses on how the space bounds depend on p?

  15. Frequency Moments • F_2 is the “breaking point” • F_p for p ≤ 2 doable in O~(1) bits of space • F_p for p > 2 requires Θ~(n^{1−2/p}) bits of space • Algorithms achieve O~(1) processing times • Connections: “sub-sampling + heavy hitters” technique for the upper bound • Used in many data stream, embedding, and linear algebra problems: earthmover distance, mixed norms, sampling in the turnstile model, compressed sensing, graph sparsifiers, regression • Optimally solves Σ_{i=1}^n G(x_i) problems [Braverman, Ostrovsky]

  16. Subsampling + Heavy Hitters • CountSketch [Charikar, Chen, Farach-Colton]: • Give each coordinate i a random sign σ(i) ∈ {−1, 1} • Randomly partition coordinates into B buckets via a hash function h, maintain c_j = Σ_{i : h(i) = j} σ(i)·x_i in the j-th bucket • Estimate x_i as σ(i)·c_{h(i)} • Estimation error ≈ |x|_2 / B^{1/2} • Can be used to find the “heavy hitters” • It is a linear map x -> S·x • Easy to maintain under updates
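
(A minimal CountSketch in Python, to make the previous slide concrete. For readability the hash function h and the signs σ are stored as explicit length-n arrays rather than generated from small hash families, so this version is not actually low-space; the class and parameter names are mine.)

```python
import numpy as np

class CountSketch:
    """Minimal CountSketch [Charikar, Chen, Farach-Colton] with one hash table
    of B buckets (a production version would use several independent
    repetitions and take a median).  It is a linear map x -> S x, so it
    supports turnstile updates x_i <- x_i + delta."""
    def __init__(self, n, B, seed=0):
        rng = np.random.default_rng(seed)
        self.h = rng.integers(0, B, size=n)          # bucket of each coordinate
        self.sigma = rng.choice([-1, 1], size=n)     # random sign of each coordinate
        self.c = np.zeros(B)                         # the B bucket counters

    def update(self, i, delta=1.0):                  # stream update x_i <- x_i + delta
        self.c[self.h[i]] += self.sigma[i] * delta

    def estimate(self, i):                           # estimate of x_i, error ~ |x|_2 / sqrt(B)
        return self.sigma[i] * self.c[self.h[i]]

# find a heavy hitter of a skewed vector
n, B = 1000, 64
cs = CountSketch(n, B)
for _ in range(500):
    cs.update(7)                                     # coordinate 7 is heavy
for i in np.random.default_rng(1).integers(0, n, size=500):
    cs.update(int(i))                                # light "noise" updates
print(cs.estimate(7))                                # close to 500
```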

  17. Subsampling + Heavy Hitters • Subsampling [Indyk, W]: • Create a nested sequence of subsets of [n] • [n] = L_{log n} ⊇ L_{log n − 1} ⊇ … ⊇ L_0 • L_i contains about 2^i random coordinates • Run CountSketch to find heavy hitters of each x_{L_i} • Estimate the number of coordinates “at every scale” • Obtain a rough approximation x’ to x [Figure: histogram of an example x ∈ R^n — number of coordinates (roughly n^{1/2}, n^{1/3}, n^{1/4}, …) at each value scale]
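
(An illustrative construction, in Python, of the nested subsets used by the subsampling technique; the function name is mine. Each restricted vector x_{L_i} would then be fed to a CountSketch to find its heavy hitters.)

```python
import numpy as np

def nested_levels(n, seed=0):
    """Illustrative construction of the nested subsets from [Indyk, W]:
    [n] = L_{log n} >= L_{log n - 1} >= ... >= L_0, where L_i keeps each
    element of L_{i+1} independently with probability 1/2, so |L_i| ~ 2^i."""
    rng = np.random.default_rng(seed)
    levels = [np.arange(n)]                   # L_{log n} = [n]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        keep = rng.random(len(prev)) < 0.5    # subsample by a factor of 2
        levels.append(prev[keep])
    return levels[::-1]                       # L_0, L_1, ..., L_{log n}

for i, L in enumerate(nested_levels(16)):
    print(i, sorted(L.tolist()))              # the coordinates surviving at level i
```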

  18. (3) Characterization of Turnstile Algorithms • All known algorithms in the turnstile model have the form: 1. Choose a random matrix A independent of x 2. Maintain the “linear sketch” A·x in the stream 3. Output a function of A·x • Question (?!): does the optimal algorithm for any function in the turnstile model have this form? • [Li, Nguyen, W] Yes, up to a factor of log n in the space • Some caveats, e.g., can’t necessarily store A in low space • For lower bounds this doesn’t matter; it gives a simpler proof strategy, since one just needs to rule out linear sketches
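
(A small numerical check, in Python, of why linear sketches fit the turnstile model: if the state is A·x, then an update x ← x + δ·e_i only needs column i of A, and sketches of two separate streams simply add. The matrix dimensions below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 10
A = rng.standard_normal((k, n))        # random sketching matrix, fixed before the stream
x = rng.integers(0, 5, size=n).astype(float)
y = rng.integers(0, 5, size=n).astype(float)

sketch_x, sketch_y = A @ x, A @ y
i, delta = 7, 3.0
# a turnstile update x <- x + delta * e_i only touches column i of A
print(np.allclose(sketch_x + delta * A[:, i], A @ (x + delta * np.eye(n)[i])))  # True
# sketches of two streams add to a sketch of x + y
print(np.allclose(sketch_x + sketch_y, A @ (x + y)))                            # True
```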

  19. Talk Outline • Data Stream Model and Sample Results • Distinct Elements • Frequency Moments • Characterization of Algorithms • Connections to Other Areas • Compressed Sensing • Linear Algebra • Machine Learning

  20. Compressed Sensing [Figure: a vector x and its best k-sparse approximation x_k] • Compute a sketch A·x with a small number of rows (a.k.a. measurements) • Output x’ which approximates x in the sense that |x’ − x|_p ≤ (1+ε) |x − x_k|_q, where x_k is the best k-sparse approximation to x • Similar to the heavy hitters problem solved by CountSketch • Variations of CountSketch + subsampling: • Can design algorithms with near-optimal number of measurements as a function of various ε, k, p, q [Price, W] • For p = q = 2, can reduce the number of measurements by adaptively invoking CountSketch [Indyk, Price, W]
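
(A tiny sparse-recovery demo in Python, in the spirit of this slide: take CountSketch-style measurements of x over a few repetitions, estimate every coordinate by a median, and keep the top k. This is only illustrative; the measurement-optimal and adaptive schemes of [Price, W] and [Indyk, Price, W] are more involved, and all parameter values below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, R, k = 1000, 200, 5, 5                  # dimension, buckets, repetitions, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(10, 1, k)   # nearly k-sparse signal
x += rng.normal(0, 0.05, n)                                  # plus a small noise tail

# R independent CountSketch tables: R*B linear measurements of x in total
h = rng.integers(0, B, size=(R, n))           # bucket hashes
sigma = rng.choice([-1.0, 1.0], size=(R, n))  # sign hashes
c = np.zeros((R, B))
for r in range(R):
    np.add.at(c[r], h[r], sigma[r] * x)       # the measurements A x

# estimate every coordinate by the median over repetitions, keep the top k
est = np.median(sigma * c[np.arange(R)[:, None], h], axis=0)
xk = np.zeros(n)
top = np.argsort(-np.abs(est))[:k]
xk[top] = est[top]                            # the k-sparse approximation x'
print(np.linalg.norm(x - xk), np.linalg.norm(x))  # recovery error vs. |x|_2
```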

  21. Linear Algebra • Least squares regression • Fitting points to a line, or more generally a subspace • min_x |Ax − b|_2 for an n x d matrix A and an n x 1 vector b • Typically n >> d, i.e., the problem is over-constrained

  22. Linear Algebra • If S is a random projection matrix: • compute S*A and S*b, • solve min_x |SAx − Sb|_2 • Intuition: randomly rotate the column span of [A, b], then drop all but the first O(d) coordinates • (0, 0, 0, …, 0, 1) ∈ R^n • After rotation, approximately: • (±1/n^{1/2}, …, ±1/n^{1/2}) • Drop all but the first d coordinates • and rescale by (n/d)^{1/2} • (±1/d^{1/2}, …, ±1/d^{1/2}) ∈ R^d

  23. Linear Algebra • 1+ε approximation in O(nd log n) + poly(d/ε) time using Fast Johnson-Lindenstrauss Transforms (a restricted family of projections) • If one replaces S with CountSketch, this still works! [Clarkson, W] • Leads to O(nnz(A)) + poly(d/ε) running time, where nnz(A) is the number of non-zero entries of A • Low Rank Approximation • Using CountSketch instead of Fast Johnson-Lindenstrauss improves the running time from O(nd log n) to O(nnz(A)) [Clarkson, W] • Beautiful followup works by Li, Mahoney, Meng, Miller, Nelson, Nguyen, Peng
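
(A minimal "sketch and solve" regression in Python, assuming a CountSketch matrix S as on the previous slides; applying S takes one pass over A and b. The sketch size m and the other parameter values are illustrative, not the tuned poly(d/ε) bound.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100_000, 10, 2_000
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

h = rng.integers(0, m, size=n)                    # row i of [A, b] is hashed to bucket h(i)
sigma = rng.choice([-1.0, 1.0], size=n)           # with a random sign
SA, Sb = np.zeros((m, d)), np.zeros(m)
np.add.at(SA, h, sigma[:, None] * A)              # build SA and Sb in one pass over the data
np.add.at(Sb, h, sigma * b)

x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)   # solve the small m x d problem
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)      # exact solution, for comparison
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))  # close to 1
```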

  24. Machine Learning • CountSketch can be used to estimate inner products • Estimate <x,y> as <Sx, Sy> • E[<Sx, Sy>] = <x,y> • Var[<Sx, Sy>] ≤ |x|_2^2 |y|_2^2 / B • Replace expensive inner product computations in classification algorithms with approximations via CountSketch • perceptron and minimum enclosing ball [Clarkson, Hazan, W] • Often interested in non-linear kernel transformations of the input points: x_1, …, x_n -> f(x_1), …, f(x_n) • “Tensor product” CountSketch of Pagh gives subspace embeddings of the polynomial kernel [Avron, Nguyen, W]
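
(A quick Python check of the inner-product estimate <Sx, Sy> with a CountSketch S of B buckets; the vectors and the value of B are arbitrary, and the variance shrinks as B grows.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 100_000, 4_000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)          # correlated, so <x, y> is large

h = rng.integers(0, B, size=n)                # bucket hash
sigma = rng.choice([-1.0, 1.0], size=n)       # sign hash
Sx, Sy = np.zeros(B), np.zeros(B)
np.add.at(Sx, h, sigma * x)                   # Sx = S x
np.add.at(Sy, h, sigma * y)                   # Sy = S y

print(np.dot(Sx, Sy), np.dot(x, y))           # unbiased estimate vs. exact inner product
```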

  25. Conclusions • Many data stream and sketching techniques give efficient ways of “compressing” big data – a broadly applicable goal in computer science • Compressed sensing, graph algorithms, linear algebra, machine learning… • Recently been looking at shape-fitting and clustering problems, etc. • Also useful for proving lower bounds in other areas, e.g., number of measurements in sparse recovery [DoBa,Indyk,Price,W] • I’m sure there are many other unexplored areas • Thank you!
