
Classical mathematics and new challenges




Presentation Transcript


  1. Classical mathematics and new challenges: Theorems and Algorithms. László Lovász, Microsoft Research, One Microsoft Way, Redmond, WA 98052, lovasz@microsoft.com

  2. Algorithmic vs. structural mathematics: ancient and classical algorithms
     • Geometric constructions
     • Euclidean algorithm
     • Newton’s method
     • Gaussian elimination
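
To make the oldest of these algorithms concrete, here is a minimal Python sketch of the Euclidean algorithm (the function name gcd_euclid and the example values are my own illustration, not from the talk):

```python
def gcd_euclid(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd_euclid(252, 198))  # 18
```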

  3. An example: Diophantine approximation and continued fractions
     Given a real number α, find a rational approximation p/q such that |α − p/q| is small (via the continued fraction expansion of α).
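
A short Python sketch of how the continued fraction expansion yields good rational approximations (the convergents); the function name convergents and the example with π are my own illustration, not from the slides:

```python
from fractions import Fraction
import math

def convergents(alpha, n_terms=10):
    """Yield the continued fraction convergents p/q of a real number alpha."""
    # Standard recurrence: p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}
    p_prev, p = 1, int(math.floor(alpha))
    q_prev, q = 0, 1
    x = alpha
    yield Fraction(p, q)
    for _ in range(n_terms - 1):
        frac = x - math.floor(x)
        if frac == 0:          # alpha was rational and is now fully expanded
            break
        x = 1.0 / frac
        a = int(math.floor(x))
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield Fraction(p, q)

# Example: successive approximations of pi (3, 22/7, 333/106, 355/113, ...)
for c in convergents(math.pi, 5):
    print(c, float(c))
```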

  4. A mini-history of algorithms
     30’s: mathematical notion of algorithms
     • recursive functions, λ-calculus, Turing machines (Church, Turing, Post)
     • algorithmic and logical undecidability (Church, Gödel)

  5. 50’s, 60’s: computers; the significance of running time
     • simple problems: sorting, searching, arithmetic, …
     • complex problems: Travelling Salesman, matching, network flows, factoring, …

  6. Late 60’s-80’s: complexity theory
     • time, space, information complexity
     • nondeterminism, good characterization, completeness
     • polynomial hierarchy
     • classification of many real-life problems into P vs. NP-complete
     • randomization, parallelism
     • P = NP?

  7. 90’s: increasing sophistication; upper and lower bounds on complexity (algorithms and negative results)
     • topology, algebraic geometry, coding theory
     • factoring, volume computation, semidefinite optimization

  8. Highlights of the 90’s
     • Approximation algorithms: positive and negative results
     • Probabilistic algorithms: Markov chains, high concentration, nibble methods, phase transitions
     • Pseudorandom number generators: from art to science, theory and constructions

  9. Approximation algorithms: the Max Cut problem
     Partition the nodes of a graph into two classes so as to maximize the number of edges between the classes.
     NP-hard … approximations?

  10. • Easy with 50% error (Erdős ~’65)
      • Polynomial with 12% error (Goemans-Williamson ’93; semidefinite optimization)
      • NP-hard with 6% error (Arora-Lund-Motwani-Sudan-Szegedy ’92, Håstad; interactive proof systems, PCP)
      • ???
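
The “easy with 50% error” entry reflects the fact that a uniformly random bipartition cuts each edge with probability 1/2 and so, in expectation, achieves at least half of the maximum cut. A minimal Python sketch of this idea (function name and example graph are my own, not from the talk):

```python
import random

def random_cut(nodes, edges, trials=100):
    """Return the best cut found among `trials` uniformly random bipartitions.

    Each edge is cut with probability 1/2 by a random assignment, so the
    expected cut size is half the number of edges, hence at least half
    of the maximum cut.
    """
    best_size, best_side = -1, None
    for _ in range(trials):
        side = {v: random.random() < 0.5 for v in nodes}
        size = sum(1 for u, v in edges if side[u] != side[v])
        if size > best_size:
            best_size, best_side = size, side
    return best_size, best_side

# Example: a 5-cycle, whose maximum cut has 4 edges.
nodes = range(5)
edges = [(i, (i + 1) % 5) for i in range(5)]
print(random_cut(nodes, edges))
```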

  11. Algorithms and probability
      • Randomized algorithms (making coin flips): important applications (primality testing, integration, optimization, volume computation, simulation); difficult to analyze
      • Algorithms with stochastic input: even more important applications; even more difficult to analyze
      Difficulty: after a few iterations, complicated functions of the original random variables arise.
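
To make “making coin flips” concrete, here is a sketch of a standard randomized primality test (Miller-Rabin). It is offered only as a generic illustration of a coin-flipping algorithm, not as anything specific to the talk:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: never rejects a prime; accepts a composite
    with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random base: the coin flips
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```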

  12. New methods in probability: strong concentration (Talagrand)
      • Laws of large numbers: sums of independent random variables are strongly concentrated
      • General strong concentration: very general “smooth” functions of independent random variables are strongly concentrated
      • Nibble, martingales, rapidly mixing Markov chains, …
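
A prototype of the “sums of independent random variables are strongly concentrated” statement is Hoeffding’s inequality (quoted from the standard literature, not from the slides):

```latex
% Hoeffding's inequality: X_1, ..., X_n independent with X_i in [0,1],
% S = X_1 + ... + X_n.  Then for every t > 0,
\[
  \Pr\bigl[\,|S - \mathbb{E}S| \ge t\,\bigr] \;\le\; 2\exp\!\left(-\frac{2t^2}{n}\right),
\]
% so S deviates from its mean by much more than \sqrt{n} only with
% exponentially small probability.
```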

  13. O(q)? Want: such that: Few vectors - any 3 linearly independent - every vector is a linear combination of 2 Every finite projective plane of order q has a complete arc of size qpolylog(q). Kim-Vu Example (was open for 30 years)

  14. First idea: use an algebraic construction (conics, …); gives only about q.
      Second idea: choose at random. ?????
      Solution: Rödl nibble + strong concentration results.

  15. Driving forces for the next decade
      • New areas of applications
      • The study of very large structures
      • More tools from classical areas in mathematics

  16. New areas of application: interaction between discrete and continuous
      • Biology: genetic code, population dynamics, protein folding
      • Physics: elementary particles, quarks, etc. (Feynman graphs); statistical mechanics (graph theory, discrete probability)
      • Economics: indivisibilities (integer programming, game theory)
      • Computing: algorithms, complexity, databases, networks, VLSI, ...

  17. Very large structures
      • internet • VLSI • databases • genetic code • brain • animal • ecosystem • economy • society
      How to model them? Non-constant but stable; partly random.

  18. Very large structures: how to model them? Graph minors (Robertson, Seymour, Thomas)
      If a graph does not contain a given minor, then it is essentially a 1-dimensional structure (a tree-decomposition) of essentially 2-dimensional pieces (embeddable in a fixed surface), up to a bounded number of additional nodes and except for “fringes” of bounded depth.

  19. Very large structures: how to model them? Regularity Lemma (Szemerédi ’74)
      The nodes of any graph can be partitioned into a bounded number of essentially equal parts so that almost all bipartite graphs between 2 parts are essentially random (with different densities):
      • given ε > 0 and k > 1, the number of parts is between k and f(k, ε);
      • the part sizes differ by at most 1;
      • with at most εk² exceptions, for subsets X, Y of the two parts the number of edges between X and Y is p|X||Y| ± εn².
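
In the usual notation, the “essentially random” condition on a pair of parts is ε-regularity; the following is the standard density formulation (notation mine, stated slightly differently from the edge-count version on the slide):

```latex
% Edge density between vertex sets: d(X,Y) = e(X,Y) / (|X| |Y|).
% A pair of parts (V_i, V_j) is eps-regular if for all X \subseteq V_i,
% Y \subseteq V_j with |X| \ge \varepsilon|V_i| and |Y| \ge \varepsilon|V_j|,
\[
  \bigl|\, d(X,Y) - d(V_i,V_j) \,\bigr| \;\le\; \varepsilon .
\]
```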

  20. Very large structures
      • internet • VLSI • databases • genetic code • brain • animal • ecosystem • economy • society
      How to model them? How to handle them algorithmically?
      • heuristics / approximation algorithms
      • linear time algorithms
      • sublinear time algorithms (sampling)
      A complexity theory of linear time?

  21. More and more tools from classical math. Example: volume computation
      Given: a convex body K ⊆ ℝ^n, by a membership oracle.
      Want: the volume of K with relative error ε.
      • Not possible in polynomial time, even if ε = n^{cn} (Elekes; Bárány, Füredi).
      • Possible in randomized polynomial time, for arbitrarily small ε (Dyer, Frieze, Kannan).

  22. Complexity: for self-reducible problems, counting ⇔ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies.
      [Figure: sample points scattered around K; the number of sample points must be exponential in n.]

  23. Complexity: for self-reducible problems, counting ⇔ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies.
      [Figure: a chain of nested convex bodies; each successive volume ratio is estimated by sampling, by sampling, …]
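
The “by sampling, by sampling, …” picture corresponds to the standard multiphase Monte Carlo identity (notation mine):

```latex
% K_0 \subseteq K_1 \subseteq \dots \subseteq K_m = K, where K_0 has known
% volume (e.g. a ball) and consecutive ratios are bounded by a constant:
\[
  \operatorname{vol}(K) \;=\; \operatorname{vol}(K_0)
  \prod_{i=1}^{m} \frac{\operatorname{vol}(K_i)}{\operatorname{vol}(K_{i-1})},
\]
% each ratio being estimated by sampling nearly uniform random points
% from K_i and counting how many fall into K_{i-1}.
```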

  24. Complexity: for self-reducible problems, counting ⇔ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies.
      Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair). Enough to estimate the mixing rate of the random walk on the lattice points in K.
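
A minimal sketch of such a walk on the lattice points of a convex body given by a membership oracle. The step rule, the example body, and all names below are my own illustration, not the precise walk analyzed in these papers:

```python
import random

def lattice_walk(inside, start, steps, n):
    """Random walk on the integer lattice restricted to a convex body.

    `inside(x)` is the membership oracle; a proposed move that leaves
    the body is rejected and the walk stays put.
    """
    x = list(start)
    for _ in range(steps):
        i = random.randrange(n)        # pick a random coordinate
        d = random.choice((-1, 1))     # and a random direction
        x[i] += d
        if not inside(x):              # reject steps leaving the body
            x[i] -= d
    return x

# Example: walk inside the cube [-10, 10]^5 intersected with a ball of radius 12.
n = 5
def inside(x):
    return all(abs(c) <= 10 for c in x) and sum(c * c for c in x) <= 144

print(lattice_walk(inside, [0] * n, 10_000, n))
```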

  25. Complexity: for self-reducible problems, counting ⇔ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies.
      Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair). Enough to estimate the mixing rate of the random walk on the lattice points in K.
      • Probability: use the eigenvalue gap.
      • Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon; Jerrum-Sinclair).
      [Figure: the body K cut into two pieces K′ and K″ by a surface F, illustrating conductance.]
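
The link between conductance and the eigenvalue gap is the Cheeger-type bound of Sinclair and Jerrum (standard statement, not copied from the slides): for a reversible Markov chain with conductance Φ and second eigenvalue λ₂,

```latex
\[
  \frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\,\Phi ,
\]
% so a lower bound on the conductance lower-bounds the spectral gap,
% which in turn upper-bounds the mixing time of the walk.
```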

  26. Complexity: for self-reducible problems, counting ⇔ sampling (Jerrum-Valiant-Vazirani). Enough to sample from convex bodies.
      Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair). Enough to estimate the mixing rate of the random walk on the lattice points in K.
      • Probability: use the eigenvalue gap.
      • Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon; Jerrum-Sinclair). Enough to prove an isoperimetric inequality for subsets of K.
      • Differential geometry: isoperimetric inequality.
      Dyer-Frieze-Kannan 1989.

  27. • Statistics: better error handling (Dyer-Frieze 1993)
      • Differential equations: bounds on the Poincaré constant (Payne-Weinberger); bisection method, improved isoperimetric inequality (LL-Simonovits 1990)
      • Optimization: better preprocessing (LL-Simonovits 1995)
      • Functional analysis: isotropic position of convex bodies, achieving isotropic position (Kannan-LL-Simonovits 1998)
      • Log-concave functions: reduction to integration (Applegate-Kannan 1992)
      • Convex geometry: ball walk (LL 1992)

  28. • Geometry: projective (Hilbert) distance, affine invariant isoperimetric inequality, analysis of the hit-and-run walk (LL 1999)
      • Differential equations: log-Sobolev inequality, elimination of the “start penalty” for the lattice walk (Frieze-Kannan 1999); log-Cheeger inequality, elimination of the “start penalty” for the ball walk (Kannan-LL 1999)
      • Scientific computing: non-reversible chains mix better, lifting (Diaconis-Holmes-Neal; Feng-LL-Pak); walk with inertia (Aspnes-Kannan-LL)

  29. More and more tools from classical math
      • Linear algebra: eigenvalues, semidefinite optimization, higher incidence matrices, homology theory
      • Geometry: geometric representations, convexity
      • Analysis: generating functions, Fourier analysis, quantum computing
      • Number theory: cryptography
      • Topology, group theory, algebraic geometry, special functions, differential equations, …
