Highlights of the 90's: approximation algorithms, pseudorandom number generators, algorithms with stochastic input, new methods in probability, modeling very large structures, linear time algorithms, geometric representations of graphs.
Discrete mathematics: the last and next decade
László Lovász
Microsoft Research, One Microsoft Way, Redmond, WA 98052
lovasz@microsoft.com
Highlights of the 90's:
- Approximation algorithms: positive and negative results
- Discrete probability: Markov chains, high concentration, nibble methods, phase transitions
- Pseudorandom number generators: from art to science, theory and constructions
Approximation algorithms: the Max Cut Problem.
Given a graph G = (V, E), find a cut that maximizes the number of edges between the two sides, i.e. maximize e(S, V \ S) over all S ⊆ V.
NP-hard. ...Approximations?
Erdős ~'65: easy with 50% error. Better? ???
Hastad (via Arora-Lund-Motwani-Sudan-Szegedy '92, interactive proof systems, PCP): NP-hard with 6% error.
Goemans-Williamson '93 (semidefinite optimization): polynomial with 12% error.
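The 50% guarantee has a one-line probabilistic explanation: a uniformly random partition cuts each edge with probability 1/2. Below is a minimal sketch of that randomized cut (my own illustration, not code from the talk; the function name and the toy 5-cycle are invented for the example).

```python
import random

def random_cut(n, edges, seed=None):
    """Assign each vertex to one of two sides uniformly at random.

    Each edge is cut with probability 1/2, so the expected cut size is
    |E|/2: the 'easy 50% error' guarantee mentioned on the slide.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# Toy usage: a 5-cycle has max cut 4; the random cut averages 2.5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(random_cut(5, edges, seed=1))
```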
Discrete probability:
- random structures
- randomized algorithms
- algorithms on random input
- statistical mechanics, phase transitions
- high concentration
- pseudorandom numbers
Algorithms and probability
- Randomized algorithms (making coin flips): important applications (primality testing, integration, optimization, volume computation, simulation); difficult to analyze. (A primality-testing sketch follows below.)
- Algorithms with stochastic input: even more important applications; even more difficult to analyze.
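As a concrete instance of a coin-flipping algorithm with an important application (primality testing, listed above), here is a minimal Miller-Rabin sketch. It is my own illustration, not part of the talk, and the function name and parameters are invented; on composite input it errs with probability at most 4^(-rounds).

```python
import random

def is_probable_prime(n, rounds=20, seed=None):
    """Miller-Rabin test: a randomized algorithm that may wrongly declare
    a composite number 'prime' with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness that n is composite
    return True                   # probably prime

print(is_probable_prime(2**89 - 1))   # a Mersenne prime -> True
```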
Difficulty: after a few iterations, complicated functions of the original random variables arise.
New methods in probability: strong concentration (Talagrand).
Laws of Large Numbers: sums of independent random variables are strongly concentrated.
General strong concentration: very general "smooth" functions of independent random variables are strongly concentrated.
Nibble, martingales, rapidly mixing Markov chains, …
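One standard way to make the "smooth functions of independent random variables" statement precise (added here for reference; the slide does not state it) is McDiarmid's bounded-differences inequality:

```latex
% McDiarmid's bounded-differences inequality: if changing any single
% coordinate changes f by at most c_i, and X_1,...,X_n are independent, then
\Pr\bigl[\,|f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n)| \ge t\,\bigr]
\;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right).
```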
Example (was open for 30 years): Kim-Vu.
Want: a set of vectors such that:
- few vectors
- any 3 linearly independent
- every vector is a linear combination of 2 of them
Theorem (Kim-Vu): Every finite projective plane of order q has a complete arc of size √q · polylog(q).
First idea: use algebraic constructions (conics, …); gives only about q.
Second idea: choose at random (a toy greedy sketch follows below). ?????
Solution: Rödl nibble + strong concentration results.
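A toy rendering of the "choose at random" idea (my own sketch, not the Kim-Vu argument: the actual proof needs the Rödl nibble plus the concentration results above to stop while the arc is still small, whereas this naive random greedy simply builds some complete arc). The function names and the choice p = 11 are invented for the example.

```python
import random
from itertools import combinations

def pg_points(p):
    """Points of the projective plane PG(2, p), p prime, as normalized
    coordinate vectors over GF(p)."""
    pts = [(1, y, z) for y in range(p) for z in range(p)]
    pts += [(0, 1, z) for z in range(p)]
    pts += [(0, 0, 1)]
    return pts

def collinear(a, b, c, p):
    """Three points are collinear iff their coordinate vectors are linearly
    dependent, i.e. the 3x3 determinant vanishes mod p."""
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det % p == 0

def random_greedy_arc(p, seed=None):
    """Insert points in random order, keeping a point only if no three
    chosen points become collinear ('any 3 linearly independent').
    The result is a complete arc: every rejected point lies on a line
    through two chosen points."""
    pts = pg_points(p)
    random.Random(seed).shuffle(pts)
    arc = []
    for x in pts:
        if all(not collinear(x, a, b, p) for a, b in combinations(arc, 2)):
            arc.append(x)
    return arc

print(len(random_greedy_arc(11, seed=0)))   # size of a complete arc in PG(2, 11)
```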
Driving forces for the next decade:
- New areas of applications
- The study of very large structures
- More tools from classical areas in mathematics
- More applications in classical areas?!
New areas of application
- Biology: genetic code, population dynamics, protein folding
- Physics: elementary particles, quarks, etc. (Feynman graphs); statistical mechanics (graph theory, discrete probability)
- Economics: indivisibilities (integer programming, game theory)
- Computing: algorithms, complexity, databases, networks, VLSI, ...
Very large structures
- internet, VLSI, databases
- genetic code, brain, animal, ecosystem, economy, society
How to model these? Non-constant but stable; partly random.
Very large structures: how to model them?
Graph minors (Robertson, Seymour, Thomas): If a graph does not contain a given minor, then it is essentially a 1-dimensional structure (a tree-decomposition) of 2-dimensional pieces (graphs embeddable in a fixed surface), up to a bounded number of additional nodes and except for "fringes" of bounded depth.
Very large structures: how to model them?
Regularity Lemma (Szemerédi): The nodes of every graph can be partitioned into a bounded number of essentially equal parts so that almost all bipartite graphs between 2 parts are essentially random (with different densities).
- given ε > 0 and k > 1, the number of parts is between k and f(k, ε)
- the part sizes differ by at most 1
- with at most εk² exceptional pairs of parts
- for subsets X, Y of the two parts, the number of edges between X and Y is p|X||Y| ± εn²
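For reference, the ε-regularity condition behind "essentially random" can also be stated as follows (one common formulation in terms of densities, added here; the slide's version uses the absolute error εn²):

```latex
% A pair (A, B) of disjoint vertex sets is \varepsilon-regular if for all
% X \subseteq A,\ Y \subseteq B with |X| \ge \varepsilon|A|,\ |Y| \ge \varepsilon|B|:
\bigl| d(X, Y) - d(A, B) \bigr| \;\le\; \varepsilon,
\qquad \text{where } d(X, Y) = \frac{e(X, Y)}{|X|\,|Y|}.
```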
Very large structures
- internet, VLSI, databases, genetic code, brain, animal, ecosystem, economy, society
How to model these? How to handle them algorithmically?
- heuristics / approximation algorithms
- linear time algorithms
- sublinear time algorithms (sampling; a toy sketch follows below)
A complexity theory of linear time?
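A tiny illustration of sublinear-time sampling (my own sketch, not from the talk): estimate the edge density of a huge graph from an adjacency oracle by querying random pairs. The function name, the oracle, and the sample size are invented; the accuracy guarantee follows from the concentration bounds discussed earlier.

```python
import random

def estimate_density(n, adjacent, samples=10_000, seed=None):
    """Estimate the edge density of an n-vertex graph given only an
    adjacency oracle, by sampling random vertex pairs.  The running time
    is independent of the size of the graph (sublinear)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        u, v = rng.sample(range(n), 2)
        hits += adjacent(u, v)
    return hits / samples

# Toy oracle: a graph on 10**6 vertices with an edge whenever u and v have
# different parity, so the true density is about 1/2.
adj = lambda u, v: (u + v) % 2 == 1
print(estimate_density(10**6, adj, seed=42))
```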
More and more tools from classical math
- Linear algebra: eigenvalues, semidefinite optimization, higher incidence matrices, homology theory
- Geometry: geometric representations of graphs, convexity
- Analysis: generating functions, Fourier analysis, quantum computing
- Number theory: cryptography
- Topology, group theory, algebraic geometry, special functions, differential equations, …
Example 1: Geometric representations of graphs
Steinitz: Every 3-connected planar graph is the skeleton of a polytope.
Coin representation (Koebe 1936): Every planar graph can be represented by touching circles.
Polyhedral version ("Cage Representation", Andreev): Every 3-connected planar graph is the skeleton of a convex polytope such that every edge touches the unit sphere.
From polyhedra to circles [figure; label: "horizon"]
Cage representation ~ Riemann Mapping Theorem (Koebe, Sullivan)
The Colin de Verdière number
G: connected graph. Roughly: μ(G) = multiplicity of the second largest eigenvalue of the adjacency matrix. (The largest eigenvalue has multiplicity 1.)
But: maximize over weightings of the edges and diagonal entries (subject to a non-degeneracy condition on the weightings).
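For reference, the usual formal definition behind this rough version (one standard formulation, added here): μ(G) is the maximum corank over Colin de Verdière matrices M for G.

```latex
% M is a Colin de Verdiere matrix for G = (V, E) if:
%   (i)   M_{ij} < 0 for ij \in E,  M_{ij} = 0 for ij \notin E,\ i \ne j
%         (the diagonal entries are arbitrary);
%   (ii)  M has exactly one negative eigenvalue, of multiplicity 1;
%   (iii) Strong Arnold Property: there is no symmetric X \ne 0 with
%         MX = 0 and X_{ij} = 0 whenever i = j or ij \in E.
\mu(G) \;=\; \max \{\, \operatorname{corank}(M) : M \text{ a Colin de Verdière matrix for } G \,\}.
```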
μ(G) ≤ 3 ⟺ G is planar (Colin de Verdière, using PDEs; Van der Holst, elementary proof); μ(G) = 3 if G is 3-connected (planar).
Nullspace representation: may assume the second largest eigenvalue is 0; a basis of the nullspace of M gives a representation of G in R³.
G 3-connected planar ⟹ the nullspace representation gives a planar embedding in S² (LL).
The vectors can be rescaled so that we get a Steinitz representation (LL-Schrijver).
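A toy numerical illustration of the nullspace representation (my own example, not from the slides), using the rough unweighted version above: for the octahedron K_{2,2,2}, the adjacency matrix already has second largest eigenvalue 0 with multiplicity 3, and a basis of that eigenspace, read row by row, places the six vertices at the corners of a regular octahedron, i.e. a Steinitz representation.

```python
import numpy as np

# Octahedron = K_{2,2,2}: vertices i and j are adjacent unless they are
# "antipodal" partners (0-1, 2-3, 4-5).
A = np.ones((6, 6)) - np.eye(6)
for i in range(0, 6, 2):
    A[i, i + 1] = A[i + 1, i] = 0

vals, vecs = np.linalg.eigh(A)          # eigenvalues: -2, -2, 0, 0, 0, 4
null = vecs[:, np.isclose(vals, 0)]     # eigenspace of the 2nd largest eigenvalue (= 0)

# Rows of `null` are 3D coordinates for the vertices: a rotated, scaled
# regular octahedron, with antipodal partners mapped to opposite points.
print(np.round(null, 3))
print(np.allclose(null[0::2] + null[1::2], 0))   # partners are opposite -> True
```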
Nullspace representation from the CdV matrix ~ eigenfunctions of the Laplacian
Cage representation ~ Riemann Mapping Theorem (Koebe, Sullivan)
Example 2: Volume computation
Given: a convex body K ⊆ Rⁿ, by a membership oracle. Want: the volume of K, with relative error ε.
Not possible in deterministic polynomial time, even if ε = n^{cn}.
Possible in randomized polynomial time, for arbitrarily small ε.
Dyer-Frieze-Kannan 1989:
- Complexity: for self-reducible problems, counting reduces to sampling (Jerrum-Valiant-Vazirani); so it is enough to sample from convex bodies.
- Algorithmic results: use rapidly mixing Markov chains (Broder; Jerrum-Sinclair); enough to estimate the mixing rate of the random walk on a lattice in K.
- Classical probability: use the eigenvalue gap.
- Graph theory (expanders): use conductance to estimate the eigenvalue gap (Alon, Jerrum-Sinclair; see the inequality below); enough to prove an isoperimetric inequality for subsets of K.
- Differential geometry: isoperimetric inequality.
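The conductance-to-eigenvalue-gap step can be quantified by the Cheeger-type inequality for reversible Markov chains (Jerrum-Sinclair; one standard form, added here for reference), with λ₂ the second largest eigenvalue and Φ the conductance:

```latex
\frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\,\Phi .
```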
LL-Simonovits 1990:
- Use conductance to estimate the mixing rate (Jerrum-Sinclair); enough to prove an isoperimetric inequality for subsets of K.
- Differential geometry: properties of the minimal cutting surface; isoperimetric inequality.
- Differential equations: bounds on the Poincaré constant (Payne-Weinberger); bisection method, improved isoperimetric inequality.
Applegate-Kannan 1992: log-concave functions, reduction to integration.
LL 1992: Brunn-Minkowski Theorem, ball walk (a toy sketch follows below).
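A minimal sketch of the ball walk itself (my own toy code, not the analyzed algorithm: the real method needs the isoperimetry and mixing results cited on these slides to choose the step size and number of steps, plus a further reduction to turn samples into a volume estimate). The function name, parameters, and the unit-cube oracle are invented for the example.

```python
import math
import random

def ball_walk(oracle, x0, delta=0.1, steps=10_000, seed=None):
    """Ball walk for sampling from a convex body K given by a membership
    oracle: propose a uniform point in the ball of radius delta around the
    current point; accept if it lies in K, otherwise stay put."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # Uniform point in the delta-ball: Gaussian direction, radius ~ delta * U^(1/n).
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(t * t for t in g))
        r = delta * rng.random() ** (1.0 / n)
        y = [xi + r * gi / norm for xi, gi in zip(x, g)]
        if oracle(y):
            x = y
    return x

# Toy usage: K = unit cube [0,1]^5, started at its center.
in_cube = lambda y: all(0.0 <= t <= 1.0 for t in y)
print([round(t, 3) for t in ball_walk(in_cube, [0.5] * 5, seed=7)])
```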
- Log-concave functions: reduction to integration (Applegate-Kannan 1992)
- Convex geometry: ball walk (LL 1992)
- Statistics: better error handling (Dyer-Frieze 1993)
- Optimization: better preprocessing (LL-Simonovits 1995)
- Functional analysis: isotropic position of convex bodies, achieving isotropic position (Kannan-LL-Simonovits 1998)
- Geometry: projective (Hilbert) distance, affine invariant isoperimetric inequality; analysis of the hit-and-run walk (LL 1999)
- Differential equations: log-Sobolev inequality; elimination of the "start penalty" for the lattice walk (Frieze-Kannan 1999)
- log-Cheeger inequality; elimination of the "start penalty" for the ball walk (Kannan-LL 1999)
History: earlier highlights
- 60's: polyhedral combinatorics, polynomial time, random graphs, extremal graph theory, matroids
- 70's: 4-Color Theorem, NP-completeness, hypergraph theory, Szemerédi Lemma
- 80's: graph minor theory, cryptography
Highlights of the last 4 decades
- New applications: physics, biology, computing, economics
- Main trends in discrete math:
  - Very large structures
  - More and more applications of methods from classical math
  - Discrete probability
Optimization: discrete → linear → semidefinite → ?