This presentation covers tight lower bounds and algorithms for the approximate Carathéodory problem, together with its applications and extensions to other domains.
Tight Bounds for Approximate Carathéodory and Beyond
Vahab Mirrokni (Google), Renato Paes Leme (Google), Adrian Vladu (MIT), Sam Chiu-wai Wong (UC Berkeley)
Exact Carathéodory Theorem: given a collection of points X ⊆ ℝ^d and a point u in the convex hull conv(X), u is a convex combination of d+1 points of X.
Approximate Carathéodory Theorem: given a collection of points X ⊆ ℝ^d with ‖x‖p ≤ 1 for all x ∈ X and a point u in the convex hull conv(X), then for every p ≥ 2 and ϵ > 0 there is a point u' in the convex hull of O(p/ϵ²) points of X with ‖u − u'‖p ≤ ϵ.
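A small worked instance of the exact theorem may help fix the notation (this example is ours, not from the slides): in ℝ², d+1 = 3 points always suffice, even when u is given as a combination of more points.

```latex
% Worked example (ours, not from the slides): u = (1/4, 1/4) lies in the
% convex hull of the four square vertices, but d + 1 = 3 of them suffice:
\[
u \;=\; \tfrac{1}{2}\,(0,0) \;+\; \tfrac{1}{4}\,(1,0) \;+\; \tfrac{1}{4}\,(0,1)
  \;=\; \left(\tfrac{1}{4}, \tfrac{1}{4}\right),
\]
% a convex combination: the coefficients are nonnegative and sum to 1.
```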
Brief History of Approximate Carathéodory
• [Barman, STOC'15]: for every u ∈ conv(X) there exist k = O(p/ϵ²) points x1, …, xk ∈ X such that ‖u − (1/k)∑ xi‖p ≤ ϵ
• Application: ϵ-nets for conv(X): there is a set of n^O(p/ϵ²) points that approximates conv(X) w.r.t. the ℓp-norm
• Bilinear programs: programs of the type max over x ∈ Δm, y ∈ Δn of xᵀAy can be solved by enumerating over the possible values of y in such an ϵ-net
• PTAS for Nash equilibrium in s-sparse bi-matrix games in n^O(log s/ϵ²) time
• additive PTAS for the k-densest subgraph problem for bounded-degree graphs
• applications in combinatorics
• lower bound of Ω(1/ϵ²), so tight for p = 2
This paper:
• A deterministic algorithm: O(p/ϵ²) iterations of mirror descent
• A lower bound showing that Ω(p/ϵ²) sparsity is necessary for the ℓp norm
Brief History of Approximate Carathéodory
• [Barman, STOC'15]: there exist O(p/ϵ²) points x1, …, xk ∈ X such that ‖u − (1/k)∑ xi‖p ≤ ϵ
• [Maurey, 1980]: functional analysis
• [Shalev-Shwartz, Srebro and Zhang, 2010]: sparsity/accuracy tradeoffs in linear regression
• [Novikoff, 1962]: analysis of the perceptron algorithm implies an ℓ2 version
Sparsification via Sampling
Plan #1: (1) solve the exact problem; (2) sample from it.
(Slide diagram: exact solution → approximate optimal solution.)
Approximate Carathéodory from Sampling
• Solve the exact Carathéodory problem: u = ∑ λi xi for λ ∈ Δn
• Sample k vectors i.i.d. from X with probabilities given by λ
• Use concentration bounds to argue that the empirical mean (1/k)∑ x_{ij} is close to u. More precisely, use Khintchine's inequality to show that ‖u − (1/k)∑ x_{ij}‖p ≤ ϵ when k = O(p/ϵ²) is sufficiently large (see the sketch below)
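A minimal Python sketch of this sampling plan, under our own assumptions: the exact convex weights lam are already available (e.g. from a linear program), and the constant 4 in the sparsity budget is illustrative rather than the paper's.

```python
import numpy as np

def sparsify_by_sampling(X, lam, eps, p, rng):
    """Sample k = O(p / eps^2) columns of X (one point per column) i.i.d.
    according to the exact convex weights lam, and return the uniform
    average of the samples as a sparse approximation of u = X @ lam.
    The constant in k is a guess for illustration, not the paper's."""
    n = X.shape[1]
    k = int(np.ceil(4 * p / eps**2))      # sparsity budget O(p / eps^2)
    idx = rng.choice(n, size=k, p=lam)    # i.i.d. sampling from lam
    return X[:, idx].mean(axis=1), idx

# Toy usage: points on the unit l_p ball, u a known convex combination.
d, n, p, eps = 50, 200, 2, 0.3
rng = np.random.default_rng(1)
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, ord=p, axis=0)     # normalize columns to ||x||_p = 1
lam = rng.dirichlet(np.ones(n))           # exact convex weights
u = X @ lam
u_hat, idx = sparsify_by_sampling(X, lam, eps, p, rng)
print(len(set(idx)), np.linalg.norm(u - u_hat, ord=p))  # support size, l_p error
```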
Sparsification via Optimization
Plan #1: (1) solve the exact problem; (2) sample from it. (Exact solution → approximate optimal solution.)
Plan #2: (1) write as a convex optimization problem; (2) perform k steps of an iterative method.
Approximate Carathéodory from Optimization
• There is a natural convex function to minimize: min over λ ∈ Δn of ‖Aλ − u‖p, where the columns of A are the points of X
Approximate Carathéodory from Optimization
• Move to the dual (derivation spelled out below):
• Primal formulation: min over λ ∈ Δn of ‖Aλ − u‖p
• Saddle point formulation: min over λ ∈ Δn, max over ‖y‖q ≤ 1 of yᵀ(Aλ − u), where q is the Hölder conjugate of p; Sion's theorem lets us swap min and max
• Dual formulation: max over ‖y‖q ≤ 1, min over λ ∈ Δn of yᵀ(Aλ − u)
• Subgradients are now very nice: the inner minimum is attained at a vertex of Δn, so a subgradient at y is x_{i*} − u with i* = argmin_i yᵀxi (Envelope Theorem: if g(y) = min_i gi(y) then ∇g_{i*}(y) is a subgradient of g at y for i* = argmin_i gi(y))
• After finding the subgradient, we must obtain a new point from the primal space
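The full derivation chain in one place, reconstructed from the definitions above (our write-up; q is the Hölder conjugate of p):

```latex
% Reconstructed derivation (notation as above; 1/p + 1/q = 1).
\begin{align*}
\min_{\lambda \in \Delta_n} \|A\lambda - u\|_p
  &= \min_{\lambda \in \Delta_n} \max_{\|y\|_q \le 1} y^\top (A\lambda - u)
     && \text{(dual-norm characterization of } \|\cdot\|_p\text{)} \\
  &= \max_{\|y\|_q \le 1} \min_{\lambda \in \Delta_n} y^\top (A\lambda - u)
     && \text{(Sion's minimax theorem)} \\
  &= \max_{\|y\|_q \le 1} \Big( \min_i \, y^\top x_i \;-\; y^\top u \Big)
     && \text{(linear in } \lambda\text{: optimum at a vertex)}
\end{align*}
% A subgradient of the concave dual objective at y is x_{i^*} - u,
% where i^* = \arg\min_i y^\top x_i (envelope theorem).
```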
Approximate Carathéodory from Optimization
• In order to obtain sparsity, we pass to the dual (primal, saddle point, and dual formulations as above)
• Subgradients are now very nice: x_{i*} − u with i* = argmin_i yᵀxi, so each iteration touches a single point of X; computing this argmin is the most expensive step
• Solve the dual to accuracy ϵ
• #iterations depends on the choice of mirror map; in our case ω(y) = ½‖y‖q² yields O(p/ϵ²) iterations
• Construct the primal solution from the subgradient history
• This regime is equivalent to Frank-Wolfe for approximating u; we actually show how MD recovers the Frank-Wolfe iteration (see the sketch below)
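A compact sketch of the resulting iteration in Python: a plain Frank-Wolfe loop on the primal for the ℓ2 case, shown because the slides note MD recovers it. The ℓ2 objective and the standard step-size schedule are our simplifications, not the paper's tuned mirror-descent algorithm.

```python
import numpy as np

def approx_caratheodory_fw(X, u, k):
    """Frank-Wolfe-style sketch for the l_2 case: after k iterations the
    iterate is a convex combination of at most k+1 columns of X.
    Illustrative simplification, not the paper's mirror-descent method."""
    n = X.shape[1]
    v = X[:, 0].copy()                 # current iterate, a convex combination
    weights = np.zeros(n)
    weights[0] = 1.0
    for t in range(k):
        grad = v - u                   # gradient of (1/2) * ||v - u||_2^2
        i = int(np.argmin(grad @ X))   # linear minimization over the vertices
                                       # (the "most expensive step" above)
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        v = (1 - gamma) * v + gamma * X[:, i]
        weights *= (1 - gamma)         # bookkeeping: primal convex weights
        weights[i] += gamma
    return v, weights                  # at most k+1 nonzero weights

# Toy usage on hypothetical data (not from the paper).
d, n = 30, 100
rng = np.random.default_rng(2)
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)
u = X @ rng.dirichlet(np.ones(n))
v, w = approx_caratheodory_fw(X, u, k=50)
print(np.count_nonzero(w), np.linalg.norm(v - u))  # sparsity, l_2 error
```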
Tight Lower Bound
• Sparsity Ω(p/ϵ²) is required for ℓp norms, 2 ≤ p < ∞
• Based on an idea by [Klein, Young] to bound the # of iterations of Dantzig-Wolfe methods in linear programming
• Exhibit a hard instance and certify, for each subset of columns of size o(p/ϵ²), that its convex hull is far from u; i.e., provide an instance for which every such sparse convex combination is more than ϵ away from u in ℓp norm
There’s More • Are ℓp norms useful? If input points have k-nonzeros, we obtain better sparsity O(log k/ϵ2) • Method extends to arbitrary domains, such as Birkhoff-von Neumann polytope • If we aim for error ϵ in pointwise ℓp norm, we obtain a decomposition with sparsity O(pn2/p/ϵ2) • Same machinery applies to flow rounding, SVM, etc ≈ + .5 x .5 x .1 .4 .5 .5 .1 .4 .5 .1 .4