The Computational Complexity of Searching for Predictive Hypotheses Shai Ben-David, Computer Science Dept., Technion
Introduction The complexity of learning is measured mainly along two axes: information and computation. Information complexity enjoys a rich theory that yields rather crisp sample-size and convergence-rate guarantees. The focus of this talk is the computational complexity of learning. While it plays a critical role in any application, its theoretical understanding is far less satisfactory.
Outline of this Talk 1. Some background. 2. Survey of recent pessimistic hardness results. 3. New efficient learning algorithms for some basic learning architectures.
The Label Prediction Problem: formal definition with a running example. Given some domain set X (e.g., the data files of drivers), a sample S of labeled members of X is generated by some (unknown) distribution (drivers in the sample are labeled according to whether they filed an insurance claim). For a next point x, predict its label (will the current customer file a claim?).
The Agnostic Learning Paradigm Choose a hypothesis class H of subsets of X. For an input sample S, find some h in H that fits S well. For a new point x, predict a label according to its membership in h.
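To make the paradigm concrete, here is a minimal empirical-risk-minimization sketch in Python. It is illustrative only: the one-dimensional threshold class and the name erm_thresholds are my own choices, not part of the talk.

```python
import numpy as np

def erm_thresholds(xs, ys):
    """Return the threshold t maximizing agreement of h_t(x) = [x >= t] with ys."""
    best_t, best_agree = None, -1
    for t in np.unique(xs):                      # candidate hypotheses h_t in H
        agree = int(np.sum((xs >= t).astype(int) == ys))
        if agree > best_agree:
            best_t, best_agree = t, agree
    return best_t, best_agree / len(ys)

# Toy usage: the best threshold fits this sample perfectly.
xs = np.array([0.1, 0.4, 0.5, 0.8, 0.9])
ys = np.array([0, 0, 1, 1, 1])
print(erm_thresholds(xs, ys))                    # (0.5, 1.0)
```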
The Mathematical Justification If H is not too rich (has small VC-dimension) then, for every h in H, the agreement ratio of h on the sample S is a good estimate of its probability of success on a new x.
The Mathematical Justification - Formally If S is sampled i.i.d. by some D over X × {0,1}, then with probability > 1 − δ, for every h in H, the agreement ratio of h on S is within a small additive error of h's probability of success on a new point drawn from D, where the error term shrinks as |S| grows and grows with the VC-dimension of H.
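For concreteness, one standard form of this guarantee (a sketch; the exact constants and logarithmic factors vary across sources) reads: with probability at least 1 − δ over the i.i.d. draw of S from D,

```latex
\[
\forall h \in H:\qquad
\Bigl|\,
\underbrace{\tfrac{1}{|S|}\bigl|\{(x,y)\in S : h(x)=y\}\bigr|}_{\text{agreement ratio on } S}
\;-\;
\underbrace{\Pr_{(x,y)\sim D}[\,h(x)=y\,]}_{\text{probability of success}}
\,\Bigr|
\;\le\;
\sqrt{\frac{\mathrm{VCdim}(H)\,\ln|S| + \ln(2/\delta)}{|S|}}
\]
```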
The Model Selection Issue (diagram): within the class H, the output of the learning algorithm is compared to the best regressor for P; the gap between them decomposes into approximation error, estimation error, and computational error.
The Computational Problem Input: a finite set of {0,1}-labeled points S in R^n. Output: some 'hypothesis' h in H that maximizes the number of correctly classified points of S.
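A small helper makes the objective explicit for the half-space case; the name and the (w, b) parametrization are mine, introduced only for illustration.

```python
import numpy as np

def agreement(w, b, X, y):
    """Count how many labeled points the half-space {x : w.x + b >= 0} classifies correctly."""
    preds = (X @ w + b >= 0).astype(int)
    return int(np.sum(preds == y))
```

The computational problem is to maximize this count over all candidate hypotheses; the results below say that even approximating the maximum is NP-hard for many natural classes.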
Hardness-of-Approximation Results For each of the following classes, approximating the best agreement rate for h in H (on a given input sample S) up to some constant ratio is NP-hard: monomials; constant-width monotone monomials; half-spaces; balls; axis-aligned rectangles; threshold NN's with constant 1st-layer width [BD-Eiron-Long, Bartlett-BD].
The SVM Solution Rather than bothering with non-separable data, make the data separable by embedding it into some high-dimensional R^n.
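A toy sketch of the embedding idea (my own construction, not from the talk): one-dimensional data that is not linearly separable becomes separable after the quadratic feature map x -> (x, x^2).

```python
import numpy as np

X = np.array([-2.0, -1.0, 1.0, 2.0]).reshape(-1, 1)
y = np.array([1, 0, 0, 1])                 # label 1 iff |x| is large: not separable in R^1

Phi = np.hstack([X, X**2])                 # embed into R^2
w, b = np.array([0.0, 1.0]), -2.0          # the half-space  x^2 >= 2  separates the image
print((Phi @ w + b >= 0).astype(int))      # [1 0 0 1] -- matches y
```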
A Problem with the SVM method In "most" cases the data cannot be made separable unless the mapping is into dimension on the order of |X|. This happens even for classes of small VC-dimension. For "most" classes, no mapping under which concept-classified data becomes separable has large margins. In all of these cases generalization is lost!
Data-Dependent Success Note that the definition of success for agnostic learning is data-dependent: the success rate of the learner on S is compared to that of the best h in H. We extend this approach to a data-dependent success definition for approximations: the required success rate is a function of the input data.
A New Success Criterion A learning algorithm A is m-margin successful if, for every input S ⊆ R^n × {0,1}, |{(x,y) ∈ S : A(S)(x) = y}| ≥ |{(x,y) ∈ S : h(x) = y and d(h,x) > m}| for every half-space h.
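To unpack the right-hand side of the criterion, here is a small helper (hypothetical names, my own sketch) that counts the points a competing half-space h = (w, b) classifies correctly with distance greater than m from its boundary; an m-margin successful algorithm must match or beat this count with its own, unrestricted, agreement on S.

```python
import numpy as np

def margin_agreement(w, b, X, y, m):
    """|{(x,y) in S : h(x) = y and d(h,x) > m}| for the half-space w.x + b >= 0."""
    scores = X @ w + b
    preds = (scores >= 0).astype(int)
    dists = np.abs(scores) / np.linalg.norm(w)   # Euclidean distance to the hyperplane
    return int(np.sum((preds == y) & (dists > m)))
```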
Some Intuition If there exists some optimal h which separates with generous margins, then an m-margin algorithm must produce an optimal separator. On the other hand, if every good separator can be degraded by small perturbations, then an m-margin algorithm can settle for a hypothesis that is far from optimal.
A New Positive Result For every positive m, there is an efficient m-margin algorithm. That is, an algorithm that correctly classifies at least as many input points as any half-space can classify correctly with margin m.
The positive result: For every positive m, there is an m-margin algorithm whose running time is polynomial in |S| and n. A Complementing Hardness Result: Unless P = NP, no algorithm can do this in time polynomial in 1/m (and in |S| and n).
An m-margin Perceptron Algorithm On input S, consider all k-size sub-samples. For each such sub-sample, find its largest-margin separating hyperplane. Among all the (~|S|^k) resulting hyperplanes, choose the one with the best performance on S. (The choice of k is a function of the desired margin m; k ~ m^-2.)
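A sketch of this sub-sample scheme, under stated assumptions: I use scikit-learn's linear SVC with a large C as a stand-in for the "largest-margin separating hyperplane" step, and the function name is hypothetical; this is not the talk's exact procedure.

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def m_margin_by_subsamples(X, y, k):
    """Try every k-size sub-sample, fit a (near) max-margin separator on it,
    and return the candidate hyperplane that agrees with the most points of S."""
    best = (None, None, -1)                              # (w, b, agreement)
    for idx in combinations(range(len(X)), k):
        Xi, yi = X[list(idx)], y[list(idx)]
        if len(set(yi)) < 2:
            continue                                     # need both labels to fit a separator
        svm = SVC(kernel="linear", C=1e6).fit(Xi, yi)    # ~ hard-margin SVM on the sub-sample
        w, b = svm.coef_[0], svm.intercept_[0]
        agree = int(np.sum(((X @ w + b >= 0).astype(int)) == y))
        if agree > best[2]:
            best = (w, b, agree)
    return best
```

The outer loop is the source of the ~|S|^k factor in the running time: polynomial in |S| and n for any fixed margin m, but exponential in 1/m^2, in line with the hardness result above.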
Other m-margin Algorithms Each of the following can replace the "find the largest-margin separating hyperplane" step: the usual Perceptron Algorithm; "find a point of equal distance from x1, …, xk"; Phil Long's ROMMA algorithm. These are all very fast online algorithms.
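For reference, the classic Perceptron update mentioned as one fast replacement; a minimal sketch with my own choice of epoch count and {0,1} label convention.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Online Perceptron; returns (w, b) for the rule: predict 1 iff w.x + b >= 0."""
    w, b = np.zeros(X.shape[1]), 0.0
    s = 2 * y - 1                                # map labels {0,1} -> {-1,+1}
    for _ in range(epochs):
        for xi, si in zip(X, s):
            if si * (xi @ w + b) <= 0:           # misclassified (or on the boundary)
                w += si * xi                     # standard Perceptron update
                b += si
    return w, b
```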
Directions for Further Research Can similar efficient algorithms be derived for more complex NN architectures? How well do the new algorithms perform on real data sets? Can the ‘local approximation’ results be extended to more geometric functions?