Strategy-Proof Classification
Reshef Meir
School of Computer Science and Engineering, Hebrew University
A joint work with Ariel D. Procaccia and Jeffrey S. Rosenschein
Strategy-Proof Classification • Introduction • Learning and Classification • An Example of Strategic Behavior • Motivation: • Decision Making • Machine Learning • Our Model • Some Results
Classification
The supervised classification problem:
• Input: a set of labeled data points {(xi, yi)}i=1..m
• Output: a classifier c from some predefined concept class C (functions of the form f : X → {-,+})
• We usually want c not merely to classify the sample correctly, but to generalize well, i.e. to minimize Risk(c) ≡ E(x,y)~D[ L(c(x) ≠ y) ], where D is the distribution from which the training data was sampled and L is some loss function
Classification (cont.)
• A common approach is to return the ERM, i.e. the concept in C that best fits the given samples (a.k.a. the training data)
• If finding the ERM is hard, try to approximate it
• Works well under some assumptions on the concept class C
Should we do the same with many experts?
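As a concrete illustration of the ERM approach, here is a minimal sketch (the function names and the toy concept class of one-dimensional thresholds are ours, not from the talk):

```python
# Sketch: empirical risk minimization over a small finite concept class
# of 1-D threshold classifiers. Illustrative names, not from the talk.

def empirical_risk(c, samples):
    """Fraction of labeled points (x, y) that classifier c gets wrong (0/1 loss)."""
    return sum(1 for x, y in samples if c(x) != y) / len(samples)

def make_threshold(t):
    """Classifier mapping x to '+' if x >= t, else '-'."""
    return lambda x: '+' if x >= t else '-'

def erm(concept_class, samples):
    """Return the concept in the class minimizing empirical risk on the samples."""
    return min(concept_class, key=lambda c: empirical_risk(c, samples))

samples = [(0.1, '-'), (0.3, '-'), (0.6, '+'), (0.9, '+')]
C = [make_threshold(t) for t in (0.0, 0.5, 1.0)]
best = erm(C, samples)   # the threshold at 0.5 separates the sample perfectly
```

With a single truthful dataset this is exactly the "best concept w.r.t. the given samples"; the question in the rest of the talk is what happens when the labels come from several self-interested reporters.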
Strategic labeling: an example
[Figure: the ERM classifier on the joint dataset makes 5 errors]
There is a better classifier! (for me…)
If I only change my labels… 2 + 4 = 6 errors
Decision making
• The ECB makes decisions based on reports from national banks
• National bankers gather positive/negative data from local institutions
• Each country reports to the ECB
• A yes/no decision is taken at the European level
• Bankers might misreport their data in order to sway the central decision
Machine learning (spam filter)
[Figure: managers label e-mails in Outlook → reported dataset with labels → classification algorithm → classifier (spam filter)]
Learning (cont.)
• Some e-mails may be considered spam by certain managers, and relevant by others
• A manager might misreport labels to bias the final classifier towards her point of view
A problem is characterized by
• An input space X
• A set of classifiers (a concept class) C; every classifier c ∈ C is a function c : X → {+,-}
• Optional assumptions and restrictions
• Example 1: all linear separators in Rn
• Example 2: all subsets of a finite set Q
A problem instance is defined by
• A set of agents I = {1,...,n}
• A partial dataset for each agent i ∈ I: Xi = {xi1,...,xi,m(i)} ⊆ X
• For each xik ∈ Xi, agent i has a label yik ∈ {-,+}
• Each pair sik = ⟨xik, yik⟩ is an example
• All examples of a single agent compose the labeled dataset Si = {si1,...,si,m(i)}
• The joint dataset S = ⟨S1, S2,…, Sn⟩ is our input
• m = |S|
• We denote the dataset with the reported labels by S’
Input: example
[Figure: three agents’ partial datasets X1, X2, X3 with label vectors Y1 ∈ {-,+}m1, Y2 ∈ {-,+}m2, Y3 ∈ {-,+}m3]
S = ⟨S1, S2,…, Sn⟩ = ⟨(X1,Y1),…, (Xn,Yn)⟩
Mechanisms
• A mechanism M receives a labeled dataset S’ and outputs c ∈ C
• Private risk of agent i: Ri(c, S) = |{k : c(xik) ≠ yik}| / mi
• Global risk: R(c, S) = |{(i,k) : c(xik) ≠ yik}| / m
• We allow non-deterministic mechanisms
• The outcome is then a random variable
• We measure the expected risk
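The two risk measures can be sketched directly from their definitions (illustrative code; the variable names are ours):

```python
# Sketch of the model's two risk measures: each agent i holds a labeled
# dataset S_i; a mechanism outputs a classifier c.

def private_risk(c, S_i):
    """R_i(c, S): fraction of agent i's own examples that c mislabels."""
    return sum(1 for x, y in S_i if c(x) != y) / len(S_i)

def global_risk(c, S):
    """R(c, S): fraction of all examples, pooled over agents, that c mislabels."""
    m = sum(len(S_i) for S_i in S)
    return sum(1 for S_i in S for x, y in S_i if c(x) != y) / m

S = [
    [(0.2, '-'), (0.4, '-')],               # agent 1: two negative examples
    [(0.6, '+'), (0.8, '+'), (0.3, '+')],   # agent 2: three positive examples
]
all_plus = lambda x: '+'
# private_risk(all_plus, S[0]) is 1.0; global_risk(all_plus, S) is 2/5 = 0.4
```

Note that both risks are evaluated against the *true* labels S, even when the mechanism only sees the reported labels S’.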
ERM
We compare the outcome of M to the ERM:
c* = ERM(S) = argminc∈C R(c, S)
r* = R(c*, S)
Can our mechanism simply compute and return the ERM?
Requirements
• Good approximation: ∀S, R(M(S), S) ≤ β∙r*
• Strategy-proofness: ∀i, S, Si‘: Ri(M(S), S) ≤ Ri(M(S-i, Si‘), S), i.e. no agent can lower its private risk by misreporting
• ERM(S) is 1-approximating but not SP
• ERM(S1) is SP but gives a bad approximation
Are there mechanisms that guarantee both SP and a good approximation?
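To see why returning the ERM is not SP, consider the class of the two constant classifiers. The instance below is a toy example of our own construction in which one agent benefits from flipping its reported labels:

```python
# Sketch: ERM over the two constant classifiers "all +" and "all -".
# The numeric instance is our own toy example, not from the talk.

def erm_constant(reports):
    """Return the constant label minimizing global empirical risk on the
    reported (possibly untruthful) labels."""
    labels = [y for S_i in reports for y in S_i]
    return '+' if labels.count('+') > labels.count('-') else '-'

def private_risk(label, true_S_i):
    """Agent's true private risk of the constant classifier `label`."""
    return sum(1 for y in true_S_i if y != label) / len(true_S_i)

truth = [['+', '+', '+', '-', '-'],   # agent 1: majority '+', prefers "all +"
         ['-', '-', '-', '-']]        # agent 2: all '-'

honest = erm_constant(truth)                # 3 '+' vs. 6 '-'  ->  '-'
lie = erm_constant([['+'] * 5, truth[1]])   # 5 '+' vs. 4 '-'  ->  '+'

# Agent 1's true risk drops from 3/5 (truthful) to 2/5 (after lying),
# so misreporting is beneficial: the ERM mechanism is not SP.
```

This is the same phenomenon as in the strategic-labeling figure earlier: a small lie by one agent moves the ERM, improving that agent's private risk while degrading the global risk.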
Suppose |C| = 2
• As in the ECB example
• There is a trivial deterministic SP 3-approximation mechanism
• Theorem: there is no deterministic SP α-approximation mechanism for any α < 3
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008
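The slide does not spell the trivial mechanism out, but one natural candidate in this spirit (a sketch under our own assumptions, not necessarily the paper's construction) collapses each agent's report to its majority label and then takes a vote weighted by dataset size. An agent's weight is fixed, and its report can only move its own vote toward the label it announces, so truthful reporting is a dominant strategy:

```python
# Sketch of a simple deterministic SP mechanism for C = {"all +", "all -"}.
# Each agent's report is collapsed to its majority label, so the only thing
# an agent controls is the direction of its own fixed-weight vote.

def weighted_majority_of_majorities(reports):
    """reports: list of label lists, one per agent. Returns '+' or '-'."""
    weight = {'+': 0, '-': 0}
    for S_i in reports:
        pos = sum(1 for y in S_i if y == '+')
        majority = '+' if 2 * pos >= len(S_i) else '-'
        weight[majority] += len(S_i)   # vote weight = agent's dataset size
    return '+' if weight['+'] >= weight['-'] else '-'

reports = [['+', '+', '-'],        # majority '+', weight 3
           ['-', '-', '-', '-'],   # majority '-', weight 4
           ['+', '+']]             # majority '+', weight 2
# '+' carries total weight 5 vs. 4, so the mechanism outputs "all +"
```

The approximation analysis, and the matching lower bound of 3 for any deterministic SP mechanism, are in the AAAI 2008 paper cited above.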
Proof
C = {“all positive”, “all negative”}
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008
Randomization comes to the rescue
• There is a randomized SP 2-approximation mechanism (when |C| = 2)
• The randomization is non-trivial
• Once again, no better SP mechanism exists
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008
Negative results
• Theorem: there are concept classes (including linear separators) for which no SP mechanism achieves a constant approximation
• Proof idea:
• We first construct a classification problem that is equivalent to a voting problem
• We then use impossibility results from social choice theory to show that there must be a dictator
R. Meir, A. D. Procaccia and J. S. Rosenschein, On the Power of Dictatorial Classification, in submission.
More positive results
• Suppose all agents control the same data points, i.e. X1 = X2 = … = Xn
• Theorem: selecting a dictator at random is SP and guarantees a 3-approximation
• True for any concept class C
• A 2-approximation when each Si is separable
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.
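The random-dictator mechanism itself is simple to sketch (illustrative code; the brute-force ERM subroutine over a finite concept class is our own simplification):

```python
import random

def empirical_risk(c, samples):
    """0/1 empirical risk of classifier c on labeled samples."""
    return sum(1 for x, y in samples if c(x) != y) / len(samples)

def random_dictator(S, concept_class, rng=random):
    """Pick one agent uniformly at random and return the ERM on that
    agent's data alone. All other agents' reports are ignored, and the
    dictator can do no better than reporting truthfully, so no agent
    gains by misreporting: the mechanism is SP."""
    dictator = rng.choice(S)
    return min(concept_class, key=lambda c: empirical_risk(c, dictator))

# Toy run with shared inputs X1 = X2 and the two constant classifiers
all_plus = lambda x: '+'
all_minus = lambda x: '-'
S = [[(0, '+'), (1, '+')],    # agent 1: all positive
     [(0, '-'), (1, '+')]]    # agent 2: mixed labels on the same points
c = random_dictator(S, [all_plus, all_minus])
```

Strategy-proofness here is immediate from the structure; the 3-approximation guarantee (in expectation, over the random choice of dictator) is what the theorem and the pairwise-distance proof idea on the next slide establish.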
Proof idea
The average pairwise distance between the green dots cannot be much larger than their average distance to the star
Generalization
• So far we have compared our results only to the ERM, i.e. to the data at hand
• We want learning algorithms that generalize well from sampled data
• …with minimal strategic bias
• Can we still ask for SP algorithms?
Generalization (cont.)
• There is a fixed distribution DX on X
• Each agent holds a private labeling function Yi : X → {+,-}
• Possibly non-deterministic
• The algorithm is allowed to sample from DX and ask agents for their labels
• We evaluate the result against the optimal expected risk, averaged over all agents, i.e. RD(c) = (1/n) Σi Ex~DX[ c(x) ≠ Yi(x) ]
Generalization (cont.)
[Figure: the distribution DX over X, and the agents’ private label functions Y1, Y2, Y3]
Generalization mechanisms
Our mechanism is used as follows:
• Sample m data points i.i.d. from DX
• Ask the agents for their labels
• Run the SP mechanism on the labeled data and return the result
Does it work? That depends on our game-theoretic and learning-theoretic assumptions
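The three steps above can be sketched end to end (an illustrative mock-up: the uniform distribution, the agents' label functions, and the inner SP mechanism are placeholder choices of ours):

```python
import random

def sample_and_classify(agents, sp_mechanism, m, rng):
    """Generalization wrapper: draw m points i.i.d. from D_X, query each
    agent's (reported) label for every point, then hand the labeled
    dataset to the SP mechanism."""
    xs = [rng.random() for _ in range(m)]          # D_X = uniform on [0, 1)
    reports = [[(x, label(x)) for x in xs] for label in agents]
    return sp_mechanism(reports)

def random_dictator(reports, rng=random):
    """Placeholder SP mechanism: a random agent's labels decide, and we
    return the constant classifier matching that agent's majority label."""
    S_i = rng.choice(reports)
    pos = sum(1 for _, y in S_i if y == '+')
    label = '+' if 2 * pos >= len(S_i) else '-'
    return lambda x: label

agents = [lambda x: '+' if x > 0.3 else '-',   # agent 1's private Y_1
          lambda x: '+']                       # agent 2 labels everything '+'
rng = random.Random(0)
c = sample_and_classify(agents, lambda r: random_dictator(r, rng), 20, rng)
```

Whether agents actually answer the label queries truthfully is exactly what the two assumptions on the next slides address.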
The “truthful approach”
• Assumption A: agents do not lie unless they gain at least ε
• Theorem: w.h.p. the following holds:
• There is no ε-beneficial lie
• The approximation ratio (if no one lies) is close to 3
• Corollary: with enough samples, the expected approximation ratio is close to 3
• The number of required samples is polynomial in n and 1/ε
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.
The “rational approach”
• Assumption B: agents always play a dominant strategy, if one exists
• Theorem: with enough samples, the expected approximation ratio is close to 3
• The number of required samples is polynomial in 1/ε (and does not depend on n)
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.
Previous and future work
• A study of SP mechanisms in regression learning 1
• There are no SP mechanisms for clustering 2
Future directions
• Other concept classes
• Other loss functions
• Alternative assumptions on the structure of the data
1 O. Dekel, F. Fischer and A. D. Procaccia, Incentive Compatible Regression Learning, SODA 2008
2 J. Perote-Peña and J. Perote, The Impossibility of Strategy-Proof Clustering, Economics Bulletin, 2003.