
Incentive Compatible Regression Learning



  1. Incentive Compatible Regression Learning Ofer Dekel, Felix A. Fischer and Ariel D. Procaccia

  2. Lecture Outline • Until now: applications of learning to game theory. Now: merge the two. • The model: motivation and the learning game. • Three levels of generality: distributions that are degenerate at one point, uniform distributions over the samples, and the general setting.

  3. Motivation • An Internet search company wants to improve performance by learning a ranking function from examples. • The ranking function assigns a real value to every (query, answer) pair. • Experts are employed to evaluate examples. • Different experts may have different interests and different ideas of what a good output is. • Conflict → Manipulation → Bias in the training set.

  4. Jaguar vs. Panthera onca • [Figure: the example (query, answer) pair (“Jaguar”, jaguar.com).]

  5. Regression Learning • Input space X = R^k ((query, answer) pairs). • Function class F of functions X → R (ranking functions). • Target function o: X → R. • Distribution ρ over X. • Loss function ℓ(a,b): absolute loss ℓ(a,b) = |a−b|; squared loss ℓ(a,b) = (a−b)². • Learning process: given a training set S = {(x_i, o(x_i))}, i = 1,...,m, with the x_i sampled from ρ, and the risk R(h) = E_{x∼ρ}[ℓ(h(x), o(x))], find h ∈ F that minimizes R(h). (A toy ERM sketch follows below.)
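To make the definitions concrete, here is a minimal, hypothetical sketch (not from the slides): empirical risk on a sample, and ERM by grid search over homogeneous linear functions h_a(x) = a·x. The target o(x) = 2x, the noise level, and all names are illustrative assumptions.

```python
import numpy as np

def empirical_risk(h, S, loss):
    """R'(h, S) = (1/|S|) * sum of loss(h(x), y) over the sample S."""
    return float(np.mean([loss(h(x), y) for x, y in S]))

abs_loss = lambda a, b: abs(a - b)       # absolute loss l(a,b) = |a-b|
sq_loss = lambda a, b: (a - b) ** 2      # squared loss l(a,b) = (a-b)^2

# Illustrative target o(x) = 2x, sampled with a little noise.
rng = np.random.default_rng(0)
S = [(x, 2 * x + rng.normal(0, 0.1)) for x in rng.uniform(0, 1, size=50)]

# ERM over homogeneous linear functions h_a(x) = a*x, by grid search.
grid = np.linspace(0.0, 4.0, 401)
best_a = min(grid, key=lambda a: empirical_risk(lambda x: a * x, S, sq_loss))
print(best_a)  # close to the true slope 2
```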

  6. Our Setting • Input space X = R^k ((query, answer) pairs). • Function class F (ranking functions). • Set of players N = {1,...,n} (experts). • Target functions o_i: X → R, one per player. • Distributions ρ_i over X, one per player. • Training set?

  7. The Learning Game • Player i controls the points x_ij, j = 1,...,m, sampled w.r.t. ρ_i (common knowledge). • Private information of i: o_i(x_ij) = y_ij, j = 1,...,m. • Strategies of i: reported labels y'_ij, j = 1,...,m. • h is obtained by learning on S = {(x_ij, y'_ij)}. • Cost of i: R_i(h) = E_{x∼ρ_i}[ℓ(h(x), o_i(x))]. • Goal: social welfare, i.e. minimize the average player's cost.

  8. Example: The learning game with ERM • Parameters: X = R, F = constant functions, ℓ(a,b) = |a−b|, N = {1,2}, o_1(x) = 1, o_2(x) = 2, ρ_1 = ρ_2 = uniform distribution on [0,1000]. • Learning algorithm: Empirical Risk Minimization (ERM), i.e. minimize R'(h,S) = (1/|S|) Σ_{(x,y)∈S} ℓ(h(x), y). • [Figure: the two constant target functions at heights 1 and 2; a worked sketch follows below.]
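Under absolute loss, ERM over constant functions returns a median of the reported labels. A minimal sketch of slide 8's instance (tie-breaking at the midpoint of the median interval is my assumption):

```python
import numpy as np

def erm_constant_abs(labels):
    """ERM over constant functions under absolute loss: the empirical
    risk (1/|S|) * sum |h - y| is minimized by any median of the labels;
    ties are broken at the midpoint of the median interval."""
    s = np.sort(np.asarray(labels, dtype=float))
    m = len(s)
    return s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2

# Slide 8's instance: player 1 labels all their points 1, player 2 labels all 2.
m = 4
reports = [1.0] * m + [2.0] * m
h = erm_constant_abs(reports)
print(h)  # 1.5 under midpoint tie-breaking; each player's risk is then 0.5
```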

  9. Degenerate Distributions: ERM with abs. loss • The Game: players N = {1,...,n}; ρ_i is degenerate at a single point x_i; player i controls x_i; private information of i: o_i(x_i) = y_i; strategies of i: a reported label y'_i; cost of i: R_i(h) = ℓ(h(x_i), y_i). • Theorem: if ℓ is the absolute loss and F is convex, then ERM is group incentive compatible. (A brute-force check on a toy instance follows below.)
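A hypothetical brute-force check of the theorem on a three-player toy instance (constant functions, one point per player); the instance and the misreport grid are my own, and only unilateral deviations are checked here, while the theorem is stronger (it covers coalitions too):

```python
import numpy as np

def erm_constant_abs(labels):
    """Median of the labels = ERM over constants under absolute loss."""
    s = np.sort(np.asarray(labels, dtype=float))
    m = len(s)
    return s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2

true_y = [0.2, 0.5, 0.9]            # one true label per player
h_truth = erm_constant_abs(true_y)  # 0.5

# No unilateral misreport brings the fitted constant closer to a player's label.
for i in range(len(true_y)):
    for lie in np.linspace(-2.0, 2.0, 81):
        reports = list(true_y)
        reports[i] = lie
        h = erm_constant_abs(reports)
        assert abs(h - true_y[i]) >= abs(h_truth - true_y[i]) - 1e-12
print("no profitable unilateral misreport found")
```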

  10. ERM with superlinear loss • Theorem: if ℓ is “superlinear”, F is convex, |F| ≥ 2, and F is not “full” on x_1,...,x_n, then there exist y_1,...,y_n such that some player has an incentive to lie. • Example: X = R, F = constant functions, ℓ(a,b) = (a−b)², N = {1,2} (see the sketch below).
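A minimal numeric sketch of the two-player squared-loss example (the concrete labels are my assumption): under squared loss, ERM over constants returns the mean of the labels, so a player can drag the mean toward their own value by exaggerating.

```python
import numpy as np

def erm_constant_sq(labels):
    """ERM over constants under squared loss: the mean of the labels."""
    return float(np.mean(labels))

y1, y2 = 0.0, 1.0                    # true labels of players 1 and 2

h_truth = erm_constant_sq([y1, y2])  # 0.5
cost1_truth = (h_truth - y1) ** 2    # 0.25

h_lie = erm_constant_sq([-1.0, y2])  # player 1 exaggerates: mean becomes 0.0
cost1_lie = (h_lie - y1) ** 2        # 0.0 < 0.25, so lying pays
print(cost1_truth, cost1_lie)
```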

  11. Uniform dist. over samples • The Game: players N = {1,...,n}; ρ_i is the discrete uniform distribution on {x_i1,...,x_im}; player i controls x_ij, j = 1,...,m; private information of i: o_i(x_ij) = y_ij; strategies of i: reported labels y'_ij, j = 1,...,m; cost of i: R_i(h) = R'_i(h,S) = (1/m) Σ_j ℓ(h(x_ij), y_ij).

  12. ERM with abs. loss is not IC • [Figure: a counterexample with labels at 0 and 1; a numeric sketch follows below.]
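A minimal numeric sketch (the instance is mine, not the slide's) of why ERM with absolute loss fails to be IC once each player holds several samples: a player whose points straddle the global median can pull the fitted constant toward their own optimum by reporting extreme labels.

```python
import numpy as np

def erm_constant_abs(labels):
    """Median of the labels (midpoint tie-breaking)."""
    s = np.sort(np.asarray(labels, dtype=float))
    m = len(s)
    return s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2

def cost(h, true_labels):
    """Player's cost: average absolute loss on their own points."""
    return float(np.mean(np.abs(h - np.asarray(true_labels))))

y1 = [0.0, 0.0, 0.0]   # player 1's true labels
y2 = [0.4, 1.0, 1.0]   # player 2's true labels

h_truth = erm_constant_abs(y1 + y2)             # 0.2
h_lie = erm_constant_abs(y1 + [1.0, 1.0, 1.0])  # 0.5 if player 2 reports all 1s

print(cost(h_truth, y2))  # 0.6
print(cost(h_lie, y2))    # ~0.37: misreporting strictly helps player 2
```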

  13. VCG to the Rescue • Use ERM. • Each player i additionally pays Σ_{j≠i} R'_j(h,S). • Each player's total cost is then R'_i(h,S) + Σ_{j≠i} R'_j(h,S) = Σ_j R'_j(h,S), exactly the objective ERM minimizes, so truthful reporting is optimal for any loss function. • But VCG has many faults: it is not group incentive compatible, and payments are problematic in practice. • We would like (group) IC mechanisms without payments. (A payment sketch follows below.)
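A minimal sketch of the payment scheme for the constant-function, absolute-loss case (the instance and the midpoint tie-breaking are carried over from the sketches above):

```python
import numpy as np

def erm_constant_abs(labels):
    s = np.sort(np.asarray(labels, dtype=float))
    m = len(s)
    return s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2

def emp_risk(h, labels):
    return float(np.mean(np.abs(h - np.asarray(labels))))

def vcg(reports):
    """Run ERM on the pooled reports; player i pays the sum of the other
    players' empirical risks, so i's total cost is sum_j R'_j(h, S) --
    the very quantity ERM minimizes, which aligns i with the mechanism."""
    h = erm_constant_abs([y for ys in reports for y in ys])
    payments = [sum(emp_risk(h, ys) for j, ys in enumerate(reports) if j != i)
                for i in range(len(reports))]
    return h, payments

h, pay = vcg([[0.0, 0.0, 0.0], [0.4, 1.0, 1.0]])
print(h, pay)  # h = 0.2; player 1 pays 0.6, player 2 pays 0.2
```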

  14. Mechanisms w/o Payments • Absolute loss. • An α-approximation mechanism gives an α-approximation of the optimal social welfare. • Theorem (upper bound): there exists a group IC 3-approximation mechanism for constant functions over R^k and homogeneous linear functions over R. • Theorem (lower bound): there is no IC (3−ε)-approximation mechanism for constant / homogeneous linear functions over R^k. • Conjecture: there is no IC mechanism with bounded approximation ratio for homogeneous linear functions over R^k, k ≥ 2.
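One payment-free mechanism in this spirit, for constant functions: fit each player separately, then output a median of the per-player fits. Whether this matches the paper's exact 3-approximation mechanism is an assumption on my part; the sketch only illustrates why a median rule resists manipulation (each player's cost is convex, hence single-peaked, in h, and median rules over single-peaked preferences are group strategyproof).

```python
import numpy as np

def med(vals):
    """Median with midpoint tie-breaking."""
    s = np.sort(np.asarray(vals, dtype=float))
    m = len(s)
    return s[m // 2] if m % 2 else (s[m // 2 - 1] + s[m // 2]) / 2

def median_of_fits(reports):
    """Payment-free mechanism sketch: each player's individually optimal
    constant is the median of their own labels; output the median of
    these per-player fits."""
    return med([med(ys) for ys in reports])

print(median_of_fits([[0.0, 0.0, 0.0], [0.4, 1.0, 1.0], [2.0, 2.0, 2.0]]))  # 1.0
```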

  15. Proof of Lower Bound • [Figure: the lower-bound construction; only the point multiplicities k, k−1 and the values 0, 1, 2, 3 and 1−ε, 2−ε, 3−ε survive from the diagram.]

  16. Proof of Lower Bound (cont.) • [Figure: the second step of the construction, with the same labels.]

  17. Generalization • Theorem: if for every f ∈ F, • (1) for all i, |R'_i(f,S) − R_i(f)| ≤ ε/2, and • (2) |R'(f,S) − (1/n) Σ_i R_i(f)| ≤ ε/2, then: • (group) IC in the uniform setting ⇒ ε-(group) IC in the general setting; • α-approximation in the uniform setting ⇒ α-approximation up to an additive ε in the general setting. • If F has bounded complexity, then m = Ω(log(1/δ)/ε²) makes condition (1) hold with probability 1−δ. • Condition (2) is obtained if (1) occurs for all i simultaneously; taking δ/n per player adds a factor of log n. (A small Monte Carlo illustration follows below.)
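A small Monte Carlo illustration of condition (1), with all specifics hypothetical: for a fixed f and the target o_i(x) = x with ρ_i uniform on [0,1], the gap |R'_i(f,S) − R_i(f)| shrinks roughly like 1/√m.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_risk(h):
    """R_i(h) = E|h - x| for x ~ U[0,1]: closed form h^2 - h + 1/2 on [0,1]."""
    return h * h - h + 0.5 if 0.0 <= h <= 1.0 else abs(h - 0.5)

h = 0.3
for m in (10, 100, 10_000):
    xs = rng.uniform(0.0, 1.0, size=m)
    emp = float(np.mean(np.abs(h - xs)))   # R'_i(h, S) on m samples
    print(m, abs(emp - true_risk(h)))      # gap decays roughly as 1/sqrt(m)
```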

  18. Discussion • Given m large enough, with probability 1−δ VCG is ε-truthful; this holds for any loss function. • Given m large enough, for absolute loss there is a mechanism without payments that is ε-group IC and a 3-approximation for constant functions and homogeneous linear functions. • Most important direction for future work: extending the results to other models of learning, such as classification.
