(More on) characterizing dominant-strategy implementation in quasilinear environments (see, e.g., Nisan's review: Chapter 9 of the Algorithmic Game Theory book)
Tuomas Sandholm
Professor
Computer Science Department
Carnegie Mellon University
Some characterization results (see Nisan's review chapter)
• Prop. A mechanism is incentive compatible iff, for every agent i and every v_{-i}:
  • agent i's payment does not depend on his reported v_i, but only on the alternative chosen, and
  • the mechanism picks an outcome (within its range) that optimizes for each player: f(v_i, v_{-i}) ∈ argmax_o { v_i(o) − p_i(o) }
• Can also characterize in the space of social choice functions only:
  • Def. f satisfies Weak Monotonicity (WMON) if f(v_i, v_{-i}) = a ≠ b = f(v'_i, v_{-i}) implies v'_i(b) − v_i(b) ≥ v'_i(a) − v_i(a)
  • In words: if the social choice changes when a single agent changes his valuation, it must be because that agent increased his value for the new choice relative to his value for the old choice (a brute-force check on a toy example is sketched below)
• Thm. If a mechanism is incentive compatible, then f satisfies WMON. If the domains of preferences V_i are convex sets, then for every f that satisfies (even just local) WMON, there exists a payment rule such that the mechanism is incentive compatible
  • First part is easy to prove; see page 227 of the Algorithmic Game Theory book
  • Second part holds if
    • the outcome space is finite [Saks and Yu EC-05], or
    • the loop integral of f is zero around every sufficiently small triangle [Archer & Kleinberg EC-08]
      • They also show that the theorem applies to non-convex V_i by studying functions f defined on the convex hull
      • They also show how a truthful f can be stitched together from locally truthful f_i's
• Somewhat unsatisfactory: it is not clear exactly what the WMON functions are. WMON is a local condition. Is there a global condition? Yes for unrestricted or very restricted V_i; largely open for practical problems that lie in between
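A minimal brute-force check of the WMON condition on a toy two-agent, two-alternative instance, assuming a welfare-maximizing f; the valuation grid, the choice of f, and all names are illustrative assumptions, not from the slides.

```python
# Hypothetical sketch: brute-force WMON check for a social choice function f
# defined on a finite grid of valuation profiles (illustrative assumptions).
from itertools import product

alternatives = ["a", "b"]          # outcome set O
value_grid = [0.0, 1.0, 2.0]       # each v_i(o) is drawn from this finite grid
n_agents = 2

def all_valuations():
    """Enumerate valuation functions v_i: O -> grid as dicts."""
    for vals in product(value_grid, repeat=len(alternatives)):
        yield dict(zip(alternatives, vals))

def welfare_maximizer(profile):
    """Toy social choice rule: pick the alternative with highest total value."""
    return max(alternatives, key=lambda o: sum(v[o] for v in profile))

def satisfies_wmon(f):
    """Check: f(v_i, v_-i) = a != b = f(v'_i, v_-i)
       implies v'_i(b) - v_i(b) >= v'_i(a) - v_i(a)."""
    vals = list(all_valuations())
    for i in range(n_agents):
        for others in product(vals, repeat=n_agents - 1):
            for vi in vals:
                for vi_new in vals:
                    profile = list(others[:i]) + [vi] + list(others[i:])
                    profile_new = list(others[:i]) + [vi_new] + list(others[i:])
                    a, b = f(profile), f(profile_new)
                    if a != b and vi_new[b] - vi[b] < vi_new[a] - vi[a]:
                        return False
    return True

print(satisfies_wmon(welfare_maximizer))   # True: welfare maximization is WMON
```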
Unrestricted V_i and affine maximizers
• Affine maximizers are a generalization of Groves mechanisms
• f(v) ∈ argmax_o { c_o + Σ_i w_i v_i(o) }
• Prop. If the payment for agent i is h_i(v_{-i}) − Σ_{j≠i} (w_j / w_i) v_j(o) − c_o / w_i, where o is the chosen outcome, then the mechanism is incentive compatible
• Thm (Roberts 1979). If |O| ≥ 3, f is onto O, V_i = R^O for every i, and the mechanism is incentive compatible, then f is an affine maximizer
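A small sketch of an affine maximizer and the payment rule from the proposition; the weights w_i, constants c_o, the example valuations, and the default h_i ≡ 0 are illustrative assumptions.

```python
# Hypothetical sketch of an affine maximizer f(v) in argmax_o { c_o + sum_i w_i v_i(o) }
# with payments p_i = h_i(v_-i) - sum_{j != i} (w_j/w_i) v_j(o) - c_o / w_i.

def affine_maximizer(valuations, weights, c):
    """valuations: list of dicts v_i[o]; weights: list of w_i; c: dict c[o]."""
    return max(c.keys(),
               key=lambda o: c[o] + sum(w * v[o] for w, v in zip(weights, valuations)))

def affine_payments(valuations, weights, c, chosen, h=None):
    """Payment rule from the proposition; h defaults to h_i identically 0."""
    n = len(valuations)
    h = h or [lambda v_minus_i: 0.0] * n
    pays = []
    for i in range(n):
        v_minus_i = valuations[:i] + valuations[i + 1:]
        weighted_others = sum((weights[j] / weights[i]) * valuations[j][chosen]
                              for j in range(n) if j != i)
        pays.append(h[i](v_minus_i) - weighted_others - c[chosen] / weights[i])
    return pays

# With w_i = 1, c_o = 0, h_i = 0 this reduces to a Groves mechanism in which
# each agent is paid the others' value for the chosen outcome.
vals = [{"x": 3.0, "y": 1.0}, {"x": 0.0, "y": 1.0}]
w, c = [1.0, 1.0], {"x": 0.0, "y": 0.0}
chosen = affine_maximizer(vals, w, c)
print(chosen, affine_payments(vals, w, c, chosen))   # x [0.0, -3.0]
```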
Single-parameter domains
• Setting:
  • V_i is one-dimensional, i.e., V_i ⊆ R
  • For each agent, there is a set of equally preferred "winning" outcomes and a set of equally preferred "losing" outcomes
  • Assume "normalized", that is, losing agents pay 0
• Thm. The mechanism is incentive compatible iff
  • f is monotone in every v_i, and
  • every winning agent pays his critical value (the lowest value at which he would still win)
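As an illustrative instance of the theorem (an assumption, not from the slide): in a single-item auction where f awards the item to the highest bidder, the allocation is monotone in each v_i and the winner's critical value is the highest competing bid, so the characterization yields the second-price (Vickrey) auction.

```python
# Hypothetical single-parameter example: highest bidder wins, winner pays his
# critical value (the highest competing bid), losers pay 0.

def allocate(bids):
    """Monotone allocation: the highest bidder wins (ties broken toward lower index)."""
    return max(range(len(bids)), key=lambda i: (bids[i], -i))

def critical_value(bids, winner):
    """The winner's critical value here is the best competing bid."""
    return max(b for i, b in enumerate(bids) if i != winner)

bids = [5.0, 8.0, 3.0]
winner = allocate(bids)
print(winner, critical_value(bids, winner))   # agent 1 wins and pays 5.0
```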
(Essentially) uniqueness of prices
• Thm.
  • Assume the domains V_i are connected sets (in the usual metric on Euclidean space)
  • Let (f, p_1, …, p_n) be an incentive compatible mechanism
  • The mechanism (f, p'_1, …, p'_n) is incentive compatible iff p'_i(v_1, …, v_n) = p_i(v_1, …, v_n) + h_i(v_{-i}) for some functions h_i
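A quick numeric illustration of the easy direction, under the assumption (not from the slide) that the base mechanism is a second-price auction: shifting an agent's payment by a term that depends only on the other bids leaves truthful reporting optimal, since the shift is the same for every report.

```python
# Hypothetical check: adding h_i(v_-i) to a truthful payment rule preserves
# incentive compatibility.  Second-price auction and the h choices are illustrative.

def utility(value, report, others, h=lambda v_minus_i: 0.0):
    """Quasilinear utility with second-price payments shifted by h(v_-i)."""
    wins = report > max(others)
    payment = (max(others) if wins else 0.0) + h(others)
    return (value if wins else 0.0) - payment

value, others = 7.0, [5.0, 3.0]
reports = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
for h in (lambda v: 0.0, lambda v: 2.5 * max(v)):     # two choices of h_i(v_-i)
    truthful = utility(value, value, others, h)
    best = max(utility(value, r, others, h) for r in reports)
    print(truthful == best)                           # True both times
```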
Network interpretation of incentive compatibility constraints
• See, e.g., the overview article by Rakesh Vohra that is posted on the course web page (a toy version of the construction is sketched below)
• A similar approach is also available for Bayes-Nash implementation [Rudolf Müller, Andrés Perea, Sascha Wolf, "Weak monotonicity and Bayes–Nash incentive compatibility", Games and Economic Behavior 61(2), November 2007, pp. 344–358]
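A toy sketch of the network view, with the two-type example and the allocation as illustrative assumptions: fix agent i and the reports v_{-i}, make one node per type of agent i, and add an arc s → t of length ℓ(s, t) = v_t(f(t)) − v_t(f(s)). The IC constraints read p(t) − p(s) ≤ ℓ(s, t), so incentive compatible payments exist iff this graph has no negative cycle, and shortest-path distances from a dummy source give one valid payment rule (the two-cycle condition ℓ(s, t) + ℓ(t, s) ≥ 0 is exactly WMON).

```python
# Hypothetical sketch: incentive compatibility constraints as shortest-path
# (node-potential) constraints on a type graph for a fixed agent i and fixed v_-i.
from itertools import permutations

types = [{"a": 0.0, "b": 2.0},   # this type is allocated b
         {"a": 0.5, "b": 1.0}]   # this type is allocated a (the fixed v_-i favors a)
alloc = ["b", "a"]               # f(., v_-i) restricted to agent i's two types

def arc_length(s, t):
    """l(s, t) = v_t(f(t)) - v_t(f(s))."""
    return types[t][alloc[t]] - types[t][alloc[s]]

def payments_via_shortest_paths():
    """Bellman-Ford from a dummy source; None signals a negative cycle (no IC payments)."""
    n = len(types)
    dist = [0.0] * n                                   # dummy source reaches every node at cost 0
    for _ in range(n):
        for s, t in permutations(range(n), 2):
            dist[t] = min(dist[t], dist[s] + arc_length(s, t))
    if any(dist[s] + arc_length(s, t) < dist[t] - 1e-12
           for s, t in permutations(range(n), 2)):
        return None
    return dist

print(payments_via_shortest_paths())   # [0.0, -0.5]: the second type gets a 0.5 subsidy
```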