This presentation discusses mechanisms that are computationally (worst-case) hard to manipulate, including voting mechanisms and others; impossibility results for usual-case hardness of manipulation; problems with mechanisms that are only worst-case hard to manipulate; and computational criticisms of truthful mechanisms, with a proof that executing the optimal truthful mechanism and finding a beneficial insincere revelation are both NP-complete.
Mechanism design for computationally limited agents (previous slide deck discussed the case where valuation determination was complex)
Tuomas Sandholm
Computer Science Department
Carnegie Mellon University
Part I: Mechanisms that are computationally (worst-case) hard to manipulate
Voting mechanisms that are worst-case hard to manipulate
• Bartholdi, Tovey, and Trick. 1989. The computational difficulty of manipulating an election. Social Choice and Welfare.
• Bartholdi and Orlin. 1991. Single Transferable Vote Resists Strategic Voting. Social Choice and Welfare.
• Conitzer, V., Sandholm, T., and Lang, J. 2007. When are elections with few candidates hard to manipulate? JACM.
• Conitzer, V. and Sandholm, T. 2003. Universal Voting Protocol Tweaks to Make Manipulation Hard. International Joint Conference on Artificial Intelligence (IJCAI).
• Elkind & Lipmaa …
2nd-chance mechanism [in paper "Computationally Feasible VCG Mechanisms" by Nisan & Ronen, EC-00, JAIR]
• (Interesting unrelated fact in the paper that has had lots of follow-on research: any VCG-based mechanism that is maximal in range is IC)
• Observation: the only way an agent can improve its utility in a VCG-based mechanism where an approximation algorithm is used is by helping the algorithm find a higher-welfare allocation
• Second-chance mechanism: let each agent i submit a valuation function v_i and an appeal function l_i: V -> V. The mechanism (using algorithm k) computes k(v), k(l_1(v)), ..., k(l_n(v)) and picks among those the allocation that maximizes welfare. Pricing is based on the unappealed v (see the sketch below).
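A minimal sketch of the second-chance selection step, assuming `approx_alg` plays the role of the approximation algorithm k and `welfare` computes the total reported value of an allocation; these names and the data representation are illustrative, not code from the Nisan & Ronen paper.

```python
# Illustrative sketch (not from the paper): approx_alg is the approximation
# algorithm k mapping a reported valuation profile to an allocation;
# welfare(alloc, vals) is the total reported value of alloc under vals.
def second_chance_outcome(approx_alg, welfare, valuations, appeals):
    """valuations: list of reported valuation functions v_i.
    appeals: list of appeal functions l_i, each mapping a valuation
    profile to another valuation profile (l_i: V -> V)."""
    candidates = [approx_alg(valuations)]                  # k(v)
    for appeal in appeals:                                 # k(l_1(v)), ..., k(l_n(v))
        candidates.append(approx_alg(appeal(valuations)))
    # Pick the allocation with the highest welfare under the *unappealed*
    # reported valuations; pricing is likewise based on the unappealed v.
    return max(candidates, key=lambda alloc: welfare(alloc, valuations))
```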
Other mechanisms that are worst-case hard to manipulate
• O'Connell and Stearns. 2000. Polynomial Time Mechanisms for Collective Decision Making. SUNYA-CS-00-1.
• …
Impossibility of usual-case hardness
• For voting:
  • Procaccia & Rosenschein JAIR-07
    • Assumes a constant number of candidates
    • Impossibility of avg-case hardness for Junta distributions (that seem hard)
  • Conitzer & Sandholm AAAI-06
    • Any voting rule, any number of candidates, weighted voters, coalitional manipulation
    • Thm. <voting rule, instance distribution> cannot be usually hard to manipulate if
      • it is weakly monotone (either c2 does not win, or if everyone ranks c2 first and c1 last then c1 does not win), and
      • if there exists a manipulation by the manipulators, then with high probability the manipulators can only decide between two candidates
  • "Elections Can Be Manipulated Often" by Friedgut, Kalai & Nisan, FOCS-08
    • For 3 candidates
    • Shows that randomly selected manipulations work with non-vanishing probability (see the sketch below)
  • Isaksson, Kindler & Mossel FOCS-10
    • For more than 3 candidates
  • Still-open directions:
    • Multi-stage voting protocols
    • Combining randomization and manipulation hardness…
• Open for other settings
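To make the "randomly selected manipulations work" point concrete, here is a hedged sketch of that trivial attack: the manipulator samples random rankings and keeps one that yields a strictly preferred winner. The `voting_rule` interface and the ranking representation are assumptions for illustration, not code from any of the cited papers.

```python
import random

# Hedged illustration of the FKN / IKM point that random manipulations often
# work: sample random rankings and keep one that strictly improves the winner
# from the manipulator's point of view.
def random_manipulation(voting_rule, other_votes, true_ranking, candidates, trials=1000):
    """Return a ranking yielding a winner the manipulator strictly prefers
    to the honest outcome, if one is found among `trials` random samples."""
    def rank(winner, ranking):
        return ranking.index(winner)        # lower index = more preferred

    honest_winner = voting_rule(other_votes + [true_ranking])
    for _ in range(trials):
        trial_vote = list(candidates)
        random.shuffle(trial_vote)
        winner = voting_rule(other_votes + [trial_vote])
        if rank(winner, true_ranking) < rank(honest_winner, true_ranking):
            return trial_vote               # beneficial manipulation found
    return None                             # none found in the sample
```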
Problems with mechanisms that are worst-case hard to manipulate
• Worst-case hardness does not imply hardness in practice
• Even if agents cannot find a manipulation, they might still not tell the truth
• One solution avenue: mechanisms like the one in Part III of this slide deck...
Part III: Based on "Computational Criticisms of the Revelation Principle" by Conitzer & Sandholm
Criticizing truthful mechanisms
• Theorem. There are settings where:
  • Executing the optimal truthful (in terms of social welfare) mechanism is NP-complete
  • There exists an insincere mechanism, where
    • The center only carries out polynomial computation
    • Finding a beneficial insincere revelation is NP-complete for the agents
    • If the agents manage to find the beneficial insincere revelation, the insincere mechanism is just as good as the optimal truthful one
    • Otherwise, the insincere mechanism is strictly better (in terms of s.w.)
• Holds both for dominant-strategies and Bayes-Nash implementation
Proof (in story form)
• k of the n employees are needed for a project
• Head of organization must decide, taking into account preferences of two additional parties:
  • Head of recruiting
  • Job manager for the project
• Some employees are "old friends":
  • Head of recruiting prefers at least one pair of old friends on the team (utility 2)
  • Job manager prefers no old friends on the team (utility 1)
• Job manager sometimes (not always) has private information on exactly which k would make a good team (utility 3)
• (n choose k) + 1 types for the job manager (uniform distribution)
Proof (in story form)…
Recruiting: +2 utility for a pair of friends. Job manager: +1 utility for no pair of friends, +3 for exactly the right team (if one exists)
• Claim: if the job manager reports a specific team preference, the optimal truthful mechanism must give that team
• Claim: if the job manager reports no team preference, the optimal truthful mechanism must give the job manager a team without old friends (if possible)
  • Otherwise the job manager would be better off reporting the type corresponding to such a team
• Thus, the mechanism must find an independent set of k employees, which is NP-complete (see the sketch below)
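To make the reduction concrete: treat the "old friends" relation as a graph over the employees; a team with no pair of old friends is exactly an independent set of size k in that graph. A hedged brute-force sketch (all names are illustrative, not from the paper):

```python
from itertools import combinations

# Illustrative only: a friend-free team of size k is an independent set of
# size k in the friendship graph, so any mechanism that must find one is
# solving an NP-complete problem. Brute force shown purely to make the
# correspondence explicit.
def friend_free_team(employees, friend_pairs, k):
    """employees: list of employee ids; friend_pairs: set of frozenset({a, b})
    edges of the friendship graph. Returns a friend-free k-team, or None."""
    for team in combinations(employees, k):
        if not any(frozenset(pair) in friend_pairs
                   for pair in combinations(team, 2)):
            return set(team)      # an independent set of size k
    return None                   # no friend-free team of size k exists
```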
Proof (in story form)…
Recruiting: +2 utility for a pair of friends. Job manager: +1 utility for no pair of friends, +3 for exactly the right team (if one exists)
• Alternative (insincere!) mechanism (sketched below):
  • If the job manager reports a specific team preference, give that team
  • Otherwise, give a team with at least one pair of friends
• Easy to execute
• To manipulate, the job manager needs to solve the (NP-complete) independent set problem
• If the job manager succeeds (or no manipulation exists), we get the same outcome as the best truthful mechanism
• Otherwise, we get a strictly better outcome
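A minimal sketch of this insincere mechanism, assuming the job manager's report is either a specific k-team or None for "no team preference"; the report encoding and helper names are assumptions for illustration, not from the paper.

```python
# Minimal sketch of the insincere mechanism described above. The center's
# computation is trivially polynomial; a beneficial misreport would require
# the job manager to find a friend-free team, i.e., solve independent set.
# Report encoding (a k-subset or None) is an assumption for illustration.
def insincere_mechanism(employees, friend_pairs, k, reported_team=None):
    """reported_team: a specific k-subset if the job manager reports a team
    preference, else None ('no team preference')."""
    if reported_team is not None:
        return set(reported_team)            # give exactly the reported team
    if friend_pairs:
        # Give a team containing at least one pair of old friends: take any
        # friend pair and fill the remaining k - 2 slots arbitrarily.
        pair = next(iter(friend_pairs))
        rest = [e for e in employees if e not in pair]
        return set(pair) | set(rest[:k - 2])
    return set(employees[:k])                # no old friends exist at all
```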
Criticizing truthful mechanisms…
[Diagram: the agent queries an oracle with "u(t, o)?" and receives an answer such as "u(t, o) = 3"]
• Suppose utilities can only be computed by (sometimes costly) queries to an oracle (see the sketch below)
• Then we get a similar theorem:
  • Using the insincere mechanism, the center can shift the burden of an exponential number of costly queries to the agent
  • If the agent fails to make all those queries, the outcome can only get better
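A tiny, purely illustrative sketch of the query model: a utility oracle that charges per query, so the cost of running a mechanism can be attributed to whoever issues the queries. The class and table representation are assumptions, not from the paper.

```python
# Purely illustrative sketch of the costly-oracle model: each call to
# query() answers u(t, o) and increments a counter, so one can see whose
# query budget (the center's or the agent's) a mechanism consumes.
class UtilityOracle:
    def __init__(self, utility_table):
        self.utility_table = utility_table   # maps (type, outcome) -> utility
        self.queries_made = 0

    def query(self, agent_type, outcome):
        self.queries_made += 1               # each query is costly
        return self.utility_table[(agent_type, outcome)]
```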
Is there a systematic approach?
• The previous result is for a very specific setting
• How do we take such computational issues into account in mechanism design in general?
• What is the correct tradeoff?
  • Cautious: make sure that computationally unbounded agents would not make the mechanism worse than the best truthful mechanism (like the previous result)
  • Aggressive: take a risk and assume agents are probably somewhat bounded
• Recent results on these manipulation-optimal mechanisms in [Othman & Sandholm SAGT-09]