Approximability of Manipulating Elections
Eric Brelsford (RIT), Piotr Faliszewski (Univ. of Rochester), Edith Hemaspaandra (RIT), Henning Schnoor (RIT), Ilka Schnoor (RIT)
Agenda
• Introduction
  • Model of elections
  • Manipulation, Control, and Bribery
  • Complexity barrier approach
  • Need for approximations
• What to approximate?
  • Goal function
• Approximation algorithms
  • FPTASes
• Results
  • Approximability
  • Inapproximability
Model of Elections
• Election E = (C, V)
  • C – candidate set
  • V – voter set
  [Example election shown with candidate icons and four preference orders in the original slide.]
• Scoring protocols: α = (α1, …, αm) ∈ N^m. Each candidate receives αi points from each voter that ranks him or her in the ith position.
• Examples (families of scoring protocols):
  • plurality: (1, 0, …, 0)
  • Borda count: (m−1, m−2, …, 0)
  • veto: (1, …, 1, 0)
  • k-approval: (1^k, 0^(m−k))
• Election rules: plurality, Borda count, veto, k-approval, approval, Copeland, min-max, Dodgson, Young, …
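The point assignment of a scoring protocol is easy to state in code. A minimal sketch (the function name `scores` and the data layout are illustrative, not from the slides):

```python
def scores(candidates, votes, alpha):
    """Under scoring protocol alpha, each voter's ith-ranked candidate
    receives alpha[i] points (positions counted from 0 here)."""
    total = {c: 0 for c in candidates}
    for ranking in votes:  # each vote is a ranking, best candidate first
        for i, c in enumerate(ranking):
            total[c] += alpha[i]
    return total

# Borda count for m = 3 candidates: alpha = (m-1, m-2, ..., 0) = (2, 1, 0)
votes = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(scores(["a", "b", "c"], votes, (2, 1, 0)))  # {'a': 5, 'b': 3, 'c': 1}
```

Plurality, veto, and k-approval are obtained by just swapping in a different `alpha`.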
Manipulation, Bribery, and Control
How do we vote to make p a winner?
• Manipulation: Given an election E = (C, V) and a group of undecided voters W, decide how the voters in W should vote so that their preferred candidate p wins in E' = (C, V + W).
• Bribery: Given an election E = (C, V) and a price for each voter, choose a group of voters such that (a) the joint price of these voters is minimal, and (b) by changing these voters' votes it is possible to make the preferred candidate p a winner.
• Control: Given an election E = (C, V), is it possible to make candidate p a winner by changing the structure of the election, e.g., by adding or deleting candidates or voters?
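On tiny instances, the manipulation problem above can be decided by brute force: try every joint vote for the manipulators and check whether p ends up with a top score. This sketch is exponential and purely illustrative; treating ties as winning (the co-winner model) is an assumption here.

```python
from itertools import permutations, product

def can_manipulate(candidates, votes, alpha, p, num_manipulators):
    """Is there a way for num_manipulators undecided voters W to vote so
    that p wins E' = (C, V + W) under scoring protocol alpha?
    Brute force over all joint manipulator votes -- illustration only."""
    rankings = list(permutations(candidates))
    for manip_votes in product(rankings, repeat=num_manipulators):
        total = {c: 0 for c in candidates}
        for ranking in list(votes) + list(manip_votes):
            for i, c in enumerate(ranking):
                total[c] += alpha[i]
        if total[p] == max(total.values()):  # co-winner model: ties suffice
            return True
    return False

# Plurality, one honest vote for a, one manipulator: b can be made a co-winner.
print(can_manipulate(["a", "b", "c"], [("a", "b", "c")], (1, 0, 0), "b", 1))
```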
Complexity Barrier Approach
• Elections are endangered by "dishonest" behavior of participating agents.
• Complexity barrier approach: if it is difficult to find a manipulative action, then perhaps agents won't be able to.
• Issues with the complexity barrier approach:
  • NP-completeness is a worst-case notion
  • frequency of hardness attacks
• "Hmm… maybe finding a good bribery is NP-complete, but only unnatural instances are hard?"
• "… or maybe I can somehow approximately find a good cheat…"
Refining the Complexity Barrier Approach
• Idea: if we know that a given election problem is NP-complete, then we should ask: is finding an approximate solution hard?
• Main problem: what to approximate?
• So far, all work on approximating manipulation, bribery, and control (see, e.g., [ZPR08, Fal08, Bre07]) used specifically crafted goal functions.
• Our approach: a uniform framework!
What to Approximate? Performance
[Worked example with candidate icons, largely lost in extraction:]
• Election E = (C, V) under scoring protocol α = (5, 3, 2, 0), with four votes; the preferred candidate's performance is PerfE(p) = −2.
• Two manipulative votes W are added, giving E' = (C, V + W), where the performance rises to PerfE'(p) = 4.
• The gain is β(E, s) = PerfE'(p) − PerfE(p) = 4 − (−2) = 6.
More Formally…
• Assumption: we are working with an election system that assigns points to candidates; candidates with the most points win.
  • scoreE(c) – number of points of candidate c
• Solution: a solution s is a description of what action we should take in a given scenario (e.g., how the manipulators should vote, whom to bribe).
  • E(s) – election E after carrying out solution s
• Performance of a candidate: the number of points by which he or she trails (or leads) the strongest rival:
  PerfE(p) = scoreE(p) − max{scoreE(c) | c ∈ C − {p}}
• Performance of a solution: the increase in the performance of the preferred candidate:
  β(E, s) = PerfE(s)(p) − PerfE(p)
• Goal: maximize β(E, s)
Is β Really Useful?
• Candidate's performance: PerfE(p) = scoreE(p) − max{scoreE(c) | c ∈ C − {p}}
• Performance of a solution: β(E, s) = PerfE(s)(p) − PerfE(p)
• Goal: maximize β(E, s)
• Observation 1. Candidate p is a winner of election E if and only if PerfE(p) ≥ 0.
• Observation 2. Consider an election E, a solution s, and the election E(s). By definition, PerfE(s)(p) = β(E, s) + PerfE(p). Thus a maximizer of β also tells us whether p can be made a winner at all.
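Both performance measures are direct to compute once scores are known. A sketch, with scores chosen to reproduce the numbers from the example slide (PerfE(p) = −2, PerfE'(p) = 4, β = 6); measuring p against its strongest rival (max over c ≠ p) is what makes Observation 1's "PerfE(p) ≥ 0" test work:

```python
def perf(score, p):
    """Perf_E(p): p's score minus the best score among the other candidates."""
    return score[p] - max(s for c, s in score.items() if c != p)

def beta(score_before, score_after, p):
    """beta(E, s) = Perf_{E(s)}(p) - Perf_E(p): the gain from solution s."""
    return perf(score_after, p) - perf(score_before, p)

before = {"p": 8, "a": 10, "b": 4}   # Perf_E(p)  = 8 - 10  = -2: p loses
after = {"p": 16, "a": 12, "b": 10}  # Perf_E'(p) = 16 - 12 =  4: p wins
print(perf(before, "p"), perf(after, "p"), beta(before, after, "p"))  # -2 4 6
```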
Approximation Algorithms
• For a given election problem (e.g., manipulation), we are interested in the following:
  • Input: an election E = (C, V) and a preferred candidate p
  • Output: a legal action s that maximizes β(E, s)
• An ε-approximation algorithm for this problem is an algorithm that outputs a legal solution s' such that
  β(E, s') ≥ (1 − ε)·max{β(E, s) | s is a legal solution}
  [Figure: acceptable solutions fall between (1 − ε)·OPT and OPT.]
• An FPTAS (fully polynomial-time approximation scheme) is an algorithm that, given a problem instance I and an approximation parameter ε, finds an ε-approximate solution in time polynomial in |I| and 1/ε.
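Checking whether a candidate solution meets the ε-approximation guarantee is a one-liner; a sketch with made-up numbers (function name mine):

```python
def is_eps_approximate(beta_value, opt, eps):
    """s' is eps-approximate when beta(E, s') >= (1 - eps) * OPT."""
    return beta_value >= (1 - eps) * opt

# With eps = 0.25 and OPT = 8, any legal solution with beta >= 6 is acceptable.
print(is_eps_approximate(6, 8, 0.25))  # True
print(is_eps_approximate(5, 8, 0.25))  # False
```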
Results: Overview
• Manipulation:
  • FPTASes for scoring protocols α such that α1 > α2
  • Nonexistence of FPTASes for k-approval (and its generalizations) and k-veto
• Bribery:
  • NP-completeness of bribery for Borda count
  • Inapproximability of priced bribery for Borda count
Results: Manipulation
• Theorem [HH07]. Let α = (α1, …, αm) be a scoring protocol. If α2 > αm, then α-weighted manipulation is NP-complete; otherwise it is in P.
• Theorem. Let α = (α1, …, αm) be a scoring protocol such that α1 > α2. There is an FPTAS for computing max-β for α-weighted manipulation.
  • Can we extend this theorem to an unbounded number of candidates?
• Theorem. Unless P = NP, there is no FPTAS for max-β for weighted manipulation in k-veto, k-approval, and generalizations of k-approval.
  • Does the result hold for every fixed scoring protocol? Can we get a nice dichotomy result?
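The [HH07] dichotomy itself is easy to mechanize: classify a scoring protocol by comparing α2 with αm. A sketch (the function name is mine):

```python
def weighted_manipulation_complexity(alpha):
    """[HH07] dichotomy: alpha-weighted manipulation is NP-complete
    iff alpha_2 > alpha_m; otherwise it is in P (0-based indexing below)."""
    return "NP-complete" if alpha[1] > alpha[-1] else "P"

print(weighted_manipulation_complexity((1, 0, 0)))  # plurality -> P
print(weighted_manipulation_complexity((2, 1, 0)))  # Borda     -> NP-complete
print(weighted_manipulation_complexity((1, 1, 0)))  # veto      -> NP-complete
```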
Results: Manipulation (continued)
• Theorem. Let α = (α1, …, αm) be a scoring protocol such that α1 > α2. There is an algorithm that, given ε and an instance I of the α-weighted manipulation problem in which the preferred candidate p can be made a winner, finds a manipulation that makes p a winner in I, assuming we have one more manipulator whose weight is εW, where W is the largest weight of a manipulator in I.
Results: Bribery in Borda Count
• A solution for a bribery problem:
  • finds a set of voters to bribe
  • indicates how these voters should now vote
• Theorem. Bribery is NP-complete for Borda count.
• Theorem. Unless P = NP, there is no polynomial-time algorithm that computes an ε-approximate solution for the priced version of bribery in Borda count, for any constant ε.
• Comment. Finding an approximate solution for priced bribery in Borda count is hard even if we only ask for a polynomial approximation ratio! In essence, the hardness lies in choosing the voters to bribe (almost like control via deleting voters).
Results: Interpretation
• Approximation algorithms show that even if the problem at hand is NP-hard, in practice it can be easy to find useful solutions.
• Inapproximability results are still a worst-case notion; however, they reinforce the hardness results.
Thank You!