Robust Winners and Winner Determination Policies under Candidate Uncertainty
Joel Oren, University of Toronto
Joint work with Craig Boutilier, Jérôme Lang and Héctor Palacios.
Motivation – Winner Determination under Candidate Uncertainty
• A committee, with preferences over alternatives:
  • Prospective projects.
  • Goals.
• Costly determination of availabilities:
  • Market research for determining the feasibility of a project: engineering estimates, surveys, focus groups, etc.
• The "best" alternative depends on the available ones.
[Figure: example profile with groups of 3, 2, and 4 voters ranking candidates a, b, c; the availability of some candidates is unknown ("?"), and the winner depends on which of them turn out to be available.]
Efficient Querying Policies for Winner Determination
• Voters submit votes in advance.
• Query candidates sequentially, until enough is known to determine the winner.
• Example: a wins.
[Figure: the same 3/2/4-voter profile over a, b, c; after querying the unknown ("?") candidates, a is the winner.]
The Formal Model
• A set C of candidates.
• A vector v = (v_1, …, v_n) of rankings over C (a preference profile).
• The set C is partitioned:
  • Y – a priori known to be available.
  • U – the "unknown" set.
• Each candidate c in U is available with probability p_c.
• Voting rule r: for an available set A, r(v|_A) is the election winner (a small sketch of these operations follows below).
[Figure: C partitioned into Y (available) and U (unknown), shown alongside a small profile over a, b, c with groups of 3 and 2 voters.]
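To make the model concrete, here is a minimal Python sketch (mine, not from the paper) of the two operations used throughout: restricting a profile to an available set A and applying a voting rule r to the restriction. Plurality with lexicographic tie-breaking is used purely as an example rule; the sample profile reuses the 8-vote example that appears on the query-policy slide below.

```python
from collections import Counter

def restrict(profile, available):
    """Restrict each ranking to the available candidates, preserving their order."""
    return [[c for c in ranking if c in available] for ranking in profile]

def plurality_winner(profile):
    """Plurality winner of a profile of rankings; ties broken lexicographically (an assumption)."""
    scores = Counter(ranking[0] for ranking in profile if ranking)
    return min(scores, key=lambda c: (-scores[c], c))

# The 8-vote profile over candidates a..e used later in the deck.
profile = [list(v) for v in
           ["abcde", "abcde", "adbec", "bcaed", "bcead", "cdeab", "cbade", "cdbea"]]
print(plurality_winner(restrict(profile, {"a", "b", "c"})))  # winner if only a, b, c are available
```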
Querying & Decision Making
• At each iteration, submit a query q(x) for some candidate x in U.
• The information set records the answers received so far; the initial available set is Y.
• Upon querying candidate x:
  • If available: add x to the available set.
  • If unavailable: remove x from consideration.
• v|_A – the restriction of the preference profile to the candidate set A.
• Stop when the information set is r-sufficient – no additional querying can change the winner – yielding the "robust" winner. (A simulation of this loop is sketched below.)
[Figure: the 3- and 2-voter profile over a, b, c, with availability probabilities (0.5, 0.7, 0.4) attached to the unknown candidates.]
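A minimal simulation sketch of the querying loop, assuming independent candidate availabilities. The names choose_next and is_sufficient are mine, not the paper's: they stand in for the query policy and for the r-sufficiency test.

```python
import random

def run_query_policy(known_available, unknown, avail_prob, choose_next, is_sufficient):
    """Simulate one run of a sequential querying policy.

    choose_next(info, remaining) picks the next candidate to query;
    is_sufficient(info, remaining) returns True once no further answers
    can change the winner of the restricted profile (r-sufficiency)."""
    info = {"available": set(known_available), "unavailable": set()}
    remaining = set(unknown)
    queries = 0
    while remaining and not is_sufficient(info, remaining):
        x = choose_next(info, remaining)
        remaining.remove(x)
        queries += 1
        if random.random() < avail_prob[x]:   # simulated answer to the availability query
            info["available"].add(x)
        else:
            info["unavailable"].add(x)
    return info, queries
```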
Computing a Robust Winner
• Robust winner: given the information set (known available set Y, unknown set U), candidate x is a robust winner if x wins under every possible realization of U.
• A related question in voting [destructive control by candidate addition]: given a candidate set C, a disjoint spoiler set D, a preference profile over C ∪ D, a distinguished candidate x, and a voting rule r – is there a subset D' ⊆ D whose addition makes x lose?
• Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is U.
Computing a Robust Winner
• Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is U.
• Implication: Plurality, Bucklin, ranked pairs – coNP-complete; Copeland, maximin – polytime tractable.
• Additional results: checking whether x is a robust winner for top cycle, uncovered set, and Borda can be done in polynomial time.
• Top cycle & uncovered set: we prove useful criteria on the corresponding majority graph. (A brute-force check is sketched below.)
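For intuition, a brute-force check of robust winnership that enumerates every realization of the unknown set U; its exponential running time in |U| is consistent with the coNP-hardness for rules such as plurality. This is an illustrative sketch of mine, not the paper's algorithm; voting_rule can be any rule on a profile, e.g. the plurality_winner above.

```python
from itertools import chain, combinations

def all_subsets(s):
    """All subsets of s, including the empty set and s itself."""
    s = list(s)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def is_robust_winner(x, Y, U, profile, voting_rule):
    """x is a robust winner iff it wins for every realization of the unknown set U,
    i.e. no subset of U can act as a destructive spoiler set against x."""
    for added in all_subsets(U):
        available = set(Y) | set(added)
        restricted = [[c for c in r if c in available] for r in profile]
        if voting_rule(restricted) != x:
            return False
    return True
```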
The Query Policy
• Goal: design a policy for finding the correct winner.
• The policy can be represented by a decision tree.
• Example for the vote profile (plurality): abcde, abcde, adbec, bcaed, bcead, cdeab, cbade, cdbea.
[Figure: a decision tree that first queries a and then b or c depending on the answer, with leaves labelled "a wins", "b wins", or "c wins".]
Winner Determination Policies as Trees
• r-Sufficient tree:
  • The information set at each leaf is r-sufficient.
  • Each leaf is correctly labelled with the winner.
• c(x) – the cost of querying candidate x at a node.
• Cost of a policy – its expected query cost over the distribution of availability sets. (A data-structure sketch follows below.)
[Figure: the example decision tree, with leaves labelled by the winner.]
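A small sketch (assumed representation, not the paper's code) of a policy tree and its expected query cost under independent availabilities; avail_prob and query_cost are dictionaries keyed by candidate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyNode:
    """Node of a winner-determination policy tree: leaves announce a winner,
    internal nodes query one candidate and branch on the yes/no answer."""
    query: Optional[str] = None            # candidate queried here (None at a leaf)
    winner: Optional[str] = None           # winner announced at a leaf
    if_available: Optional["PolicyNode"] = None
    if_unavailable: Optional["PolicyNode"] = None

def expected_cost(node, avail_prob, query_cost):
    """Expected total query cost of the policy over the availability distribution."""
    if node.query is None:
        return 0.0
    p = avail_prob[node.query]
    return (query_cost[node.query]
            + p * expected_cost(node.if_available, avail_prob, query_cost)
            + (1 - p) * expected_cost(node.if_unavailable, avail_prob, query_cost))
```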
Recursively Finding Optimal Decision Trees
• Cost of a tree: the expected sum of query costs along the realized root-to-leaf path.
• For each node – a training set: the possible true underlying available sets A that agree with the node's information set.
• Example 1: …
• Example 2: …
• Can be solved with a dynamic-programming approach.
• Running time: exponential in the number of unknown candidates – computationally heavy. (A recursive sketch follows below.)
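A bare recursive sketch of the optimal-tree construction over training sets (my illustration; it omits the memoization a dynamic-programming implementation would use, so it is exponential). It reuses PolicyNode from the sketch above; winner_of(A) returns the winner when exactly the set A is available.

```python
def optimal_tree(training_set, remaining, winner_of, avail_prob, query_cost):
    """Build a minimum expected-cost r-sufficient tree.

    training_set: availability sets still consistent with the information set;
    remaining: set of candidates not yet queried."""
    winners = {winner_of(frozenset(A)) for A in training_set}
    if len(winners) == 1:                      # pure training set: winner is determined
        return PolicyNode(winner=winners.pop()), 0.0
    best_node, best_cost = None, float("inf")
    for x in remaining:
        yes = [A for A in training_set if x in A]
        no = [A for A in training_set if x not in A]
        if not yes or not no:                  # querying x cannot split this training set
            continue
        node_yes, cost_yes = optimal_tree(yes, remaining - {x}, winner_of, avail_prob, query_cost)
        node_no, cost_no = optimal_tree(no, remaining - {x}, winner_of, avail_prob, query_cost)
        p = avail_prob[x]
        total = query_cost[x] + p * cost_yes + (1 - p) * cost_no
        if total < best_cost:
            best_cost = total
            best_node = PolicyNode(query=x, if_available=node_yes, if_unavailable=node_no)
    return best_node, best_cost
```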
Myopically Constructing Decision Trees
• Well-known approach: maximize information gain at every node until pure training sets – the leaves – are reached (as in C4.5).
• Myopic step: query the candidate with the highest "information gain" (decrease in the entropy of the training set), as sketched below.
• Running time: …
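A sketch of the myopic step, assuming the entropy is taken over the winner label of the consistent availability sets, each weighted by its probability set_prob(A); the exact weighting and tie-handling in the paper may differ.

```python
import math
from collections import Counter

def label_entropy(training_set, winner_of, set_prob):
    """Shannon entropy of the winner label over the consistent availability sets."""
    weights = Counter()
    for A in training_set:
        weights[winner_of(frozenset(A))] += set_prob(A)
    total = sum(weights.values())
    return -sum(w / total * math.log2(w / total) for w in weights.values() if w > 0)

def myopic_query(training_set, remaining, winner_of, set_prob, avail_prob):
    """Pick the candidate whose answer yields the largest expected drop in entropy."""
    base = label_entropy(training_set, winner_of, set_prob)
    def information_gain(x):
        yes = [A for A in training_set if x in A]
        no = [A for A in training_set if x not in A]
        p = avail_prob[x]
        after = 0.0
        if yes:
            after += p * label_entropy(yes, winner_of, set_prob)
        if no:
            after += (1 - p) * label_entropy(no, winner_of, set_prob)
        return base - after
    return max(remaining, key=information_gain)
```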
Empirical Results
• 100 votes, varying availability probability p.
• Dispersion parameter φ (φ = 1: the uniform distribution).
• Tested for Plurality, Borda, Copeland.
• Preference profiles drawn i.i.d. from a Mallows φ-distribution: ranking probabilities decrease exponentially with distance from a "reference" ranking.
[Plot: average cost (number of queries).]
Empirical Results
• Cost decreases as p increases – less uncertainty about the set of available candidates.
• The myopic algorithm performed very close to the optimal DP algorithm.
• Not shown:
  • Cost increases with the dispersion parameter φ – "noisier"/more diverse preferences.
  • Approximation: stop the recursion once the training set is sufficiently pure (most of the consistent availability sets agree on the winner).
  • For plurality, …
  • For …
[Plot: average cost (number of queries).]
Additional Results
• Query complexity: the expected number of queries under a worst-case preference profile.
• Result: for Plurality, Borda, and Copeland, the worst-case expected query complexity is …
• Simplified policies: assume p_c = p for all candidates c. Then there is a simple iterative query policy that is asymptotically optimal as …
Conclusions & Future Directions
• A framework for querying candidates under a probabilistic availability model.
• Connections to control of elections.
• Two algorithms for generating decision trees: exact DP and myopic.
• Future directions:
  • Ways of pruning the decision trees (depending on the voting rule).
  • Sample-based methods for reducing training-set size.
  • A deeper theoretical study of the query complexity.