Learning Voting Trees
Ariel D. Procaccia, Aviv Zohar, Yoni Peleg, Jeffrey S. Rosenschein
Lecture Outline
• Voting and Voting Trees
• PAC Learning
• Results
• Limitations
Voting
• Election: a set of voters N = {1, ..., n} and a set of candidates C = {a, b, c, ...}.
• Each voter has a linear preference order over the candidates.
• The winner of the election is determined by a voting rule F.
• Plurality: each voter gives 1 point to the candidate they rank first.
• Copeland:
• x1 beats x2 in a pairwise election if a majority of the voters prefer x1 to x2.
• A candidate's score is the number of other candidates it beats in pairwise elections.
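The two rules above can be sketched in a few lines of Python (the example profile and candidate names are illustrative, not taken from the talk):

```python
def plurality(profile, candidates):
    """Each voter gives 1 point to their top-ranked candidate."""
    scores = {c: 0 for c in candidates}
    for ranking in profile:          # ranking[0] is the voter's favorite
        scores[ranking[0]] += 1
    return max(candidates, key=lambda c: scores[c])

def copeland(profile, candidates):
    """A candidate's score is the number of others it beats pairwise."""
    def beats(x, y):
        prefer_x = sum(1 for r in profile if r.index(x) < r.index(y))
        return prefer_x > len(profile) - prefer_x
    scores = {c: sum(beats(c, d) for d in candidates if d != c)
              for c in candidates}
    return max(candidates, key=lambda c: scores[c])

# Illustrative 3-voter profile: each tuple lists a voter's ranking, best first.
profile = [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'c', 'a')]
print(plurality(profile, ['a', 'b', 'c']))  # 'a' (two first-place votes)
print(copeland(profile, ['a', 'b', 'c']))   # 'a' (beats both b and c pairwise)
```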
Tournaments
• A tournament over C is a complete, asymmetric (hence irreflexive) binary relation over C.
• It summarizes the results of all pairwise elections.
• Example: N = {1, 2, 3}, C = {a, b, c}
• Voter 1: c > b > a
• Voter 2: b > a > c
• Voter 3: a > c > b
• Overall: a < b, b < c, c < a (the Condorcet paradox).
• A (pairwise) voting rule is a function from tournaments to candidates.
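The majority tournament for the three-voter example above can be computed directly; a minimal sketch, encoding a tournament as a set of ordered pairs (x, y) meaning "x beats y" (the encoding is my own choice):

```python
def majority_tournament(profile, candidates):
    """Return the set of pairs (x, y) such that a majority prefers x to y."""
    edges = set()
    for x in candidates:
        for y in candidates:
            if x == y:
                continue
            prefer_x = sum(1 for r in profile if r.index(x) < r.index(y))
            if prefer_x > len(profile) - prefer_x:
                edges.add((x, y))
    return edges

# Voter 1: c > b > a, Voter 2: b > a > c, Voter 3: a > c > b
profile = [('c', 'b', 'a'), ('b', 'a', 'c'), ('a', 'c', 'b')]
T = majority_tournament(profile, ['a', 'b', 'c'])
print(sorted(T))  # [('a', 'c'), ('b', 'a'), ('c', 'b')] -- a cycle
```

The result is exactly the cyclic relation on the slide: b beats a, c beats b, a beats c, so no candidate beats all others.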
Voting Trees
[Figure: a binary voting tree evaluated on the tournament a < b, b < c, c < a. The leaves are labeled with candidates (here c, a, c, b, a); each internal node is filled in with the winner of the pairwise election between the winners of its two subtrees, and the label that reaches the root is the tree's winner.]
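The bottom-up evaluation just described can be sketched as follows; the nested-tuple tree encoding and the particular example tree (whose leaves loosely follow the figure's) are my own:

```python
def evaluate(tree, beats):
    """tree: a candidate (leaf) or a pair (left, right) of subtrees.
    beats: set of pairs (x, y) meaning x wins the pairwise election."""
    if not isinstance(tree, tuple):
        return tree
    x = evaluate(tree[0], beats)
    y = evaluate(tree[1], beats)
    return x if (x, y) in beats else y

# Cyclic tournament from the previous slide: b beats a, c beats b, a beats c.
beats = {('b', 'a'), ('c', 'b'), ('a', 'c')}
tree = (('c', 'a'), (('c', 'b'), 'a'))   # leaves c, a, c, b, a
print(evaluate(tree, beats))  # 'a'
```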
Voting Trees
• Voting trees are everywhere!
• A concise representation of (pairwise) voting rules.
• In general there is a double-exponential number of pairwise voting rules, so some rules require exponential-size representations.
• Voting trees capture many rules, such as Copeland.
• Given some (pairwise) voting rule, we want to find a concise voting-tree representation that is as accurate as possible.
• Idea: use learning. The designer labels tournaments with winners, and a learning algorithm outputs a "good" voting tree.
PAC Learning
• We want to learn a voting rule f (not necessarily a tree).
• The training set consists of example pairs (Tj, f(Tj)).
• The tournaments Tj are drawn from a fixed distribution D.
• err(h) = PrT~D[h(T) ≠ f(T)].
• f* minimizes err(h) over all voting trees.
• Goal: given ε > 0, find a voting tree g such that err(g) ≤ err(f*) + ε.
• Q: How many examples are needed to guarantee that the goal is achieved with probability at least 1 − δ?
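For orientation, the standard agnostic-PAC answer for a finite hypothesis class (a textbook bound, not the paper's own derivation; sample size is written s here to avoid clashing with the number of candidates m): by Hoeffding's inequality and a union bound over the class H,

```latex
s \;\ge\; \frac{2}{\epsilon^2}\left(\ln|\mathcal{H}| + \ln\frac{2}{\delta}\right)
\quad\Longrightarrow\quad
\Pr\Big[\,\forall h \in \mathcal{H}:\ \big|\widehat{\mathrm{err}}(h) - \mathrm{err}(h)\big| \le \tfrac{\epsilon}{2}\,\Big] \;\ge\; 1-\delta .
```

When this uniform-convergence event holds, the hypothesis g that minimizes empirical error satisfies err(g) ≤ err(f*) + ε. So a training set polynomial in ln|H| suffices, which is why class size matters in what follows.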
Formulation of Theorems
• Theorem: An exponential training set is needed to learn general voting trees.
• So restrict attention to the class of voting trees of polynomial size (at most k leaves).
• Lemma: If the size of this class is (only) exponential, the following algorithm achieves the goal with a polynomial training set: return the tree that minimizes the number of mistakes on the training set.
• Theorem: |{voting trees with at most k leaves}| ≤ exp(m, k)
• Proof sketch:
• size ≤ (# possible structures) · (# assignments to leaves) ≤ (# possible structures) · m^k
Number of Tree Structures
[Figure/derivation omitted: this slide counted the possible binary tree structures; its content is not recoverable from the transcript.]
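As a sketch of the count this slide presumably illustrated (a standard fact, not taken from the slides): the number of binary tree structures with k leaves is the Catalan number C(k−1) ≤ 4^k, so together with the m^k leaf assignments the class is exponential in m and k, as the lemma requires:

```python
from math import comb
from functools import lru_cache

def tree_shapes(k):
    """Number of binary tree structures with k leaves: Catalan(k - 1)."""
    return comb(2 * (k - 1), k - 1) // k

@lru_cache(maxsize=None)
def shapes_by_recursion(k):
    """Cross-check: a tree is a leaf, or a pair of smaller subtrees
    splitting the k leaves between them."""
    if k == 1:
        return 1
    return sum(shapes_by_recursion(i) * shapes_by_recursion(k - i)
               for i in range(1, k))

for k in range(1, 8):
    assert tree_shapes(k) == shapes_by_recursion(k)
    assert tree_shapes(k) <= 4 ** k          # so #trees <= (4m)^k
print([tree_shapes(k) for k in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]
```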
Approximation by Voting Trees
• A voting rule g is a c-approximation of f iff f and g agree on a c-fraction of the tournaments.
• Theorem: Most voting rules cannot be approximated by small voting trees to a factor better than ½.
• This result isn't as negative as it sounds.
Closing Remarks
• Computational learning theory as a method to concisely represent voting rules.
• Another concisely representable family: scoring rules.
• Defined by a vector (α1, ..., αm).
• Efficiently PAC learnable.
• Which voting rules can be approximated? Under which underlying distributions?
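A scoring rule as defined above gives αi points to the candidate a voter ranks in position i; a short sketch with two standard score vectors (the example profile is illustrative, not from the talk):

```python
def scoring_rule_winner(profile, candidates, alpha):
    """Winner under the scoring rule with score vector alpha."""
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):   # position i earns alpha[i] points
            scores[c] += alpha[i]
    return max(candidates, key=lambda c: scores[c])

profile = [('a', 'b', 'c'), ('b', 'c', 'a'), ('b', 'a', 'c')]
m = 3
plurality_vec = (1,) + (0,) * (m - 1)    # (1, 0, 0): plurality
borda_vec = tuple(range(m - 1, -1, -1))  # (2, 1, 0): Borda
print(scoring_rule_winner(profile, ['a', 'b', 'c'], borda_vec))  # 'b'
```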
Encore: Computational Complexity
• So far we were interested in sample complexity.
• Recall: if the size of the class is (only) exponential, the following algorithm achieves the goal with a polynomial training set: return the tree that minimizes the number of mistakes on the training set.
• Theorem: Finding such a tree is NP-hard.
• In practice, the complexity depends on the structure of the tree.
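A naive, exhaustive version of the minimize-mistakes algorithm makes the blow-up plausible: it enumerates every tree with at most K leaves. The helper names, the example tournaments, and their labels are my own, and `evaluate` is re-defined so the block is self-contained:

```python
def evaluate(tree, beats):
    """Evaluate a voting tree (nested tuples, candidates at leaves)."""
    if not isinstance(tree, tuple):
        return tree
    x, y = evaluate(tree[0], beats), evaluate(tree[1], beats)
    return x if (x, y) in beats else y

def trees(k, candidates):
    """Yield all voting trees with exactly k leaves."""
    if k == 1:
        yield from candidates
        return
    for i in range(1, k):                 # split the leaves between subtrees
        for left in trees(i, candidates):
            for right in trees(k - i, candidates):
                yield (left, right)

def erm_tree(examples, candidates, max_leaves):
    """Return a tree minimizing mistakes on (tournament, winner) pairs."""
    best, best_err = None, float('inf')
    for k in range(1, max_leaves + 1):
        for t in trees(k, candidates):
            err = sum(evaluate(t, T) != w for T, w in examples)
            if err < best_err:
                best, best_err = t, err
    return best, best_err

# Two hand-labeled tournaments over {a, b, c}.
ex = [({('a', 'b'), ('b', 'c'), ('a', 'c')}, 'a'),
      ({('b', 'a'), ('c', 'b'), ('a', 'c')}, 'c')]
tree, mistakes = erm_tree(ex, ['a', 'b', 'c'], 3)
print(mistakes)  # 0: some 3-leaf tree fits both examples
```

The number of trees tried grows exponentially with `max_leaves`, which is consistent with the NP-hardness result above; the theorem says this cannot be avoided in general.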
A Graph!!
[Figure omitted: the graph shown on this final slide is not recoverable from the transcript.]