Divide and Conquer: False-Name Manipulations in Weighted Voting Games (Bachrach and Elkind)
My goals for you • Become familiar with honesty concerns. • Realize that complexity plays a big part in multi-agent systems. • Gain confidence to pursue other articles that depend on complexity results.
Idea common to auctions and other voting methods • What can a participant do to get his or her own way more often? Sometimes we call this dishonesty, untruthfulness, or cheating; to me, it seems simply strategic. • In an online English auction, a bidder may wait until the last moment to submit a bid (called sniping). The goal of sniping is to win at a lower price than true competition would allow. It works because the other bidders don't have time to get their higher bids in.
Idea common to auctions and other voting methods • In a Vickrey auction (a sealed-bid auction where the highest bidder wins but pays the second-highest bid), the bidders may collude so that everyone bids low except the one who values the item the most. • The goal of collusion is that you pay less (since the second price is what you pay) AND there is no danger of losing the auction (since only someone who values the item more than you would outbid you).
Idea common to auctions and other voting methods • In Borda voting (where candidates receive points based on their rank on each ballot), a voter changes his ranking so a competitor does worse: he really feels a > b > c > d but reports a > c > d > b. • The goal of the false ranking is to get the result you want. Since the winner is the highest scorer, your false ranking lowers the total score of the candidate you think poses the greatest threat. A small sketch appears below.
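To make the Borda example concrete, here is a minimal sketch with a hypothetical four-candidate electorate (the ballots and names are invented for illustration, not taken from the paper):

```python
# Borda count: with m candidates, a ballot awards m-1 points to its top
# choice, m-2 to the next, and so on down to 0 for the last.

def borda_scores(ballots):
    """Total Borda score per candidate over a list of ranked ballots."""
    m = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (m - 1 - rank)
    return scores

others = [["b", "a", "c", "d"], ["c", "b", "a", "d"]]  # hypothetical electorate
honest    = ["a", "b", "c", "d"]   # the manipulator's true preference
strategic = ["a", "d", "c", "b"]   # same top choice, but rival "b" is buried

print(borda_scores(others + [honest]))     # b wins 7 to a's 6
print(borda_scores(others + [strategic]))  # burying b: a wins 6 to b's 5
```

With the honest ballot, b wins 7 points to a's 6; burying b behind d and c drops b to 5 and hands the win to a, even though the manipulator's top choice never changed.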
The idea • If a voter splits his vote into pieces, he may • increase his power, or • decrease his power. Why do we care about the measurement of power? • It may help to convince others of fairness. • A voter may object to being a dummy player (one whose vote never changes any outcome).
We are only changing the measurement of power Why do we care about the measurement of power? • It acts like "credit" for affecting the results, and credit may carry monetary compensation. • If voting power is reduced, we could compute the monetary effect this had (say, for a court settlement).
What can we do when we discover a possible opportunity for dishonesty? • If you don't fix the problem, it will be exploited more and more. • You can pick a method that minimizes the problems for your situation. • You can control the "mechanism": you change the rules so the problems are minimized. This is called "mechanism design". • In voting, we could use a different power index.
False-name manipulation • This paper is about a voter splitting his weight into multiple pieces. This is possible in online voting systems where you never actually meet the other participants. • When does this increase your power as measured by the power index? When does it decrease your power? • How difficult is it to decide whether or not to divide? Sometimes the complexity of cheating discourages cheaters.
My concerns with the article • When you are computing whether you should divide your votes, others are doing the same thing, so the situation is more complicated than the scenario presented here. • The voter also needs to know the weights of the other voters. How does he find out about everyone else without divulging whether he himself is one voter or two? • How are weights determined? Can we assign weights so that division won't help? • It is interesting that it is division that helps; we usually worry about monopolies rather than fragmentation.
Why does splitting help? • The difference comes from the change in the set of orderings. In the game [3; 2, 1, 1], if agent a (weight 2) splits into two weight-1 agents, it actually hurts: in the resulting game [3; 1, 1, 1, 1] every agent has the same power (1/4), so the two fragments together hold 1/2, whereas as one chunk, agent a holds 2/3 of the power. • In [3; 2, 1], splitting helps: the two fragments together get 2/3 of the power (in [3; 1, 1, 1]) compared to 1/2 when a is a single agent. A brute-force check appears below.
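These numbers are easy to verify by brute force. Here is a minimal sketch, assuming the standard Shapley-Shubik index (the fraction of orderings in which an agent is pivotal); the function name is mine:

```python
# Shapley-Shubik power index by full enumeration of orderings.
from itertools import permutations
from math import factorial
from fractions import Fraction

def shapley_shubik(quota, weights):
    """Return each agent's power: the fraction of orderings it pivots."""
    n = len(weights)
    counts = [0] * n
    for order in permutations(range(n)):
        total = 0
        for agent in order:
            total += weights[agent]
            if total >= quota:          # this agent tips the total past the quota
                counts[agent] += 1
                break
    return [Fraction(c, factorial(n)) for c in counts]

print(shapley_shubik(3, [2, 1, 1]))     # [2/3, 1/6, 1/6]: agent a holds 2/3
print(shapley_shubik(3, [1, 1, 1, 1]))  # all 1/4: the two fragments total 1/2
print(shapley_shubik(3, [2, 1]))        # [1/2, 1/2]
print(shapley_shubik(3, [1, 1, 1]))     # all 1/3: the two fragments total 2/3
```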
Proofs • There are several complexity proofs. Generally, when I first read a paper I skim the proofs to decide whether it is worth my time to dig through them. • I worry that this paper may require too much time for many of you to understand, as you don't have the background in proofs. • The paper is quite understandable without the proofs, since there are lots of examples. Let's skip most of the details of the proofs in our discussion.
Pseudo-polynomial • Consider the Partition problem, which asks: "Can you select elements (each having a weight) whose weights sum to half the total weight of all elements?" It is similar to the knapsack problem, which is likely more familiar. • If we consider all possibilities, we look at every subset and check whether any one gives the desired sum; that is exponential in the number of items. • Dynamic programming helps. • The resulting solution is "pseudo-polynomial": its running time is polynomial in the magnitude of the numbers in the input (here, the target weight), not in the length of the input's encoding.
Consider trying to find a subset of items whose weights sum to exactly 16. Build a table where entry isSum[r, c] records whether some subset of the first r items has total weight c. The complexity is 16 * N, where N is the number of items in the set. So what becomes important is the relationship between N and the target weight: if the weight is a small multiple of N, for example, we won't worry about it too much (the complexity is about N²). A sketch of this table appears below.
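Here is a minimal sketch of that dynamic program (the function name and the test items are mine, chosen for illustration):

```python
# Pseudo-polynomial subset-sum: O(N * W) time and space, where W is the
# target weight. Polynomial in the numeric value of W, not in its bit length.

def subset_of_weight(items, target):
    n = len(items)
    # isSum[r][c]: can some subset of the first r items weigh exactly c?
    isSum = [[False] * (target + 1) for _ in range(n + 1)]
    for r in range(n + 1):
        isSum[r][0] = True                 # the empty subset weighs 0
    for r in range(1, n + 1):
        w = items[r - 1]
        for c in range(1, target + 1):
            isSum[r][c] = isSum[r - 1][c]  # option 1: skip item r
            if w <= c:
                isSum[r][c] = isSum[r][c] or isSum[r - 1][c - w]  # option 2: take it
    return isSum[n][target]

print(subset_of_weight([3, 9, 5, 7], 16))  # True: 9 + 7 = 16
print(subset_of_weight([3, 9, 5, 7], 13))  # False: no subset weighs exactly 13
```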
Terms • Pay attention to the notation or the paper will be very difficult to understand. • The authors are after complexity results. • Polynomially bounded: non-exponential. They worry about dividing an agent into fractional pieces: if you then try to compute the power, the number of possible divisions is not bounded in the problem size. For example, if you divide a weight of 5 into two integer pieces, there are only a few choices: (5, 0), (4, 1), (3, 2). BUT if the pieces need not be integers, there are infinitely many possibilities to evaluate: (5 − ε, ε) for any ε between 0 and 5.
Terms • To be in NP, you have to be able to check a proposed solution in polynomial time. • NP-complete: in the class NP, and as hard as the hardest problems in NP; no polynomial-time algorithm is known, and the best known algorithms take exponential time. • NP-hard: at least as hard as NP-complete problems, but not necessarily in NP. • In proving complexity results, one often performs a reduction from one problem to another. It works this way: you have two problems, one of unknown complexity (U) and one of known complexity (K). You map the K problem into the U problem in such a way that finding an answer to U tells you the solution to the K problem. Then you can say that U is at least as hard as K. • Complexity results are all based on these "reductions" from one problem to another.
Takeaways • Don't worry so much about the intricate details of the proofs. • Learn enough that you get the major points without having to skip articles like this one. • Try to get the bigger picture: why do we care about complexity? • When someone says a problem is NP-complete, what does that really mean in a practical sense?
Terms • #P-complete (sharp-P complete): as hard as any problem in #P. • #P asks "how many?" rather than "are there any?" • An NP problem may ask, "Is there any subset of a list of integers that adds up to zero?" • The corresponding #P problem asks, "How many subsets add up to zero?" • (You can see how this comes up here, as we are asking, "How many permutations are there in which...") • Asking "how many?" is at least as hard as asking "are there any?" • #P-complete problems are similar to NP-complete problems in difficulty: if a polynomial-time solution existed for one of them, then P = NP.
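To make the contrast concrete, here is a toy brute-force sketch (mine, for illustration only) of the two questions for the zero-sum-subset example; both questions use the same enumeration, but ask for different amounts of information:

```python
# Deciding ("are there any?") versus counting ("how many?") by brute force.
from itertools import combinations

def zero_subsets(nums):
    """All nonempty subsets summing to zero (the #P-style object)."""
    hits = []
    for k in range(1, len(nums) + 1):
        for combo in combinations(nums, k):
            if sum(combo) == 0:
                hits.append(combo)
    return hits

nums = [3, -3, 1, 2, -2]
solutions = zero_subsets(nums)
print(len(solutions) > 0)   # NP-style decision: True, some zero subset exists
print(len(solutions))       # #P-style count: 4 subsets sum to zero
```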
Error? • I think I found an error in one of their examples. Look to see if you agree. • The error I found is in Problem 17, and some of you agree. When agent 2 divides, he actually does WORSE, not better: if both pieces come before agent 1, they just increase the number of orderings in which agent 1 is pivotal.
Reductions • Reductions are introduced to prove that one problem is at least as hard as another. • It works like this: you have two problems, one hard and of known complexity, the other unknown. Call them H and U. • We want to create a mapping (a reduction) so that we can learn something about the difficulty of U.
Mapping • Students often get confused about which way to map problems. • I have two problems, H (hard) and U (unknown); which way do I map to learn something, H → U or U → H? • H → U: when we solve U, we also solve H, making a solution to U at least as hard as H. • If you map it the other way, the hard problem could be doing MUCH MORE than the easy problem, and the easy problem could be solved as a side effect.
So how do we proceed? • We take a general instance of H and decide on a way to create a corresponding instance of U. • The problem instances may look quite different, as H and U can be totally different (one may involve dividing items into two groups while the other involves voting to reach a threshold). • You then explain how finding the solution to U actually tells you the solution to H, so you can use U to solve H.
Let's look at their reduction • Partition is known to be hard; the weighted voting question is of unknown complexity. We map a general instance of Partition into an instance of the voting problem in such a way that solving the voting problem solves the Partition instance. • We create the voting game by multiplying each partition weight by 8, adding two more voters of weights 1 and 2, and setting the quota to 4 times the total original weight, plus 3.
Partition Problem used in Lemma 9 • Weights: 1, 3, 2, 2. Can we divide them into two halves of equal weight? • Map to the voting game [35; 8, 24, 16, 16, 1, 2]. Show that the weight-2 voter is not a dummy: the coalition {16, 16, 1} has weight 33, and adding 2 reaches the quota of 35, so the weight-2 voter matters. The elements corresponding to 16 and 16, namely {2, 2}, sum to half the original total, so they form a partition.
Partition Problem • Weights: 1, 3, 3, 2 (total 9, which is odd). Can we divide them in half? • Map to the voting game [39; 8, 24, 24, 16, 1, 2]. Show that the weight-2 voter IS a dummy: it is pivotal only if some coalition of the other voters weighs 37 or 38, but every such coalition weighs a multiple of 8, possibly plus 1 (checking the candidates: 24+24 = 48, 24+16 = 40, 8+24+1 = 33, 8+16+1 = 25, ...), so none lands on 37 or 38. SO there is no partition of the original numbers. A brute-force check of both examples appears below.
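Here is a minimal sketch of the construction as the slides describe it, paired with a brute-force dummy test (the function names and the brute-force check are mine, not the paper's; this only scales to tiny instances):

```python
# Reduction sketch: Partition instance -> weighted voting game, where the
# appended weight-2 voter is a dummy exactly when NO perfect partition exists.
from itertools import combinations

def reduce_partition_to_voting(weights):
    """Scale weights by 8, append voters 1 and 2, quota = 4*total + 3."""
    new_weights = [8 * w for w in weights] + [1, 2]
    quota = 4 * sum(weights) + 3
    return quota, new_weights

def is_dummy(quota, weights, agent):
    """Brute force: is this agent pivotal for some coalition of the others?"""
    others = [w for i, w in enumerate(weights) if i != agent]
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            s = sum(coalition)
            if s < quota <= s + weights[agent]:   # agent tips this coalition
                return False
    return True

for instance in ([1, 3, 2, 2], [1, 3, 3, 2]):
    quota, w = reduce_partition_to_voting(instance)
    dummy = is_dummy(quota, w, len(w) - 1)        # the weight-2 voter is last
    print(instance, "-> quota", quota, "weights", w,
          "| dummy:", dummy, "| partition exists:", not dummy)
```

Running it confirms the two worked examples: [1, 3, 2, 2] makes the weight-2 voter non-dummy (a partition exists), while [1, 3, 3, 2] makes it a dummy (no partition).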
Approximation algorithms • When we try to solve a problem with a randomized approximation algorithm, there are two main forms. • A Monte Carlo algorithm may give a wrong result with very small probability, or with a bounded amount of error. The method is named after the city in the principality of Monaco, famous for roulette (a simple random number generator); the name and the systematic development of Monte Carlo methods date from about 1944. • The other type of probabilistic algorithm never gives a wrong result, but its running time is not guaranteed. It is termed a Las Vegas algorithm: you are guaranteed success if you try long enough and don't care how much you spend.
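As an illustration of the Monte Carlo flavor, here is a small sketch (my own, not from the paper) that estimates an agent's Shapley-Shubik index by sampling random orderings instead of enumerating all n! of them; the answer can be slightly off, but the error shrinks as the sample count grows:

```python
# Monte Carlo estimate of a Shapley-Shubik power index: sample random
# orderings and count how often the chosen agent is the pivot.
import random

def estimate_power(quota, weights, agent, samples=100_000):
    n = len(weights)
    pivotal = 0
    for _ in range(samples):
        order = list(range(n))
        random.shuffle(order)              # one uniformly random ordering
        total = 0
        for a in order:
            total += weights[a]
            if total >= quota:             # a is the pivot of this ordering
                pivotal += (a == agent)
                break
    return pivotal / samples

# Exact value for agent 0 in [3; 2, 1, 1] is 2/3; the estimate should be close.
print(estimate_power(3, [2, 1, 1], agent=0))
```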