COMMENTS 10/29/2008

 

 

Andrew Berry

 

This paper examines several voting mechanisms that are difficult to manipulate, constructively or destructively, even with few candidates. The main contribution of this paper is to lay a foundation for voting mechanisms that push participants toward revealing their truthful preferences. However, I wonder how much practical merit this work has. The paper shows that plurality, the best-known voting protocol, is the easiest protocol to manipulate. However (referring to the Nader example in the Introduction), doesn't the fact that people would rather vote for Nader imply some sort of implicit preference? Perhaps preferences are more complex. Maybe the agent at the beginning of the paper has preferences that align more closely to Kerry | ~Nader > Nader > Kerry > Bush.

 

I don't quite understand why results vary depending on whether an agent is attempting to constructively or destructively manipulate the election outcome. It seems like the results should not differ by the type of manipulation. Also, I am unclear about the Copeland voting protocol; I don't quite understand how the Copeland score is computed.
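[For what it's worth, here is a minimal Python sketch of how a Copeland score is usually computed: simulate a pairwise election for every pair of candidates and award points for pairwise wins. The variant below gives one point per win and half a point per tie; the paper's exact tie-handling may differ.]

    from itertools import combinations

    def copeland_scores(votes, candidates):
        # votes: list of rankings, each from most to least preferred.
        # One point per pairwise win, half a point per pairwise tie.
        scores = {c: 0.0 for c in candidates}
        for a, b in combinations(candidates, 2):
            a_wins = sum(1 for v in votes if v.index(a) < v.index(b))
            b_wins = len(votes) - a_wins
            if a_wins > b_wins:
                scores[a] += 1
            elif b_wins > a_wins:
                scores[b] += 1
            else:
                scores[a] += 0.5
                scores[b] += 0.5
        return scores

    # Example: A beats both B and C pairwise, so A scores 2.
    print(copeland_scores([["A", "B", "C"], ["B", "C", "A"], ["A", "C", "B"]],
                          ["A", "B", "C"]))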

 

 

Ziyad Aljarboua

 

Starting from the assumption that all voting protocols are manipulable, this paper shows a way to make elections hard to manipulate. The paper defines voting manipulation as voting according to one's expectation of the outcome rather than one's true preference. However, in an Electoral College voting system, knowing that your truly preferred candidate is unlikely to win, would it make a difference if you vote according to your true preference? It seems to me that voting manipulation in a popular-vote system, where every single vote counts, is more destructive. On the other hand, in an Electoral College system, unless vote manipulation is carried out on a large scale (a coalition of voters), manipulation would have minimal effect.

 

In this paper, the discussion is limited to scenarios where candidates are few. The reason behind this restriction is to reduce the number of possible votes for a single voter. According to the paper, this restriction causes all voters to have equal weight in the election. But would they not also have equal weight in the case of a large number of candidates?

 

The long analysis in the paper concludes that manipulation is avoidable by employing voting protocols where beneficial manipulations are hard to determine. So, manipulation hardness is proportional to computational complexity. Also, it was shown that the hardness depends on the number of candidates.

 

 

Rory Kulz

 

Well, I have to say, I was pretty happy about Figure 2 and the tables at the end of this paper. So much to keep track of!

 

The basic idea at the heart of the paper isn't totally out of left field. After all, numerous cryptographic protocols are based on the same idea of focusing on computationally hard problems. That being said, I think it's an interesting notion. Of course, the perfect information case is almost entirely irrelevant to the real world (one thought is electronic voting machines, but there the possibilities for manipulation are so great and widespread as to make the sorts of collusion this paper deals with almost irrelevant), but it's nice that -- despite complete information -- hardness results were obtained for such low numbers of candidates in several common protocols. This is great news for the incomplete information case, and as the authors point out towards the end, perhaps there can be a better notion of hardness for such protocols than NP-completeness.

 

This paper is extremely comprehensive, but there are still a couple of questions that weren't addressed in the paper:
 - What's the complexity of destructive CW-manipulation for the randomized cup protocol? This detail got skipped over; just curious.
 - Hardness results relied primarily on many-one reductions from 2-PARTITION to manipulation. But what about approximation algorithms? It's not very good if an election can be "approximately" manipulated, i.e. manipulated with some reasonable probability of success.

 

 

Zhenming Liu

 

This paper looks like groundwork for election manipulation. It shows as many results (from both the 'easiness side' and the 'hardness side') as possible on various voting policies, which recalls Karp's famous paper enumerating as many NP-complete problems as possible. I believe a paper of this type should serve as a toolkit for further research, and it is really difficult to comment on the value of each individual theorem.

 

Regarding the voting policies, some look really complicated (to me). Maybe these policies are designed with the mindset that the voters are computers, which don't mind voting in numerous rounds before the result comes out. Perhaps it is important to address the 'naturalness' of the voting policies if voters are not computers.

 

Regarding the easiness/hardness results: perhaps because this is early groundwork, it makes me feel like I am returning to 30 years ago, when there were only two members of interest in the complexity zoo: P and NP. RP/BPP are not even there, not to mention other animals like IP, AM, and ZKP that sound as if they would have some connection with social choice theory. Maybe research in this area has become more exciting since this paper was published.

 

I see there is some effort to develop hardness/easiness results in more generalized models, i.e., where only the distribution over the votes is known. And it is interesting to see their argument in Theorem 15 (if you cannot manipulate, you cannot even do some other tasks that are considered fundamental). But in general, the results are not really impressive.

 

 

Alice Gao

 

The main contribution of this paper is to establish theoretical results on the hardness of manipulating elections when the number of candidates is a small constant.  This is important because it uses a more realistic scenario than previous research, in which the number of candidates is unbounded.  This paper is also very successful in establishing the desirable result that manipulation is very hard for certain voting protocols even with a very small number of candidates.

 

The main insight in establishing many of the hardness results was the reduction from the partition problem, which is NP-complete.  This is unlike some previous theoretical papers we read, in which completely different approaches are used for proving related theorems.  These unifying approaches make me wonder whether these voting protocols have some common underlying properties.  If these properties can be made explicit, they could help answer the question: what properties make a voting protocol hard/easy to manipulate?  One related discovery of the authors is that randomization can make manipulation harder for certain protocols.  This seems to be a first step in answering the fundamental question of characterizing voting protocols based on their manipulability.

 

In my opinion, one major limitation of this paper is that it focuses on the complete information scenario, which is not realistic.  Even though the hardness results for the complete information case can imply hardness results for the case with uncertainty, I still wonder whether, in the uncertainty case, a group of voters can somehow manipulate the election by approximating the probability of winning for particular candidates.  Moreover, another limitation acknowledged by the authors is that NP-hardness is a rather weak guarantee of hardness, since it does not guarantee that only a few instances are easy to manipulate.  This would be an interesting potential area to explore as well.

 

 

Nick Wells

 

This was an interesting paper. The premise that making a system hard to manipulate means voters simply will not manipulate it is interesting, though I question the idea given that there are so many different voting agents with, in aggregate, a lot of computational power. Although, if we assume that very few agents will be able to accomplish this, then perhaps on the margin we are indifferent. On the other hand, the results of this paper are only as substantive as the information a voter has about the votes of his or her peers.

 

Something that potentially interests me is seeing how similar ideas could be applied to cases of sequential issues/votes, for example in a legislature or a voting council.

 

 

Victor Chan

 

The main contribution of this paper is the evaluation of the hardness of manipulation under various voting protocols and varying degrees of knowledge of how others will vote. The paper looks at two types of manipulation: constructive coalitional weighted manipulation and destructive coalitional weighted manipulation. It was found that all of the protocols examined in the paper can be manipulated in P when there are 2 candidates. This makes sense, since calculating how to manipulate the election only requires checking on the order of (n+1)^(m!) possible coalition votes; in this case m = 2, which is sufficiently small. Furthermore, more complex voting protocols such as STV, Borda, and veto all demonstrated NP-complete hardness with as few as 3 candidates. Another of the paper's insights was proving the hardness using weighted votes. This generalizes the results, so the analysis can be applied to elections where different votes have different priority, e.g., boardroom voting among shareholders vs. a board of directors.
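[A rough Python sketch of that enumeration argument; the function names and the brute-force framing are mine, since the paper argues via a counting argument rather than code. Because equal-weight manipulators are interchangeable, only the number of them casting each of the m! orderings matters, giving at most (n+1)^(m!) cases to try -- polynomial in n for constant m.]

    from itertools import permutations, combinations_with_replacement

    def find_manipulation(sincere_votes, n_manip, candidates, winner_of, p):
        # winner_of(list of rankings) -> winning candidate, for any rule.
        # Returns rankings for the n_manip manipulators that make p win,
        # or None if no such votes exist.
        orderings = list(permutations(candidates))
        for combo in combinations_with_replacement(orderings, n_manip):
            votes = sincere_votes + [list(o) for o in combo]
            if winner_of(votes) == p:
                return [list(o) for o in combo]
        return None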

 

Finally, the paper talks about manipulation where there isn't perfect information about how all the non-manipulative voters will vote. It is interesting to note that perfect information is just a special case of the imperfect-information setting, with a degenerate distribution over the votes.

 

The trend in the paper's proofs was to show that, to manipulate a given election, one would have to compute a partition based on the votes' weights. This to me almost resembles the knapsack problem, which is known to be NP-hard. Perhaps there is some connection between the two, as the knapsack problem was the basis for generating keys in some cryptosystems, chosen so that it is sufficiently hard for someone to brute-force and break the system.

 

The article also shows that plurality manipulation is in P for any number of candidates. This is interesting, since it is the election protocol used most by humans. This result gives insight into why strategic voting, or bloc voting, is so prominent in elections. Perhaps this is a suggestion that our current voting system does not represent the social choice of the voters, but rather that of a coalition of manipulators? Also, it would be interesting to investigate the case where there are two groups of manipulators, and how that would affect the outcome.
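[For intuition on why plurality stays easy, a toy Python sketch, under the assumption that ties are broken in the coalition's favor (the helper names are mine): the coalition can never do better than putting all of its weight on its preferred candidate p, so manipulability reduces to a single check.]

    from collections import defaultdict

    def plurality_manipulable(sincere, manip_weights, p):
        # sincere: list of (top_choice, weight) pairs for honest voters.
        tally = defaultdict(float)
        for top, w in sincere:
            tally[top] += w
        tally[p] += sum(manip_weights)  # coalition puts all weight on p
        return all(tally[p] >= s for c, s in tally.items() if c != p)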

 

A project idea would be to examine how these voting protocols can be used in a distributed system. Electing a leader is an important problem in distributed systems, and many of the voting protocols used depend on truthful nodes in the system (e.g., the bully algorithm). As such, a greedy node can manipulate the system into doing what is best for it. By applying these types of voting schemes to a distributed system, where the number of candidates is often very high, it can quickly become infeasible for a greedy node to manipulate the system.

 

 

Haoqi Zhang

 

The main contribution of this paper is the characterization of the computational complexity of manipulating various voting protocols with a small number of alternatives when voters' votes are weighted. The authors first establish that in the unweighted case a coalition (or single voter) can find manipulations by enumerating all possible votes, which is polynomial in the number of voters within the coalition. Then, using this result, the authors need only show that manipulators can vote identically to achieve any manipulation effect, even with weighted votes, and then enumerate over possible votes.  The authors then derive NP-hardness results via reductions from the partition problem, showing that finding a manipulation corresponds one-to-one with solving a partition instance.
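[To make the shape of those reductions concrete, a toy Python sketch -- my own illustration, not the paper's actual gadget. The manipulators' weights serve as the PARTITION instance, and the elections in the proofs are constructed so that a successful manipulation exists iff the weights split into two halves of equal sum:]

    from itertools import product

    def partition_exists(weights):
        # Brute-force decision of the NP-complete PARTITION problem.
        total = sum(weights)
        if total % 2:
            return False
        return any(
            sum(w for w, side in zip(weights, sides) if side) == total // 2
            for sides in product([0, 1], repeat=len(weights)))

    print(partition_exists([3, 1, 1, 2, 2, 1]))  # True: 3+2 = 1+1+2+1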

 

This work is interesting from a theoretical perspective but leaves me with many questions:

- How useful are the results of this work in practice? Are the results strong enough to suggest that we choose one voting rule over another? (E.g., should we probably use majority rule, which satisfies standard axioms over a larger class of preferences than essentially all other voting rules? See "On the Robustness of Majority Rule", Partha Dasgupta and Eric Maskin.) Along another line: given realistic distributions of votes, is it actually difficult to find manipulations? Also, are there cases where it is often quite obvious what a manipulation may be, and/or cases where local search works quite well for finding manipulations?

 

- How hard is it to actually manipulate the outcome of an election? Does the Gibbard-Satterthwaite theorem suggest that a unilateral manipulation always exists regardless of the preference profile, or only for certain profiles? If it is only for certain profiles, then I think it would be quite nice to characterize how likely a manipulation is to exist as a function of the manipulating coalition (has this already been done?).

 

- The authors' NP-hardness proofs rely heavily on votes having different weights for the reduction to partition sums (the partition problem is trivial when all weights are the same). While the authors show the extension to unweighted votes in the imperfect information case, I wasn't quite sure how restrictive the perfect correlation requirement is in practice.

 

 

Peter Blair

 

In this paper the authors study various voting protocols for the purpose of determining how susceptible they are to voting manipulation. The two types of manipulation considered are constructive manipulation, where agents work to make a given candidate win, and destructive manipulation, where agents' efforts are directed at making a given candidate lose. A given voting protocol's susceptibility to manipulation is taken to correspond to the computational complexity of manipulating the system. One potential pitfall with this measure, though, is that complexity is a worst-case notion, which means that there can be instances of the scheme that can be solved with less difficulty than the complexity results derived in the paper suggest. The general results for the voting schemes studied are summarized in tables on pp. 26-27. Not surprisingly, a general result is that the complexity of constructive manipulation is higher than that of destructive manipulation -- indeed a reasonable result, given that with destructive manipulation the focus is on one candidate losing, whereas in constructive manipulation the agent must consider all candidates and the means for his/her preferred candidate to beat out the other contenders. An interesting result of this paper is that a voting protocol governed by plurality can be manipulated in polynomial time, whereas plurality with runoff is NP-complete. The US has a plurality system, whereas some European countries such as France have a plurality-with-runoff system. I'm interested in what effects having a runoff + plurality system may have on incentivizing third-party candidates. Also, the 2000 election should be an interesting case study in manipulation, in which the popular vote and the Electoral College determined different winners.

 

 

Nikhil Srivastava

 

Conitzer et al. present a challenging but interesting set of results on the complexity of manipulating a variety of voting schemes for different numbers of candidates. Their results and proofs are piecemeal, usually specific to each voting rule and to a given number of candidates, but especially useful given the fact that real-world elections usually have a small, finite set of outcomes. They also deal with variants like weighted voting and the difference between constructive and destructive manipulation.

 

The motivating idea behind the paper is to computationally prohibit manipulation. I wonder about the feasibility of a *hidden* voting scheme where the exact details are unknown to the voters. This seems a little un-democratic, but perhaps a guarantee that the scheme falls within a certain parameter range might provide enough voter comfort while still making manipulation difficult or even impossible. A related idea: divide the voters randomly into m subsets, each with a different voting scheme, and combine the m results in some other fashion to reach a decision. A would-be manipulator does not know into which subset his vote will fall and has a much more difficult task.
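[A minimal Python sketch of that split-and-combine idea; everything here -- the function names, the combination step -- is invented for illustration, not something from the paper:]

    import random

    def split_election(votes, rules, combine):
        # Shuffle voters into len(rules) random subsets, run a different
        # voting rule on each, then combine the sub-winners.
        votes = list(votes)
        random.shuffle(votes)
        k = len(rules)
        subsets = [votes[i::k] for i in range(k)]
        winners = [rule(subset) for rule, subset in zip(rules, subsets)]
        return combine(winners)  # e.g., a plurality over the sub-winners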

 

 

Michael Aubourg

 

I would raise two points linked to this paper:
-To what extent do polls influence voters' choices?
-When is strategic voting frequent? When has it happened?

 

The first thing I am thinking about is this point: manipulation is an undesirable phenomenon, and so is strategic voting. And when do people vote strategically? When they have information about the expected results, or strong beliefs. During pre-election periods, this information mostly comes from polls and surveys. We should wonder about polls' role. Is it good? Maybe polls should be forbidden a month or a week before an election? What is polls' role? First of all, they are used by candidates to compare themselves to their rivals. Then they are used by the media: indeed, media hardly care about a candidate with little chance of being elected. Finally, they give information to electors, and this can lead to strategic voting. This has a bad side effect: when a candidate starts badly, he is even more likely to do worse. The same goes for candidates doing well.

 

Polls' publication tends to amplify the spread between candidates. In France, for instance, the media play a major role, since they behave like primary elections by separating candidates into two classes: "the potential winners" and "the ones doomed to lose".

 

What about the USA? As the USA has, by and large, a two-party system, there is almost no opportunity for tactical voting in general elections. In the 2004 Canadian election, and to a lesser extent in the 2006 Canadian general election, strategic voting was a concern for the federal New Democratic Party. In the 2004 election, the governing Liberal Party was able to convince many New Democratic voters to vote Liberal in order to avoid a Conservative government.

 

So politicians should consider strategic voting as a danger/friend. They should always be aware of it whenever the election system is not a two-party system.

 

 

Brian Young

 

When Are Elections with Few Candidates Hard to Manipulate? (Conitzer, Sandholm, Lang)

 

I really enjoyed this article, but I found it very dense; there were a great number of things to keep in mind, and I'm not sure I retained everything, even on my second or third read. Here are some questions I had in mind while reading, and I hope I didn't miss a place where the paper addressed them.

 

1. Even in the cases when finding a successful manipulation strategy is NP-hard, might there be a reasonably good approximation strategy that is in P?

 

2. The paper considered the situation where every voter was either voting his or her true preferences or voting as a member of a coalition, and there was only one coalition in play. What would be the effects of adding multiple coalitions, potentially with competing goals?

 

3. We saw in discussions of prediction markets that there existed a tradeoff between making the market responsive enough to reflect agents' true preferences and making it difficult to manipulate through strategic play. Does a similar problem arise here; that is, if we make the voting protocol difficult to manipulate, will we also cause it to reflect something less than the true aggregate preferences?

 

4. I'm not convinced that there's an easy or natural way to introduce more candidates, or even more viable candidates. To take the example from the paper's introduction (the Nader - Kerry - Bush example), if a third (or fourth, or fifth) candidate is truly a non-viable candidate, why should a rational voter, even one who would prefer that candidate, support that candidate? And if a voting protocol actually gives that candidate a high enough probability of winning that our voter would find it worthwhile to support that candidate, wouldn't that lead to, again, an inaccurate (or inaccurate with higher probability) aggregation of the voters' preferences?

 

 

Hao-Yuh Su

 

First, this paper investigates the complete-information coalitional manipulation problem with weighted voters. The manipulation can be either constructive or destructive. The paper studies the number of candidates needed in each voting protocol for manipulation to become hard. Second, it tries to find the conditions under which evaluating a manipulation is NP-hard.

 

Before deriving their results, the authors make the assumption that voters will not change their preferences once submitted, even when there are several rounds throughout the voting process, say under the cup protocol. It may be a reasonable assumption, but it may not be practical in my opinion. In reality, voters may change their preferences from one round to another. Here, I have a question: how should we model the problem if the preference profile changes from round to round? If we say that voters have complete information, does this also mean that those changes are known among voters?
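[For reference, a minimal Python sketch of the cup protocol under exactly that assumption: rankings are submitted once and reused in every round. The function names and tie-breaking rule are my own.]

    def cup_winner(votes, schedule):
        # schedule: a balanced binary tree of candidates as nested pairs.
        def majority(a, b):
            a_wins = sum(1 for v in votes if v.index(a) < v.index(b))
            return a if 2 * a_wins >= len(votes) else b  # ties go to a
        def play(node):
            if isinstance(node, str):
                return node
            return majority(play(node[0]), play(node[1]))
        return play(schedule)

    # Each voter submits one fixed ranking up front:
    print(cup_winner([["A", "C", "B", "D"], ["C", "A", "D", "B"],
                      ["B", "A", "C", "D"]], (("A", "B"), ("C", "D"))))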

 

My second question is about the definition of coalitional manipulation. According to the definition, given a set of manipulators' votes T, a set of others' votes S, and a preferred candidate p, there exists a coalitional manipulation if there is a way to cast the votes in T so that p wins. Is there any restriction on the size of the set of manipulators? Can we say it is a coalitional manipulation even if there are only two manipulators out of a given number of voters? Or is there a minimum number of manipulators needed in different situations?

 

 

Xiaolu Yu

 

Truthful preference aggregation is a central issue in multiagent settings. Voting is generally used as the tool to realize such information aggregation, but the downside is obvious: all general voting protocols are manipulable. The main contribution of this paper is studying and deriving hardness results for realistic elections where the number of candidates is a small constant, rather than unbounded as assumed in earlier work. Motivated by the fact that the hardness of manipulation can reasonably be measured by computational complexity among computational agents, the authors address whether a given protocol is hard to manipulate, and how many candidates are needed for the hardness to occur (the lower this number, the less manipulable the protocol).

 

One important issue the article raises is whether manipulation is hard in the common case under uncertainty about others' votes. The authors thus suggest people would generally be less concerned about manipulation, given that in realistic voting not much is known about how others will vote. Interestingly, although I think this is correct to some extent, we still see manipulation in various voting processes. How could that happen? One possibility, as far as I am concerned, lies in the weights assigned to voters. In the "subsequent work" section, the authors mention "expanding the definition of a voting rule", in which the pivotal voters are excluded in such a way that they will neither enjoy nor suffer from the chosen candidate, and conclude that in this way truthful voting is a weakly dominant strategy. This seems to me an overconstrained condition to circumvent the problem. On the one hand, if voters neither benefit nor are hurt by the chosen candidate, then why vote at all? It is possible that their votes would be "manipulated" by other voters whose interests are intertwined with which candidate is going to win. On the other hand, if weights of voters are assigned in a "smarter" way, say, the weight of a given coalition is somehow not equal to the sum of the weights of the individuals within it, so that the manipulable interest is minimized by dynamically assigning weights among coalitions and individuals, then manipulation would be dampened to a certain extent.

 

The article shows different degrees of tolerance of general voting protocols to manipulation. I am quite interested in the future/subsequent work. The very last direction for future work proposed in the article suggests voting rules that are hard to execute. This is a very intriguing method to think about, given its bright side: the winner needs to be determined only once, while the manipulation algorithm needs to simulate the voting rule multiple times. I would like to learn more about the feasibility of this method.

 

 

Brett Harrison

 

This paper provides many interesting complexity results about voting rules that are computationally hard to manipulate.

 

I wonder which, if any, of these scoring rules (and their corresponding complexity results) could be put into practice in actual political elections. Are there scoring rules for which manipulation is hard in such a situation, where voters only have probabilistic estimates of other voters' future decisions (i.e., polls)? There is certainly room for manipulation in the American political voting system (though perhaps this manipulation is negligible since there are so many voters), and I am interested to know whether the mathematics in this paper could produce a theoretical voting scheme that could be deployed in practice.

 

I also would like to see further results that state the extent of possible manipulation. That is, it may be possible for a voter to manipulate the outcome, but maybe he can only influence the relative positions of two candidates and not all of them. This is just one example of a type of result that would come to terms with manipulation by saying that "in the worst case, the amount of manipulation is not too bad".

 

 

Sagar Mehta

 

This paper explores how voting mechanisms with a finite number of voters and candidates can be made computationally hard to manipulate. There are two main avenues used to achieve this result: elections where votes are weighted, and settings where evaluating a manipulation is NP-hard (i.e., what if we have only a probability distribution over the non-colluders' votes?). This paper significantly adds to previous research on this topic because, as the authors mention, earlier work assumed that the number of voters and candidates is unbounded.

 

Much of the paper describes the computational complexity of manipulation for various voting mechanisms where voters have different weights. The paper also considers situations where each player has perfect knowledge of every other player's preferences. Again, these scenarios are unlikely to occur in real-life elections. The paper did, however, give important theoretical results on the computational complexity of manipulation in various voting mechanisms. I'd be interested in classifying exactly what determines the "hardness" of a voting mechanism, and the number of candidates at which we transition from P to NP-hard.

 

I also had a question about the general nature of how we vote for president. The purpose of a voting scheme is to choose a winner based on preference elicitation. So, if more people prefer Barack Obama to John McCain, Barack Obama should win the election. However, philosophically, in an election with more candidates, voting does poorly at maximizing utility. For instance, suppose Barack Obama supporters gain some utility from a President Obama and zero utility from a President McCain, John McCain supporters gain some utility from a President McCain and zero from a President Obama, and all people get some equal amount of utility from a President Clinton. Theoretically, a utility-maximizing benefactor might pick Clinton if the sum of utilities is greatest for Clinton; however, this is not how the plurality method works, and so we could end up with less total utility than the socially optimal outcome.
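[A quick numerical instance of that point, in Python, with all numbers invented:]

    # 45 Obama supporters, 40 McCain supporters, 15 Clinton supporters.
    # Supporters get utility 1.0 from their favorite; in addition,
    # every non-supporter gets 0.6 from a President Clinton.
    utilities = {
        "Obama": 45 * 1.0,
        "McCain": 40 * 1.0,
        "Clinton": 15 * 1.0 + (45 + 40) * 0.6,  # = 66
    }
    # Plurality winner: Obama (45 > 40 > 15 first-place votes), yet
    # Clinton maximizes total utility (66 > 45 > 40) despite finishing
    # last under plurality.
    print(max(utilities, key=utilities.get))  # -> Clinton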

 

 

Avner May

 

The authors of this article approach the problem of evaluating the merits of various voting protocols from a very practical angle.  Instead of asking whether a certain voting protocol is theoretically manipulable, they ask how hard it is to manipulate a given protocol (with a certain number of candidates).  The motivation behind this is that it might often be too difficult (computationally speaking) for a group to actually manipulate an election, even if it may theoretically be manipulable.  They investigate both destructive and constructive manipulation, as well as the effects of uncertainty about other votes on the complexity of manipulation.  In general, I found the work in this paper to be very valuable; it performed a very broad overview of manipulation complexity for many common voting protocols.  It is definitely a paper which I believe would be useful in elections with high stakes and many fears of manipulation.

I also found the results to be very applicable to the current election; the US uses a combination of a plurality protocol and a weighted plurality protocol (where the states are the voting agents, and they cast varying numbers of Electoral College votes).  I find it slightly disconcerting that this is a very easy protocol to manipulate; independent-party voters have no incentive to vote for their candidate other than to make a point, and people who have residence in multiple states get to choose which state to vote in (thus adding an element of strategy to an election, which should not exist if the election is purely about honestly gathering preferences).  I think the results are very valuable in the case of human voters; if it is unintuitive, or not at all obvious, how a person or group could manipulate an election, I think they will be much less likely to do so.  I was intrigued by the possibilities mentioned at the end of the article of making the actual problem of computing the winner computationally difficult as a way of preventing manipulation, or of trying to embed a hard problem, like factoring, in the manipulation problem.  I think these could be interesting questions to approach from a cryptographic perspective.

 

 

Malvika Rao

 

Several questions come to mind on reading this paper. One rather obvious question is the average-case complexity of these manipulation schemes. A manipulation might be NP-hard to achieve, but those pathological cases might be isolated, and the majority of cases might be far more easily manipulable.

 

If we accept that most voting schemes are manipulable, then it might be worthwhile to ask what we can do to limit the damage. Can we somehow detect that manipulation has taken place? Can we somehow normalize the result to reflect that manipulation has taken place? Rather than trying to design voting schemes that resist manipulation (hard to design), why not concentrate on designing voting schemes that allow manipulation but can guarantee certain limits on the extent of the manipulation? This might have an application somewhere.

 

In this paper they consider elections with few candidates. But we can also ask about the number of manipulators. How many manipulators are needed to accomplish an effective manipulation? What should be their distribution and/or timing? For example, in a two-alternative majority vote (alternatives A and B), suppose that 75% of voters have already voted truthfully, and 90% of these voted for alternative A. At this stage, if the remaining 25% of voters vote to manipulate the election, this manipulation will have no effect on the outcome: A already holds 67.5% of all voters, more than a majority. One might imagine that a manipulation scheme which requires a majority of voters to manipulate in order to succeed is unlikely to happen.

 

The awareness of voters regarding whether others are manipulative or not might change over time. The first few voters might not be aware, whereas later voters might become aware. See Joseph Halpern's work on modeling awareness as a game.

 

A good question raised in class is whether certain types of manipulation are really that undesirable. We might want to first clearly describe and define what manipulation is and when it is "wrong". In the example where a voter knows his top choice has no chance of being selected and therefore votes for his second choice, it appears to me that the voter is simply acting in a self-interested manner. In mechanism design the agents are assumed to be self-interested, and the whole point is in fact to design a mechanism that nonetheless manages to maximize social welfare despite the agents' self-interested behaviour (through incentives).

 

Collusion might come with a cost. It would be interesting to examine the "cost of collusion" and how many colluders are needed to effect a successful manipulation. There is research on collusion-resistant mechanisms with regards to auctions. Perhaps this might be applied to designing voting systems that are collusion resistant up to a certain degree (but not necessarily manipulation resistant).

 

Slightly unrelated to manipulation, but it would be interesting to look into "dynamic" voting models.

 

 

Travis May

 

Preference aggregation through elections is extremely difficult (if not impossible) to implement, as all election rules are manipulable.  This paper suggests that two solutions to this problem have been presented in the past: first, constraining the utility functions of the voters, and second, randomly selecting a "dictator" who makes the ultimate decision.  The paper proposes a third, practical solution: making manipulation computationally difficult to the extent that it becomes impossible for agents to perform.  The paper breaks down various voting rules and evaluates the number of candidates required to make a strategic voting calculation NP-hard.

 

I would like to propose a different alternative, along the same lines but involving randomization.  Instead of randomizing the dictatorial agent to prevent manipulation, one could instead randomize the voting rules themselves.  Ideally, the randomization could be done between two voting rules that induce opposite strategies, causing those two strategies to offset each other exactly.  If this cannot be done, then the randomization should at least go a long way in making manipulation more computationally difficult.

 

I'd also like to add an interesting story about vote manipulation.  Decades ago, Harvard house assignments were based on preferences.  Harvard quickly discovered that students were not revealing true preferences and were making strategic rankings, so they tried to change the system.  They played with several systems for the next few years, until they realized that the students were consistently able to game them.  Now, the system is completely randomized and students have no say in their allocations.