Results 11–20 of 304
Generalized scoring rules and the frequency of coalitional manipulability
 In Proceedings of the Ninth ACM Conference on Electronic Commerce (EC)
, 2008
Abstract

Cited by 61 (18 self)
We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of these vectors—more specifically, only on the order (in terms of score) of the sum’s components. This class is extremely general: we do not know of any commonly studied rule that is not a generalized scoring rule. We then study the coalitional manipulation problem for generalized scoring rules. We prove that under certain natural assumptions, if the number of manipulators is O(n^p) (for any p < 1/2), then the probability that a random profile is manipulable is O(n^(p − 1/2)), where n is the number of voters. We also prove that under another set of natural assumptions, if the number of manipulators is Ω(n^p) (for any p > 1/2) and o(n), then the probability that a random profile is manipulable (to any possible winner under the voting rule) is 1 − O(e^(−Ω(n^(2p−1)))). We also show that common voting rules satisfy these conditions (for the uniform distribution). These results generalize earlier results by Procaccia and Rosenschein as well as even earlier results on the probability of an election being tied.
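As an illustration of the definition in this abstract, the following sketch (not from the paper; all names are my own) expresses Borda as a generalized scoring rule with k = m components: each vote contributes a score vector, and the winner is read off the order of the summed components.

```python
# Sketch: Borda as a generalized scoring rule. Each vote maps to a
# vector of k = m scores; the outcome depends only on the order of
# the components of the summed vector.

def borda_score_vector(ranking):
    """Map one vote (a ranking, best first) to its score vector."""
    m = len(ranking)
    return {a: m - 1 - pos for pos, a in enumerate(ranking)}

def borda_winner(profile):
    """Sum the per-vote vectors; the winner is read off the component order."""
    total = {}
    for vote in profile:
        for a, s in borda_score_vector(vote).items():
            total[a] = total.get(a, 0) + s
    # ties broken alphabetically, purely for determinism in this sketch
    return max(sorted(total), key=lambda a: total[a])

profile = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
# summed vector: a=4, b=3, c=2, so "a" wins
```

The same template covers plurality (vector with a single 1) and k-approval; the class in the abstract is far broader than these positional examples.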
Elections Can be Manipulated Often
Abstract

Cited by 55 (1 self)
The Gibbard-Satterthwaite theorem states that every non-trivial voting method between at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method between 3 alternatives that is far from being a dictatorship.
Determining Possible and Necessary Winners under Common Voting Rules Given Partial Orders
Abstract

Cited by 48 (13 self)
Usually a voting rule or correspondence requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives. It has been suggested to let agents submit partial orders instead. Then, given a profile of partial orders and a candidate c, two important questions arise: first, is c guaranteed to win, and second, is it still possible for c to win? These are the necessary winner and possible winner problems, respectively. We consider the setting where the number of alternatives is unbounded and the votes are unweighted. We prove that for Copeland, maximin, Bucklin, and ranked pairs, the possible winner problem is NP-complete; also, we give a sufficient condition on scoring rules for the possible winner problem to be NP-complete (Borda satisfies this condition). We also prove that for Copeland and ranked pairs, the necessary winner problem is coNP-complete. All the hardness results hold even when the number of undetermined pairs in each vote is no more than a constant. We also present polynomial-time algorithms for the necessary winner problem for scoring rules, maximin, and Bucklin.
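To make the possible winner question concrete, here is a brute-force sketch (hypothetical helper names, not from the paper) for plurality: it enumerates every completion of every partial vote, so it is exponential in general. The paper's hardness results say that for rules such as Borda, Copeland, maximin, Bucklin, and ranked pairs no polynomial-time algorithm is expected.

```python
# Brute-force possible-winner check for plurality under partial orders.
# A partial vote is a set of (x, y) pairs meaning "x is preferred to y".

from itertools import permutations, product

def extends(linear, partial):
    """Does the linear order respect every (x, y) pair of the partial order?"""
    pos = {a: i for i, a in enumerate(linear)}
    return all(pos[x] < pos[y] for x, y in partial)

def completions(alternatives, partial):
    """All linear extensions of one partial vote."""
    return [l for l in permutations(alternatives) if extends(l, partial)]

def possible_plurality_winner(alternatives, partial_votes, c):
    """Is there a completion of every partial vote under which c (co-)wins plurality?"""
    for profile in product(*(completions(alternatives, p) for p in partial_votes)):
        score = {a: 0 for a in alternatives}
        for vote in profile:
            score[vote[0]] += 1  # plurality: only the top choice scores
        if all(score[c] >= score[a] for a in alternatives if a != c):
            return True
    return False
```

For example, with partial votes {a ≻ b} and {b ≻ a}, candidate c can still win by being placed on top of both completions.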
Algorithms for the coalitional manipulation problem
 In The ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2008
Abstract

Cited by 46 (10 self)
We investigate the problem of coalitional manipulation in elections, which is known to be hard in a variety of voting rules. We put forward efficient algorithms for the problem in Scoring rules, Maximin and Plurality with Runoff, and analyze their windows of error. Specifically, given an instance on which an algorithm fails, we bound the additional power the manipulators need in order to succeed. We finally discuss the implications of our results with respect to the popular approach of employing computational hardness to preclude manipulation.
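The flavor of such algorithms can be sketched with the well-known greedy heuristic for Borda-style coalitional manipulation (hedged: this is the standard greedy idea, not necessarily the paper's exact procedure, and the names are mine): each manipulator ranks the preferred candidate p first and orders the rivals so that the currently strongest rival receives the fewest points.

```python
# Greedy sketch of coalitional manipulation under Borda.

def borda_scores(profile, alternatives):
    """Current Borda score of every alternative."""
    m = len(alternatives)
    score = {a: 0 for a in alternatives}
    for vote in profile:
        for pos, a in enumerate(vote):
            score[a] += m - 1 - pos
    return score

def greedy_borda_manipulation(nonmanip_votes, alternatives, p, k):
    """Build k manipulator votes in turn; return them plus whether p (co-)wins."""
    votes = list(nonmanip_votes)
    for _ in range(k):
        score = borda_scores(votes, alternatives)
        # weakest rival ranked right after p (gets the most rival points)
        others = sorted((a for a in alternatives if a != p),
                        key=lambda a: score[a])
        votes.append([p] + others)
    final = borda_scores(votes, alternatives)
    wins = all(final[p] >= final[a] for a in alternatives if a != p)
    return votes[len(nonmanip_votes):], wins
```

The "windows of error" analysis in the abstract is exactly about instances where a heuristic like this fails even though some manipulation exists.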
The Complexity of Bribery in Elections
, 2006
Abstract

Cited by 45 (18 self)
We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified candidate can be made the election’s winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding the nature of the voters, the size of the candidate set, and the specification of the input. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery. Our results indicate that the complexity of bribery is extremely sensitive to the setting. For example, we find settings where bribing weighted voters is NP-complete in general but if weights are represented in unary then the bribery problem is in P. We provide a complete classification of the complexity of bribery for the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols.
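A minimal sketch of why the easiest settings are easy (assumptions: unweighted voters, unpriced bribes, plurality; the function name is mine): repeatedly bribe one voter of the current strongest rival to vote for c instead. The paper's point is that richer settings, e.g. weighted voters with binary weights or Dodgson voting, can push the same question to NP-completeness.

```python
# Greedy bribery sketch for unweighted, unpriced plurality.

from collections import Counter

def min_plurality_bribes(votes, c):
    """votes: each voter's top choice. Return bribes needed so c (co-)wins."""
    count = Counter(votes)
    bribes = 0
    # while some rival strictly beats c, flip a voter of the strongest rival
    while any(count[a] > count[c] for a in count if a != c):
        rival = max((a for a in count if a != c), key=lambda a: count[a])
        count[rival] -= 1
        count[c] += 1
        bribes += 1
    return bribes
```

Each bribe simultaneously lowers the strongest rival and raises c, which is why this greedy count is exact in this simple setting.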
Improved Bounds for Computing Kemeny Rankings
 In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI)
, 2006
Abstract

Cited by 43 (7 self)
Voting (or rank aggregation) is a general method for aggregating the preferences of multiple agents. One voting rule of particular interest is the Kemeny rule, which minimizes the number of cases where the final ranking disagrees with a vote on the order of two alternatives. Unfortunately, Kemeny rankings are NP-hard to compute. Recent work on computing Kemeny rankings has focused on producing good bounds to use in search-based methods. In this paper, we extend this work by providing various improved bounding techniques.
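The objective being bounded can be stated in a few lines (a brute-force sketch with my own names, exponential in the number of alternatives; the bounds discussed in the abstract are about pruning exactly this search):

```python
# Kemeny score: total pairwise disagreements between a ranking and the votes.

from itertools import combinations, permutations

def kemeny_score(ranking, votes):
    pos = {a: i for i, a in enumerate(ranking)}
    score = 0
    for vote in votes:
        vpos = {a: i for i, a in enumerate(vote)}
        for x, y in combinations(ranking, 2):
            # +1 whenever ranking and vote order the pair (x, y) differently
            if (pos[x] < pos[y]) != (vpos[x] < vpos[y]):
                score += 1
    return score

def kemeny_ranking(alternatives, votes):
    """Exact Kemeny ranking by exhaustive search over all m! rankings."""
    return min(permutations(alternatives), key=lambda r: kemeny_score(r, votes))
```

Because the search space is m!, even modest improvements to lower and upper bounds translate into large prunings in branch-and-bound solvers.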
Approximate Mechanism Design Without Money
, 2009
Abstract

Cited by 43 (15 self)
The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on enforcing payments. In this paper, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are ubiquitous, and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost, and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located, and a domain where each agent controls multiple locations.
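The classical baseline for the single-facility setting described here can be sketched in a few lines (a sketch under the standard assumption of single-peaked costs on the line; not code from the paper): placing the facility at a median of the reports is strategyproof and minimizes the social cost.

```python
# Median mechanism for facility location on the real line.

def median_mechanism(reports):
    """Place the facility at a (leftmost) median of the reported locations.
    No agent can move the facility closer to itself by misreporting."""
    xs = sorted(reports)
    return xs[(len(xs) - 1) // 2]

def social_cost(location, reports):
    """Sum of agent distances to the facility."""
    return sum(abs(location - x) for x in reports)
```

Note the strategyproofness intuition: an agent to the right of the median can only push the median further left (or leave it fixed) by exaggerating rightward, never pull it toward itself. The abstract's tight bounds concern objectives, such as maximum cost, where no such optimal strategyproof mechanism exists.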
A sufficient condition for voting rules to be frequently manipulable
 In Proceedings of the Ninth ACM Conference on Electronic Commerce (EC)
, 2008
Abstract

Cited by 39 (10 self)
The Gibbard-Satterthwaite Theorem states that (in unrestricted settings) any reasonable voting rule is manipulable. Recently, a quantitative version of this theorem was proved by Ehud Friedgut, Gil Kalai, and Noam Nisan: when the number of alternatives is three, for any neutral voting rule that is far from any dictatorship, there exists a voter such that a random manipulation—that is, the true preferences and the strategic vote are all drawn i.i.d., uniformly at random—will succeed with a probability of Ω(1/n), where n is the number of voters. However, it seems that the techniques used to prove this theorem cannot be fully extended to more than three alternatives. In this paper, we give a more limited result that does apply to four or more alternatives. We give a sufficient condition for a voting rule to be randomly manipulable with a probability of Ω(1/n) for at least one voter, when the number of alternatives is held fixed. Specifically, our theorem states that if a voting rule r satisfies 1. homogeneity, 2. anonymity, 3. non-imposition, 4. a canceling-out condition, and 5. there exists a stable profile that is still stable after one given alternative is uniformly moved to different positions; then there exists a voter such that a random manipulation for that voter will succeed with a probability of Ω(1/n). We show that many common voting rules satisfy these conditions, for example any positional scoring rule, Copeland, STV, maximin, and ranked pairs.
Optimal mechanism design and money burning
 STOC ’08
, 2008
Abstract

Cited by 37 (12 self)
Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality—routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such “payments” can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus—the total value of the chosen outcome minus the payments required. Our primary contributions are the following.
• We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design.
• For general single-parameter agent settings, we char ...