Results 1–10 of 120
Learning to Order Things
 Journal of Artificial Intelligence Research
, 1998
Abstract

Cited by 324 (13 self)
There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u, v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreement with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an online learning algorithm, based on the "Hedge" algorithm, for finding a good linear combination of ranking "experts." We use the ordering algorithm ...
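The greedy step this abstract alludes to can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the item names and preference values are invented, and `pref` stands in for a learned PREF function.

```python
def greedy_order(items, pref):
    """Order `items` to approximately maximize agreement with
    pref(u, v), the learned confidence that u should precede v."""
    remaining = set(items)
    ordering = []
    while remaining:
        # potential(v) = total outgoing minus total incoming preference
        best = max(remaining,
                   key=lambda v: sum(pref(v, u) - pref(u, v)
                                     for u in remaining if u != v))
        ordering.append(best)
        remaining.remove(best)
    return ordering

# Toy preference function encoding a < b < c (values are made up).
prefs = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
order = greedy_order(["c", "a", "b"], lambda u, v: prefs.get((u, v), 0.0))
print(order)  # ['a', 'b', 'c']
```

Repeatedly emitting the item with the largest net outgoing preference is what yields the constant-factor approximation guarantee mentioned in the abstract.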
Aggregating inconsistent information: ranking and clustering
, 2005
Abstract

Cited by 156 (8 self)
We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the extent of disagreement with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long-standing conjecture of Bang-Jensen and Thomassen and show that, unless NP ⊆ BPP, there is no polynomial-time algorithm for the problem of minimum feedback arc set in tournaments.
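The "remarkably simple algorithm" is pivot-based, in the style of quicksort. A hedged sketch, with a made-up majority-of-rankings preference standing in for the aggregated input:

```python
import random

def kwiksort(items, prefer):
    """Order items by recursive pivoting: prefer(u, v) is True when
    the aggregated input (here, a majority of rankings) puts u ahead
    of v. This sketch omits the approximation analysis."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    before = [v for v in items if v != pivot and prefer(v, pivot)]
    after = [v for v in items if v != pivot and not prefer(v, pivot)]
    return kwiksort(before, prefer) + [pivot] + kwiksort(after, prefer)

def majority_prefer(rankings):
    """Pairwise preference induced by a majority of input rankings."""
    def prefer(u, v):
        ahead = sum(r.index(u) < r.index(v) for r in rankings)
        return ahead * 2 > len(rankings)
    return prefer

# Illustrative input: three rankings whose majority relation is transitive.
rankings = [["a", "b", "c"], ["a", "b", "c"], ["b", "a", "c"]]
print(kwiksort(["c", "b", "a"], majority_prefer(rankings)))  # ['a', 'b', 'c']
```

When the majority tournament is transitive, as here, every pivot choice recovers the same total order; the interesting (and hard) cases are the cyclic ones, where pivoting still gives a provably good approximation.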
Single transferable vote resists strategic voting
, 2003
Abstract

Cited by 137 (0 self)
We give evidence that Single Transferable Vote (STV) is computationally resistant to manipulation: it is NP-complete to determine whether there exists a (possibly insincere) preference that will elect a favored candidate, even in an election for a single seat. Thus strategic voting under STV is qualitatively more difficult than under other commonly used voting schemes. Furthermore, this resistance to manipulation is inherent to STV and does not depend on hopeful extraneous assumptions like the presumed difficulty of learning the preferences of the other voters. We also prove that it is NP-complete to recognize when an STV election violates monotonicity. This suggests that non-monotonicity in STV elections might be perceived as less threatening since it is in effect “hidden” and hard to exploit for strategic advantage.
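For concreteness, the procedure being manipulated, single-seat STV (instant runoff), can be sketched as below. The ballots and the alphabetical tie-breaking rule are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def stv_winner(ballots):
    """Single-seat STV: repeatedly eliminate the candidate with the
    fewest first-choice votes among those still standing, transferring
    each ballot to its next surviving choice."""
    continuing = {c for ballot in ballots for c in ballot}
    while len(continuing) > 1:
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in continuing:
                    tallies[choice] += 1
                    break
        for c in continuing:          # candidates with zero current votes
            tallies.setdefault(c, 0)
        # assumed tie-break: alphabetical among the lowest tallies
        loser = min(tallies, key=lambda c: (tallies[c], c))
        continuing.remove(loser)
    return continuing.pop()

ballots = [["a", "b", "c"]] * 4 + [["b", "c", "a"]] * 3 + [["c", "b", "a"]] * 2
print(stv_winner(ballots))  # 'b': c is eliminated first and transfers to b
```

It is the chain of eliminations and transfers, each depending on earlier rounds, that makes reasoning about the effect of a single insincere ballot combinatorially hard.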
Complexity Results about Nash Equilibria
, 2002
Abstract

Cited by 130 (10 self)
Noncooperative game theory provides a normative framework for analyzing strategic interactions.
Complexity of Mechanism Design
, 2002
Abstract

Cited by 120 (24 self)
The aggregation of conflicting preferences is a central problem in multiagent systems. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully and a (socially) desirable outcome is chosen. We propose an approach where a mechanism is automatically created for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Focusing on settings where side payments are not possible, we show that the mechanism design problem is NP-complete for deterministic mechanisms. This holds both for dominant-strategy implementation and for Bayes-Nash implementation. We then show that if we allow randomized mechanisms, the mechanism design problem becomes tractable. In other words, the coordinator can tackle the computational complexity introduced by its uncertainty about the agents' preferences by making the agents face additional uncertainty. This comes at no loss, and in some cases at a gain, in the (social) objective.
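The automated-design idea can be illustrated on a toy instance that is entirely invented (the utility table, instance size, and objective are not from the paper): enumerate all deterministic mechanisms, keep those that are dominant-strategy truthful, and return the best one under the social objective.

```python
from itertools import product

# Toy instance: 2 agents, binary types, binary outcomes, no payments.
# An agent gets utility 1 when the chosen outcome matches its true type.
TYPES, OUTCOMES = (0, 1), (0, 1)

def utility(true_type, outcome):
    return 1 if outcome == true_type else 0

profiles = list(product(TYPES, repeat=2))  # all reported type profiles

def is_dsic(mech):
    """Dominant-strategy truthfulness: for every agent, true type, and
    opponent report, truth-telling is at least as good as any lie."""
    for agent in (0, 1):
        for true_t in TYPES:
            for other in TYPES:
                def out(report):
                    prof = (report, other) if agent == 0 else (other, report)
                    return mech[prof]
                if any(utility(true_t, out(lie)) > utility(true_t, out(true_t))
                       for lie in TYPES):
                    return False
    return True

best = None
for choice in product(OUTCOMES, repeat=len(profiles)):
    mech = dict(zip(profiles, choice))
    if not is_dsic(mech):
        continue
    welfare = sum(utility(t1, mech[(t1, t2)]) + utility(t2, mech[(t1, t2)])
                  for (t1, t2) in profiles)
    if best is None or welfare > best[0]:
        best = (welfare, mech)

print(best[0])  # 6: total truthful welfare of the best DSIC mechanism
```

The brute-force loop makes the NP-completeness result tangible: the space of deterministic mechanisms grows exponentially in the number of type profiles, which is exactly what the randomized relaxation in the abstract sidesteps.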
Junta distributions and the average-case complexity of manipulating elections
 In AAMAS
, 2006
Abstract

Cited by 87 (22 self)
Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.
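A rough sketch of coalitional manipulation of a scoring protocol, using a common greedy heuristic rather than the paper's own construction: each manipulator ranks the preferred candidate first and hands the remaining scores, largest first, to the rivals with the lowest running totals. The numbers below are illustrative.

```python
def greedy_coalition_manipulation(totals, p, score_vec, k):
    """Greedy heuristic: k manipulators each rank p first, then award
    the remaining scores (largest first) to the rivals with the lowest
    running totals. Returns True if p ends up tied-or-ahead (we assume
    ties break for p). Being a heuristic, it can miss manipulations
    that a cleverer assignment would find."""
    totals = dict(totals)
    for _ in range(k):
        totals[p] = totals.get(p, 0) + score_vec[0]
        rivals = sorted((c for c in totals if c != p), key=lambda c: totals[c])
        for points, c in zip(score_vec[1:], rivals):
            totals[c] += points
    return totals[p] >= max(totals.values())

# Borda scores (2, 1, 0) over 3 candidates; 'p' trails after sincere votes.
print(greedy_coalition_manipulation({"a": 3, "b": 3, "p": 0}, "p", (2, 1, 0), 2))
```

With two manipulators the greedy ballots lift p into a tie; with only one, no assignment of the Borda scores suffices, which is the kind of threshold behavior the average-case analysis in the abstract probes.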
How hard is it to control an election?
 Mathematical and Computer Modelling
, 1992
Abstract

Cited by 79 (0 self)
Some voting schemes that are in principle susceptible to control are nevertheless resistant in practice due to excessive computational costs; others are vulnerable. We illustrate this in detail for plurality voting and for Condorcet voting.
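The two rules under discussion are easy to state in code, even though controlling them can be hard. A minimal sketch with an invented profile of ballots:

```python
from collections import Counter

def plurality_winner(ballots):
    """Winner = candidate with the most first-place votes
    (first-listed candidate wins ties in this sketch)."""
    counts = Counter(ballot[0] for ballot in ballots)
    return max(counts, key=counts.get)

def condorcet_winner(ballots):
    """Candidate who beats every rival in pairwise majority
    contests, or None if no such candidate exists."""
    candidates = set(ballots[0])
    for c in candidates:
        if all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > len(ballots)
               for d in candidates if d != c):
            return c
    return None

# 3 voters a>b>c, 2 voters b>c>a, 2 voters c>b>a: the two rules disagree.
ballots = ([["a", "b", "c"]] * 3 + [["b", "c", "a"]] * 2 + [["c", "b", "a"]] * 2)
print(plurality_winner(ballots))  # 'a'
print(condorcet_winner(ballots))  # 'b'
```

That the same profile yields different winners under the two rules is one reason their control problems have such different computational profiles.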
Nonexistence of Voting Rules That Are Usually Hard to Manipulate
, 2006
Abstract

Cited by 73 (6 self)
... problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success, exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, in this paper, we show that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone and allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly satisfy it. We also discuss approaches for potentially circumventing this impossibility result.
Exact analysis of Dodgson elections: Lewis Carroll’s 1876 voting system is complete for parallel access to NP
 Journal of the ACM
, 1997
Abstract

Cited by 54 (13 self)
Abstract. In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner—a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound—NP-hardness—on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, that is, it is complete for Θ₂ᵖ, for which it becomes the most natural complete problem known.
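Carroll's rule is simple to state even though winner determination is hard. A brute-force sketch for tiny invented elections, computing a candidate's Dodgson score by breadth-first search over sequences of adjacent swaps in the ballots:

```python
from collections import deque

def is_condorcet(ballots, c):
    """Does c beat every rival in pairwise majority contests?"""
    candidates = set(ballots[0])
    return all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > len(ballots)
               for d in candidates if d != c)

def dodgson_score(ballots, c):
    """Fewest adjacent swaps across the ballots needed to make c a
    Condorcet winner. The state space is exponential, so this is
    viable only for tiny elections, consistent with the hardness
    result above."""
    start = tuple(tuple(b) for b in ballots)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, cost = queue.popleft()
        if is_condorcet([list(b) for b in state], c):
            return cost
        for i, ballot in enumerate(state):
            for j in range(len(ballot) - 1):
                nb = list(ballot)
                nb[j], nb[j + 1] = nb[j + 1], nb[j]
                ns = state[:i] + (tuple(nb),) + state[i + 1:]
                if ns not in seen:
                    seen.add(ns)
                    queue.append((ns, cost + 1))

print(dodgson_score([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]], "a"))  # 1
```

Here a single swap in the second ballot suffices to make a a Condorcet winner, while b already is one (score 0); deciding such scores in general is exactly the Θ₂ᵖ-complete problem of the paper.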