Results 1–10 of 42
Complexity Results about Nash Equilibria, 2002
Cited by 130 (10 self)
Abstract: Noncooperative game theory provides a normative framework for analyzing strategic interactions.
Complexity of Mechanism Design, 2002
Cited by 120 (24 self)
Abstract: The aggregation of conflicting preferences is a central problem in multiagent systems. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully and a (socially) desirable outcome is chosen. We propose an approach where a mechanism is automatically created for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Focusing on settings where side payments are not possible, we show that the mechanism design problem is NP-complete for deterministic mechanisms. This holds both for dominant-strategy implementation and for Bayes-Nash implementation. We then show that if we allow randomized mechanisms, the mechanism design problem becomes tractable. In other words, the coordinator can tackle the computational complexity introduced by its uncertainty about the agents' preferences by making the agents face additional uncertainty. This comes at no loss, and in some cases at a gain, in the (social) objective.
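A toy sketch may make the objects in this abstract concrete. The following example (all agents, types, outcomes, and utilities are invented) brute-force checks whether a deterministic mechanism without payments is dominant-strategy incentive compatible, i.e., whether truthful reporting is optimal for every agent no matter what the others report:

```python
# Hypothetical illustration: DSIC check for a deterministic, payment-free
# mechanism. A mechanism maps each profile of reported types to an outcome.
from itertools import product

TYPES = ["low", "high"]   # each agent's possible types (invented)
OUTCOMES = ["A", "B"]

# u[agent][true_type][outcome]: agent's utility for an outcome (invented)
u = {
    0: {"low": {"A": 1, "B": 0}, "high": {"A": 0, "B": 2}},
    1: {"low": {"A": 2, "B": 1}, "high": {"A": 0, "B": 3}},
}

def is_dsic(mechanism):
    """mechanism: dict mapping (report0, report1) -> outcome."""
    for agent in (0, 1):
        for profile in product(TYPES, repeat=2):
            true_type = profile[agent]
            truthful = u[agent][true_type][mechanism[profile]]
            for lie in TYPES:
                misreport = list(profile)
                misreport[agent] = lie
                if u[agent][true_type][mechanism[tuple(misreport)]] > truthful:
                    return False  # some agent gains by lying
    return True

# A constant mechanism is trivially DSIC: reports never change the outcome.
constant = {p: "A" for p in product(TYPES, repeat=2)}
print(is_dsic(constant))  # True
```

The automated-mechanism-design problem studied in the paper is, roughly, searching this space of mappings for one that satisfies such constraints while maximizing the designer's objective; the abstract's NP-completeness result says that search is hard for deterministic mechanisms.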
Partial-revelation VCG mechanism for combinatorial auctions
In Proceedings of the National Conference on Artificial Intelligence (AAAI)
Cited by 48 (20 self)
Abstract: Winner determination in combinatorial auctions has received significant interest in the AI community in the last 3 years. Another difficult problem in combinatorial auctions is that of eliciting the bidders' preferences. We introduce a progressive, partial-revelation mechanism that determines an efficient allocation and the Vickrey payments. The mechanism is based on a family of algorithms that explore the natural lattice structure of the bidders' combined preferences. The mechanism elicits utilities in a natural sequence, and aims at keeping the amount of elicited information and the effort to compute the information minimal. We present analytical results on the amount of elicitation. We show that no value-querying algorithm that is constrained to querying feasible bundles can save more elicitation than one of our algorithms. We also show that one of our algorithms can determine the Vickrey payments as a costless byproduct of determining an optimal allocation.
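The Vickrey (VCG) payments mentioned above have a compact definition that a toy sketch can illustrate. This is not the paper's elicitation mechanism, just a brute-force computation on invented bids: each winner pays the welfare the other bidders would obtain without him, minus the welfare the others obtain in the chosen allocation.

```python
# Toy sketch (all bids invented): VCG payments in a combinatorial auction
# via exhaustive winner determination.

ITEMS = frozenset({"a", "b"})
# bids[bidder][bundle] = value; bundles are frozensets of items
bids = {
    "x": {frozenset({"a"}): 5, frozenset({"a", "b"}): 7},
    "y": {frozenset({"b"}): 4},
    "z": {frozenset({"a", "b"}): 8},
}

def best_allocation(bidders):
    """Exhaustively assign disjoint bundles to maximize total value."""
    best, best_alloc = 0, {}
    def rec(rest, free, value, alloc):
        nonlocal best, best_alloc
        if value > best:
            best, best_alloc = value, dict(alloc)
        if not rest:
            return
        b, tail = rest[0], rest[1:]
        rec(tail, free, value, alloc)  # bidder b wins nothing
        for bundle, v in bids[b].items():
            if bundle <= free:         # bundle still available
                alloc[b] = bundle
                rec(tail, free - bundle, value + v, alloc)
                del alloc[b]
    rec(list(bidders), ITEMS, 0, {})
    return best, best_alloc

welfare, alloc = best_allocation(bids)
for winner in alloc:
    without, _ = best_allocation([b for b in bids if b != winner])
    others_now = welfare - bids[winner][alloc[winner]]
    print(winner, "pays", without - others_now)
```

On this instance the efficient allocation gives "a" to x and "b" to y (welfare 9), and the Clarke pivot rule yields each winner's payment from two winner-determination calls, which is exactly why computing Vickrey payments alongside the allocation, as the paper does, is attractive.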
CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions, 2005
Cited by 48 (8 self)
Abstract: Combinatorial auctions where bidders can bid on bundles of items can lead to more economically efficient allocations, but determining the winners is NP-complete and inapproximable. We present CABOB, a sophisticated optimal search algorithm for the problem. It uses decomposition techniques, upper and lower bounding (also across components), elaborate and dynamically chosen bid-ordering heuristics, and a host of structural observations. CABOB attempts to capture structure in any instance without making assumptions about the instance distribution. Experiments against the fastest prior algorithm, CPLEX 8.0, show that CABOB is often faster, seldom drastically slower, and in many cases drastically faster, especially in cases with structure. CABOB's search runs in linear space and has significantly better anytime performance than CPLEX. We also uncover interesting aspects of the problem itself. First, problems with short bids, which were hard for the first generation of specialized algorithms, are easy. Second, almost all of the CATS distributions are easy, and the run time is virtually unaffected by the number of goods. Third, we test several random restart strategies, showing that they do not help on this problem: the run-time distribution does not have a heavy tail.
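CABOB itself is far more elaborate, but the core idea of depth-first search with upper bounding can be sketched in a few lines. The bids and the bound below are invented for illustration; this bound simply ignores conflicts among the remaining bids, which is valid (it can only overestimate) and suffices to prune.

```python
# Minimal branch-and-bound sketch (toy data, not CABOB itself) for winner
# determination: DFS over bids, pruning branches whose optimistic upper
# bound cannot beat the incumbent. Runs in linear space, like DFS.

bids = [  # (bundle, price) -- hypothetical
    (frozenset("ab"), 9),
    (frozenset("a"), 5),
    (frozenset("b"), 4),
    (frozenset("c"), 3),
]
ITEMS = frozenset("abc")

def solve():
    best = 0
    def upper_bound(i, free):
        # Relaxation: pretend remaining compatible bids never conflict.
        return sum(p for bundle, p in bids[i:] if bundle <= free)
    def dfs(i, free, value):
        nonlocal best
        best = max(best, value)
        if i == len(bids) or value + upper_bound(i, free) <= best:
            return  # prune: even the optimistic bound cannot improve
        bundle, price = bids[i]
        if bundle <= free:
            dfs(i + 1, free - bundle, value + price)  # accept bid i
        dfs(i + 1, free, value)                       # reject bid i
    dfs(0, ITEMS, 0)
    return best

print(solve())  # 12
```

CABOB layers much more on top of this skeleton: decomposition into independent components, bounding across components, and dynamically chosen bid orderings.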
Bargaining with Limited Computation: Deliberation Equilibrium
In Artificial Intelligence, 2001
Cited by 45 (19 self)
Abstract: We develop a normative theory of interaction, negotiation in particular, among self-interested, computationally limited agents where computational actions are game-theoretically treated as part of an agent's strategy. We focus on a 2-agent setting where each agent has an intractable individual problem, and there is a potential gain from pooling the problems, giving rise to an intractable joint problem. At any time, an agent can compute to improve its solution to its own problem, its opponent's problem, or the joint problem. At a deadline the agents then decide whether to implement the joint solution, and if so, how to divide its value (or cost). We present a fully normative model for controlling anytime algorithms where each agent has statistical performance profiles which are optimally conditioned on the problem instance as well as on the path of results of the algorithm run so far. Using this model, we introduce a solution concept, which we call deliberation equilibrium. It is the perfect Bayesian equilibrium of the game where deliberation actions are part of each agent's strategy. The equilibria differ based on whether the performance profiles are deterministic or stochastic, whether the deadline is known or not, and whether the proposer is known in advance or not. We present algorithms for finding the equilibria. Finally, we show that there exist instances of the deliberation-bargaining problem where no pure-strategy equilibria exist and also instances where the unique equilibrium outcome is not Pareto efficient.
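A deterministic performance profile is just a table of solution quality as a function of compute time. The following toy sketch (profiles and deadline invented, and stripped of all game-theoretic interaction) shows only the single-agent core of the control problem: splitting a fixed deliberation budget among problems to maximize the value available at the deadline.

```python
# Toy deliberation-control sketch with deterministic performance profiles.
# profiles[p][t] = solution value after spending t compute steps on p.
from itertools import product

profiles = {  # invented numbers
    "own":   [0, 2, 3, 3.5],   # agent's individual problem
    "joint": [0, 1, 4, 6],     # pooled joint problem
}
T = 3  # compute steps available before the deadline

# Enumerate all splits of the budget; keep the best achievable value.
best = max(
    (max(profiles["own"][a], profiles["joint"][b]), a, b)
    for a, b in product(range(T + 1), repeat=2)
    if a + b == T
)
print(best)  # (value, steps_on_own, steps_on_joint)
```

In the paper this choice is strategic: each step an agent may also compute on its opponent's problem, the profiles may be stochastic and conditioned on results so far, and the deliberation actions are part of the equilibrium strategy rather than a standalone optimization.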
Computational Criticisms of the Revelation Principle, 2003
Cited by 38 (10 self)
Abstract: The revelation principle is a cornerstone tool in mechanism design. It states that one can restrict attention, without loss in the designer's objective, to mechanisms in which A) the agents report their types completely in a single step up front, and B) the agents are motivated to be truthful. We show that reasonable constraints on computation and communication can invalidate the revelation principle. Regarding A, we show that by moving to multistep mechanisms, one can reduce exponential communication and computation to linear, thereby answering a recognized important open question in mechanism design. Regarding B, we criticize the focus on truthful mechanisms, a dogma that has, to our knowledge, never been criticized before. First, we study settings where the optimal truthful mechanism is complete to execute for the center. In that setting we show that by moving to insincere mechanisms, one can shift the burden of having to solve the complete problem from the center to one of the agents. Second, we study a new oracle model that captures the setting where utility values can be hard to compute even when all the pertinent information is available, a situation that occurs in many practical applications. In this model we show that by moving to insincere mechanisms, one can shift the burden of having to ask the oracle an exponential number of costly queries from the center to one of the agents. In both cases the insincere mechanism is equally good as the optimal truthful mechanism in the presence of unlimited computation. More interestingly, whereas being unable to carry out either difficult task would have hurt the center in achieving his objective in the truthful setting, if the agent is unable to carry out either difficult task, the value of the center's objec...
Combinatorial auctions with k-wise dependent valuations
In Proc. 20th National Conference on Artificial Intelligence (AAAI-05), 2005
Cited by 26 (7 self)
Abstract: We analyze the computational and communication complexity of combinatorial auctions from a new perspective: the degree of interdependency between the items for sale in the bidders' preferences. Denoting by Gk the class of valuations displaying up to k-wise dependencies, we consider the hierarchy G1 ⊂ G2 ⊂ ··· ⊂ Gm, where m is the number of items for sale. We show that the minimum nontrivial degree of interdependency (2-wise dependency) is sufficient to render NP-hard the problem of computing the optimal allocation (but we also exhibit a restricted class of such valuations for which computing the optimal allocation is easy). On the other hand, bidders' preferences can be communicated efficiently (i.e., exchanging a polynomial amount of information) as long as the interdependencies between items are limited to sets of cardinality up to k, where k is an arbitrary constant. The amount of communication required to transmit the bidders' preferences becomes superpolynomial (under the assumption that only value queries are allowed) when interdependencies occur between sets of cardinality g(m), where g(m) is an arbitrary function such that g(m) → ∞ as m → ∞. We also consider approximate elicitation, in which the auctioneer learns, asking polynomially many value queries, an approximation of the bidders' actual preferences.
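A 2-wise valuation, the lowest nontrivial level of the hierarchy above, can be written down explicitly: per-item values plus pairwise synergies, so O(m^2) numbers describe a function on all 2^m bundles. A toy sketch with invented numbers:

```python
# Illustrative 2-wise valuation (all numbers invented): bundle value is the
# sum of per-item values plus pairwise synergy terms. Positive synergy means
# complements; negative means substitutes.
from itertools import combinations

item_value = {"a": 3, "b": 2, "c": 1}
synergy = {frozenset("ab"): 4, frozenset("bc"): -1}

def v(bundle):
    total = sum(item_value[i] for i in bundle)
    total += sum(synergy.get(frozenset(p), 0) for p in combinations(bundle, 2))
    return total

print(v({"a", "b"}))       # 3 + 2 + 4 = 9
print(v({"a", "b", "c"}))  # 3 + 2 + 1 + 4 - 1 = 9
```

Even this simplest interdependent class already makes computing the optimal allocation NP-hard, per the result above, while keeping communication polynomial.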
Sequences of take-it-or-leave-it offers: Near-optimal auctions without full valuation revelation
In AMEC-V, 2003
An Alternating Offers Bargaining Model for Computationally Limited Agents, 2002
Cited by 24 (4 self)
Abstract: An alternating offers bargaining model for computationally limited agents is presented. The agents compute to determine plans, but deadlines restrict them from determining an optimal solution. As the agents compute, they also negotiate over whether to perform a joint plan or whether to act independently, and how, if implemented, the value of the joint plan would be divided. Their computing actions and bargaining actions are interrelated, and both are incorporated into each agent's strategy. We analyze the model for equilibrium strategies for agents under different conditions. It is shown that the equilibrium strategies for the alternating offers model, where agents take turns making offers and counteroffers, even with its extremely large action space, are equivalent to those of a much simpler single-shot, take-it-or-leave-it bargaining model. In particular, agents will compute and make no offers until the first agent's deadline.
Automated Mechanism Design: A New Application Area for Search Algorithms
In Proceedings of the International Conference on Principles and Practice of Constraint Programming (CP 03), Kinsale, County, 2003
Cited by 24 (2 self)
Abstract: Mechanism design is the art of designing the rules of the game (a.k.a. the mechanism) so that a desirable outcome (according to a given objective) is reached despite the fact that each agent acts in his own self-interest.