Results 1–8 of 8
Intrinsic Robustness of the Price of Anarchy
 STOC'09
, 2009
Abstract

Cited by 101 (12 self)
The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a “canonical sufficient condition” for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or “price of total anarchy”). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games — most notably, congestion games with cost functions restricted to an arbitrary fixed set — that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first
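The POA definition in this abstract can be illustrated by brute force on a tiny game. The sketch below uses an invented two-player, two-link congestion game (the cost functions are illustrative, not from the paper): it enumerates all strategy profiles, finds the pure Nash equilibria, and takes the ratio of the worst equilibrium's social cost to the optimal social cost.

```python
from itertools import product

# Toy congestion game (illustrative): each of two players picks one of two
# parallel links; a link's per-player cost depends on its load x.
costs = [lambda x: x,      # link 0: linear cost c(x) = x
         lambda x: 2.0]    # link 1: constant cost c(x) = 2

def social_cost(profile):
    """Sum of all players' costs under a strategy profile."""
    return sum(costs[link](profile.count(link)) for link in profile)

def player_cost(profile, i):
    """Cost paid by player i: cost of their link at its current load."""
    return costs[profile[i]](profile.count(profile[i]))

def is_pure_nash(profile):
    """No player can strictly reduce their cost by a unilateral deviation."""
    for i in range(len(profile)):
        for dev in range(2):
            alt = list(profile)
            alt[i] = dev
            if player_cost(tuple(alt), i) < player_cost(profile, i) - 1e-9:
                return False
    return True

profiles = list(product(range(2), repeat=2))
nash = [p for p in profiles if is_pure_nash(p)]
opt = min(social_cost(p) for p in profiles)
poa = max(social_cost(p) for p in nash) / opt  # worst equilibrium / optimum
```

In this instance the worst pure Nash equilibrium (both players on the linear link) has social cost 4 against an optimum of 3, so the POA of the toy game is 4/3.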
Price of Anarchy for Greedy Auctions
Abstract

Cited by 30 (9 self)
We study mechanisms for utilitarian combinatorial allocation problems, where agents are not assumed to be single-minded. This class of problems includes combinatorial auctions, multi-unit auctions, unsplittable flow problems, and others. We focus on the problem of designing mechanisms that approximately optimize social welfare at every Bayes-Nash equilibrium (BNE), which is the standard notion of equilibrium in settings of incomplete information. For a broad class of greedy approximation algorithms, we give a general black-box reduction to deterministic mechanisms with almost no loss to the approximation ratio at any BNE. We also consider the special case of Nash equilibria in full-information games, where we obtain tightened results. This solution concept is closely related to the well-studied price of anarchy. Furthermore, for a rich subclass of allocation problems, pure Nash equilibria are guaranteed to exist for our mechanisms. For many problems, the approximation factors we obtain at equilibrium improve upon the best known results for deterministic truthful mechanisms. In particular, we exhibit a simple deterministic mechanism for general combinatorial auctions that obtains an O(√m) approximation at every BNE.
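As a rough sketch of the style of greedy approximation algorithm such mechanisms build on, the snippet below ranks bids by value divided by the square root of the bundle size and grants them greedily while their items remain free; this classic ordering gives an O(√m)-approximation for single-minded bidders. It is not the paper's exact construction, and the bid data is invented.

```python
import math

def greedy_allocate(bids):
    """bids: list of (value, set_of_items). Returns indices of winning bids.

    Illustrative greedy rule: consider bids in decreasing order of
    value / sqrt(|bundle|) and accept any bid whose items are still free.
    """
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / math.sqrt(len(bids[i][1])),
                   reverse=True)
    taken, winners = set(), []
    for i in order:
        value, items = bids[i]
        if taken.isdisjoint(items):
            winners.append(i)
            taken |= set(items)
    return winners

# Invented example: two small bids outrank one large bundle bid.
bids = [(4.0, {1, 2, 3, 4}), (3.0, {1}), (3.0, {2})]
winners = greedy_allocate(bids)  # ranks: 4/2 = 2.0 vs 3.0 and 3.0
```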
GSP auctions with correlated types
 In Proceedings of the 12th Annual ACM Conference on Electronic Commerce (EC)
, 2011
Abstract

Cited by 22 (5 self)
The Generalized Second Price (GSP) auction is the primary method by which sponsored search advertisements are sold. We study the performance of this auction in the Bayesian setting for players with correlated types. Correlation arises very naturally in the context of sponsored search auctions, especially as a result of uncertainty inherent in the behaviour of the underlying ad allocation algorithm. We demonstrate that the Bayesian Price of Anarchy of the GSP auction is bounded by 4, even when agents have arbitrarily correlated types. Our proof highlights a connection between the GSP mechanism and the concept of smoothness in games, which may be of independent interest. For the special case of uncorrelated (i.e. independent) agent types, we improve our bound to 2(1 − 1/e)⁻¹ ≈ 3.16, significantly improving upon previously known bounds. Using our techniques, we obtain the same bound on the performance of GSP at coarse correlated equilibria, which captures (for example) a repeated-auction setting in which agents apply regret-minimizing bidding strategies. Moreover, our analysis is robust against the presence of irrational bidders and settings of asymmetric information, and our bounds degrade gracefully when agents apply strategies that form only an approximate equilibrium.
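For readers unfamiliar with the auction itself, here is a minimal sketch of the standard GSP allocation and pricing rule discussed above: bidders are sorted by bid, the k-th highest bidder gets the k-th slot, and pays the (k+1)-th highest bid per click. The bid values, click-through rates, and tie-breaking are illustrative assumptions.

```python
def gsp(bids, ctrs):
    """bids: per-click bid of each bidder; ctrs: slot click-through rates,
    best slot first. Returns (winner, ctr, price_per_click) per slot.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for slot, ctr in enumerate(ctrs):
        winner = order[slot]
        # Second-price flavor: pay the next-highest bid, 0 if none remains.
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        outcome.append((winner, ctr, next_bid))
    return outcome

# Invented example: three bidders, two slots.
result = gsp(bids=[5.0, 3.0, 1.0], ctrs=[0.2, 0.1])
# slot 0 -> bidder 0 pays 3.0 per click; slot 1 -> bidder 1 pays 1.0 per click
```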
Do Externalities Degrade GSP’s Efficiency?
, 2012
Abstract

Cited by 3 (0 self)
We consider variants of the cascade model of externalities in sponsored search auctions introduced independently by Aggarwal et al. and Kempe and Mahdian in 2008, where the click-through rate of a slot depends also on the ads assigned to earlier slots. Aggarwal et al. and Kempe and Mahdian give a dynamic programming algorithm for finding the efficient allocation in this model. We give worst-case efficiency bounds for a variant of the classical Generalized Second Price (GSP) auction in this model. Our technical approach is to first consider an idealized version of the model where an unlimited number of ads can be displayed on the same page; here, Aggarwal et al. and Kempe and Mahdian show that a greedy algorithm finds the optimal allocation. The game-theoretic analog of this greedy algorithm can be thought of as a variant of the classical GSP auction. We give the first non-trivial worst-case efficiency bounds for GSP in this model. In the more general model with limited slots, greedy algorithms like GSP can compute extremely bad allocations. Nonetheless, we show that an appropriate extension of the greedy algorithm is approximately optimal, and that the worst-case equilibrium inefficiency in the corresponding analog of GSP also remains bounded. In the context of these models, the GSP mechanisms suffer from two forms of suboptimality: that from using a simple allocation rule (the greedy algorithm) rather than an optimal one (based on dynamic programming), and that from the strategic behavior of the bidders (caused by using the GSP’s critical bid pricing rule rather than one leading to a dominant-strategy implementation). Our results show that for this class of problems, the two causes of efficiency loss can be analyzed separately.
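A minimal sketch of the cascade externality being described, under simplifying assumptions: each ad i has a value v[i] if the user reaches it and a continuation probability q[i] (the chance the user keeps scanning past it), so an ad's effective exposure depends on the ads shown before it. With unlimited slots, ordering ads by decreasing v/(1 − q) maximizes expected welfare (a standard exchange argument on adjacent ads); the numbers below are invented, and this is not the papers' exact model.

```python
def cascade_welfare(ads):
    """ads: list of (value, continuation_probability) in display order.

    The user reaches ad i with probability equal to the product of the
    continuation probabilities of all earlier ads.
    """
    welfare, reach = 0.0, 1.0
    for value, q in ads:
        welfare += reach * value
        reach *= q
    return welfare

# Invented example: the lower-value ad has a much higher continuation
# probability, so the v/(1 - q) order puts it first.
ads = [(4.0, 0.5), (3.0, 0.9)]
greedy = sorted(ads, key=lambda a: a[0] / (1.0 - a[1]), reverse=True)
# v/(1-q): 4/0.5 = 8 vs 3/0.1 = 30, so (3.0, 0.9) is displayed first
```

Swapping the two ads here drops the expected welfare from 3.0 + 0.9·4.0 = 6.6 to 4.0 + 0.5·3.0 = 5.5, which is exactly the externality the abstract refers to.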
On the Limitations of Greedy Mechanism Design for Truthful Combinatorial Auctions
Abstract

Cited by 3 (0 self)
We study the combinatorial auction (CA) problem, in which m objects are sold to rational agents and the goal is to maximize social welfare. Of particular interest is the special case in which agents are interested in sets of size at most s (s-CAs), where a simple greedy algorithm obtains an (s + 1)-approximation but no truthful algorithm is known to perform better than O(m/√log m). As partial work towards resolving this gap, we ask: what is the power of truthful greedy algorithms for CA problems? The notion of greediness is associated with a broad class of algorithms, known as priority algorithms, which encapsulates many natural auction methods. We show that no truthful greedy priority algorithm can obtain an approximation to the CA problem that is sublinear in m, even for s-CAs with s ≥ 2.
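The simple greedy (s + 1)-approximation mentioned in the abstract can be sketched as follows: take bids in decreasing value order and accept any bid whose items are still free. Each accepted bid can block at most s optimal bids, none of larger value, which yields the bound. The example data is invented, and the brute-force optimum is included only to check the ratio.

```python
from itertools import combinations

def greedy_welfare(bids):
    """bids: list of (value, set_of_items) with |set| <= s.
    Greedy by decreasing value; accept a bid if its items are free."""
    taken, welfare = set(), 0.0
    for value, items in sorted(bids, key=lambda b: b[0], reverse=True):
        if taken.isdisjoint(items):
            welfare += value
            taken |= set(items)
    return welfare

def optimal_welfare(bids):
    """Brute-force optimum: best total value over disjoint subsets of bids."""
    best = 0.0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            sets = [it for _, it in combo]
            union = set().union(*sets)
            if len(union) == sum(len(it) for it in sets):  # pairwise disjoint
                best = max(best, sum(v for v, _ in combo))
    return best

s = 2
bids = [(5.0, {1, 2}), (4.0, {2, 3}), (4.0, {1})]
# Greedy takes the 5.0 bid and blocks both 4.0 bids; the optimum takes both.
```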
Barriers to nearoptimal equilibria
 In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS)
Abstract

Cited by 3 (3 self)
This paper explains when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worst-case quality of equilibria in games derived from the problem. We give three families of lower bounds on the quality of equilibria, each motivated by a different set of problems: congestion, scheduling, and distributed welfare games; welfare-maximization in combinatorial auctions with “black-box” bidder valuations; and welfare-maximization in combinatorial auctions with succinctly described valuations. The most straightforward use of our lower bound framework is to harness an existing computational or communication lower bound to derive a lower bound on the worst-case price of anarchy (POA) in a class of games. This is a new approach to POA lower bounds, which relies on reductions in lieu of explicit constructions. More generally, the POA lower bounds implied by our framework apply to all classes of games that share the same underlying optimization problem, independent of the details of players’ utility functions. For this reason, our lower bounds are particularly significant for problems of game design — ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks — where the goal is to design a game that has only near-optimal equilibria. For example, our results imply that the simultaneous first-price auction format is optimal among all “simple combinatorial auctions” in several settings.
The Power of Uncertainty: Algorithmic Mechanism Design in Settings of Incomplete Information
, 2011
Abstract
Abstract
The field of algorithmic mechanism design is concerned with the design of computationally efficient algorithms for use when inputs are provided by rational agents, who may misreport their private values in order to strategically manipulate the algorithm for their own benefit. We revisit classic problems in this field by considering settings of incomplete information, where the players’ private values are drawn from publicly known distributions. Such Bayesian models of partial information are common in economics, but have been largely unexplored by the computer science community. In the first part of this thesis we show that, for a very broad class of single-parameter problems, any computationally efficient algorithm can be converted without loss into a mechanism that is truthful in the Bayesian sense of partial information. That is, we exhibit a transformation that generates mechanisms for which it is in each agent’s best (expected) interest to refrain from strategic manipulation. The problem of constructing mechanisms for use by rational agents therefore reduces to the design of approximation algorithms without consideration of game-theoretic issues. We furthermore prove that