Results 1–10 of 77
Internet Advertising and the Generalized Second Price Auction: Selling Billions of Dollars Worth of Keywords
American Economic Review, 2007
Abstract

Cited by 553 (20 self)
We investigate the “generalized second-price” (GSP) auction, a new mechanism used by search engines to sell online advertising. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. Unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP, we describe the generalized English auction that corresponds to GSP and show that it has a unique equilibrium. This is an ex post equilibrium, with the same payoffs to all players as the dominant strategy equilibrium of VCG.
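The payment gap between the two formats is easy to see numerically. The sketch below (not from the paper; bids and click-through rates are made-up illustrative numbers) computes per-bidder GSP and VCG payments for a keyword with two ad slots:

```python
def gsp_payments(bids, ctrs):
    """GSP: the bidder in slot i pays the (i+1)-th highest bid per click."""
    bids = sorted(bids, reverse=True)
    return [bids[i + 1] * ctrs[i] if i + 1 < len(bids) else 0.0
            for i in range(min(len(bids), len(ctrs)))]

def vcg_payments(bids, ctrs):
    """VCG: the bidder in slot i pays the externality it imposes on the
    bidders below it (the clicks they lose by being pushed down a slot)."""
    bids = sorted(bids, reverse=True)
    ctr = lambda j: ctrs[j] if j < len(ctrs) else 0.0
    return [sum(bids[j] * (ctr(j - 1) - ctr(j))
                for j in range(i + 1, len(bids)))
            for i in range(min(len(bids), len(ctrs)))]

bids = [10.0, 4.0, 2.0]   # hypothetical per-click values
ctrs = [0.25, 0.125]      # hypothetical click-through rates of the slots
print(gsp_payments(bids, ctrs))  # [1.0, 0.25]
print(vcg_payments(bids, ctrs))  # [0.75, 0.25]
```

On this instance, truthful bids leave the top slot paying more under GSP than under VCG, illustrating that the two mechanisms price differently even on identical inputs.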
Mechanism design via differential privacy
Proceedings of the 48th Annual Symposium on Foundations of Computer Science, 2007
Abstract

Cited by 206 (3 self)
We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy [15, 14], in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximately dominant-strategy truthful under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.
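One concrete instantiation of this idea is the exponential mechanism: pick a fixed sale price with probability exponentially weighted by the revenue it would earn, so no single bid can shift the distribution much. A minimal sketch, not the paper's exact construction, with illustrative bids, candidate prices, and ε:

```python
import math
import random

def private_fixed_price(bids, prices, eps, rng=random):
    """Exponential-mechanism price selection for a digital good.
    Sampling probability ~ exp(eps * revenue / (2 * sensitivity)),
    where one bidder can change revenue by at most max(prices)."""
    def revenue(p):
        return p * sum(1 for b in bids if b >= p)
    sensitivity = max(prices)
    weights = [math.exp(eps * revenue(p) / (2 * sensitivity)) for p in prices]
    return rng.choices(prices, weights=weights, k=1)[0]

bids = [1, 1, 2, 3, 5, 5, 8]
prices = [1, 2, 3, 5, 8]
print(private_fixed_price(bids, prices, eps=1.0))  # one of the candidate prices
```

Larger ε concentrates the distribution on near-optimal prices; smaller ε gives stronger privacy but flatter weights and less revenue.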
Competitive Auctions
Abstract

Cited by 116 (12 self)
We study a class of single-round, sealed-bid auctions for an item in unlimited supply, such as a digital good. We introduce the notion of competitive auctions. A competitive auction is truthful (i.e., encourages bidders to bid their true valuations) and on all inputs yields profit that is within a constant factor of the profit of the optimal single sale price. We justify the use of optimal single price profit as a benchmark for evaluating a competitive auction's profit. We exhibit several randomized competitive auctions and show that there is no symmetric deterministic competitive auction. Our results extend to bounded supply markets, for which we also give competitive auctions.
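The single-price benchmark is simple to compute in hindsight: sort the bids and try each as a posted price. A sketch with made-up bid values; the `min_winners` parameter reflects the common variant of the benchmark that requires at least two winners:

```python
def optimal_single_price_profit(bids, min_winners=1):
    """Best profit from a single posted price, chosen with hindsight.
    Selling at the i-th highest bid yields i * (that bid)."""
    vs = sorted(bids, reverse=True)
    return max((i + 1) * vs[i] for i in range(min_winners - 1, len(vs)))

bids = [9, 5, 4, 3, 1]
print(optimal_single_price_profit(bids))                 # 12: price 4, three buyers
print(optimal_single_price_profit(bids, min_winners=2))  # 12
```

A competitive auction must earn a constant fraction of this quantity on every input, without knowing the bids in advance.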
Algorithmic pricing via virtual valuations
In Proc. of 8th EC, 2007
Abstract

Cited by 57 (5 self)
Algorithmic pricing is the computational problem that sellers (e.g., in supermarkets) face when trying to set prices for their items to maximize their profit in the presence of a known demand. Guruswami et al. [9] propose this problem and give logarithmic approximations (in the number of consumers) for the unit-demand and single-parameter cases where there is a specific set of consumers and their valuations for bundles are known precisely. Subsequently several versions of the problem have been shown to have polylogarithmic inapproximability. This problem has direct ties to the important open question of better understanding the Bayesian optimal mechanism in multi-parameter agent settings; however, for this purpose approximation factors logarithmic in the number of agents are inadequate. It is therefore of vital interest to consider special cases where constant approximations are possible. We consider the unit-demand variant of this pricing problem. Here a consumer has a valuation for each different item, and their value for a set of items is simply the maximum value they have for any item in the set. Instead of considering a set of consumers with precisely known preferences, like the prior algorithmic pricing literature, we assume that the preferences of the consumers are drawn from a distribution. This is the standard assumption in economics; furthermore, the setting of a specific set of customers with specific preferences, which is employed in all of the prior work in algorithmic pricing, is a special case of this general Bayesian pricing problem, where there is a discrete Bayesian distribution for preferences specified by picking one consumer uniformly from the given set of consumers. Notice that the distribution over the valuations for the individual items that this generates is obviously correlated. Our work complements these existing works by considering the case where the consumer's valuations for the different items are independent random variables.
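The unit-demand buying rule described here, each buyer takes the single item maximizing value minus price, makes evaluating any candidate price vector straightforward. A sketch on a made-up instance (the instance and function name are illustrative, not from the paper):

```python
def revenue(prices, consumers):
    """Revenue of a price vector when each unit-demand consumer buys the
    one item maximizing (value - price), provided that surplus is >= 0."""
    total = 0.0
    for vals in consumers:  # vals[j] = this consumer's value for item j
        surplus, price = max((v - p, p) for v, p in zip(vals, prices))
        if surplus >= 0:
            total += price
    return total

# hypothetical instance: two items, three consumers
consumers = [(8, 3), (4, 6), (2, 2)]
print(revenue((5, 4), consumers))  # 9.0: first buys item 0, second item 1, third nothing
```

In the Bayesian version studied here, the `consumers` list would be replaced by a distribution over valuation vectors and the goal is to choose prices maximizing expected revenue.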
Worst-case optimal redistribution of VCG payments
In Proceedings of the ACM Conference on Electronic Commerce (EC), 2007
Abstract

Cited by 57 (18 self)
For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategyproof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategyproofness, individual rationality, and the non-deficit property.
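For a single-item Vickrey auction, Cavallo's rebate has a simple closed form, as we understand it: each agent receives 1/n of the second-highest bid among the *other* agents, an amount their own bid cannot influence. A sketch on made-up bids:

```python
def cavallo_single_item(bids):
    """Single-item Vickrey auction with Cavallo-style rebates.
    Agent i's rebate depends only on the OTHER agents' bids, so
    strategyproofness is preserved, and the total rebate never
    exceeds the Vickrey revenue (non-deficit)."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])
    winner, price = order[0], bids[order[1]]
    rebates = []
    for i in range(n):
        others = sorted((bids[j] for j in range(n) if j != i), reverse=True)
        rebates.append(others[1] / n)  # 1/n of second-highest among others
    assert sum(rebates) <= price + 1e-9  # non-deficit check
    return winner, price, rebates

print(cavallo_single_item([10, 8, 6, 4]))  # (0, 8, [1.5, 1.5, 2.0, 2.0])
```

Here 7 of the 8 units of Vickrey revenue flow back to the agents; only 1 unit is lost from the system.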
A knapsack secretary problem with applications
In APPROX '07, 2007
Cited by 53 (5 self)
Bayesian Algorithmic Mechanism Design
2010
Abstract

Cited by 45 (11 self)
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truth-telling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
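The incompatibility can be seen on a toy instance (made up here, not from the paper): run a greedy allocation in a single-minded combinatorial auction and charge Clarke-style payments computed from the greedy outcomes. A truthful winner can be charged more than its value, so underbidding is strictly better:

```python
def greedy_alloc(bidders):
    """Greedy by bid value; bidders are (item_set, bid) pairs."""
    taken, winners = set(), []
    for i in sorted(range(len(bidders)), key=lambda i: -bidders[i][1]):
        items, bid = bidders[i]
        if not (items & taken):
            taken |= items
            winners.append(i)
    return winners

def clarke_payment(bidders, i):
    """Clarke payment computed from the greedy (approximate) allocation:
    others' greedy welfare without i, minus their welfare with i present."""
    def welfare(bs, ws, skip=None):
        return sum(bs[j][1] for j in ws if j != skip)
    without_i = [b if j != i else (b[0], 0) for j, b in enumerate(bidders)]
    return (welfare(without_i, greedy_alloc(without_i))
            - welfare(bidders, greedy_alloc(bidders), skip=i))

bidders = [({"A", "B"}, 3), ({"A"}, 2), ({"B"}, 2)]
# Bidder 0 wins under greedy but is charged 4 > its value of 3, so
# bidding 0 (losing, utility 0) beats truth-telling (utility -1).
print(greedy_alloc(bidders), clarke_payment(bidders, 0))  # [0] 4
```

With the exact optimal allocation, Clarke payments never exceed a winner's value; the failure above comes entirely from substituting the approximation.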
Better redistribution with inefficient allocation in multiunit auctions with unit demand
In Proceedings of the ACM Conference on Electronic Commerce (EC), 2008
Abstract

Cited by 29 (11 self)
For the problem of allocating one or more items among a group of competing agents, the Vickrey-Clarke-Groves (VCG) mechanism is strategyproof and efficient. However, the VCG mechanism is not strongly budget balanced: in general, value flows out of the system of agents in the form of VCG payments, which reduces the agents' utilities. In many settings, the objective is to maximize the sum of the agents' utilities (taking payments into account). For this purpose, several VCG redistribution mechanisms have been proposed that redistribute a large fraction of the VCG payments back to the agents, in a way that maintains strategyproofness and the non-deficit property. Unfortunately, sometimes even the best VCG redistribution mechanism fails to redistribute a substantial fraction of the VCG payments, resulting in a low total utility for the agents.
On the Competitive Ratio of the Random Sampling Auction
In Proc. 1st Workshop on Internet and Network Economics, 2005
Abstract

Cited by 28 (7 self)
We give a simple analysis of the competitive ratio of the random sampling auction from [10]. The random sampling auction was first shown to be worst-case competitive in [9] (with a bound of 7600 on its competitive ratio); our analysis improves the bound to 15. In support of the conjecture that the random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.
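The equal-revenue claim can be checked exactly for two bidders by enumerating every random split (a sketch; the bid values are illustrative):

```python
from itertools import product

def rsop_expected_revenue(bids):
    """Exact expected revenue of the Random Sampling Optimal Price auction,
    enumerating all 2^n ways to split bidders into two halves; each half's
    optimal single price is offered to the bidders of the other half."""
    def opt_price(group):
        return max(group, default=float("inf"),
                   key=lambda p: p * sum(1 for v in group if v >= p))
    total = 0.0
    for mask in product([0, 1], repeat=len(bids)):
        a = [v for v, m in zip(bids, mask) if m == 0]
        b = [v for v, m in zip(bids, mask) if m == 1]
        pa, pb = opt_price(a), opt_price(b)
        total += sum(pa for v in b if v >= pa) + sum(pb for v in a if v >= pb)
    return total / 2 ** len(bids)

# equal-revenue input: either bid, used as a sale price, earns revenue 2
print(rsop_expected_revenue([2, 1]))  # 0.5 -- exactly a factor of 4 below OPT = 2
```

Half the time both bidders land on the same side and nothing sells; otherwise only the high bidder buys, at the low bidder's price.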
An Optimal Lower Bound for Anonymous Scheduling Mechanisms
Abstract

Cited by 24 (2 self)
We consider the problem of designing truthful mechanisms to minimize the makespan on m unrelated machines. In their seminal paper, Nisan and Ronen [14] showed a lower bound of 2, and an upper bound of m, thus leaving a large gap. They conjectured that their upper bound is tight, but were unable to prove it. Despite many attempts that yield positive results for several special cases, the conjecture is far from being solved: the lower bound was only recently slightly increased to 2.61 [5, 10], while the best upper bound remained unchanged. In this paper we show the optimal lower bound on truthful anonymous mechanisms: no such mechanism can guarantee an approximation ratio better than m. This is the first concrete evidence for the correctness of the Nisan-Ronen conjecture, especially given that the classic scheduling algorithms are anonymous, and all state-of-the-art mechanisms for special cases of the problem are anonymous as well.
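For context, the upper bound of m in [14] is achieved by VCG applied to the total-work objective, i.e., assigning each job to the machine that processes it fastest. A sketch of that allocation rule on a made-up instance:

```python
def min_work_allocation(times):
    """times[i][j]: processing time of job j on machine i.
    Each job goes to its fastest machine (the welfare-maximizing
    allocation, hence implementable truthfully with VCG payments);
    the resulting makespan is at most m times optimal."""
    m, n = len(times), len(times[0])
    alloc = [min(range(m), key=lambda i: times[i][j]) for j in range(n)]
    loads = [sum(times[i][j] for j in range(n) if alloc[j] == i)
             for i in range(m)]
    return alloc, max(loads)

# hypothetical 2-machine, 3-job instance
times = [[1, 1, 4],
         [2, 3, 1]]
print(min_work_allocation(times))  # ([0, 0, 1], 2)
```

The m-factor loss arises because minimizing total work can pile many jobs onto one fast machine; the lower bound here says anonymous truthful mechanisms cannot do better.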