Results 1–10 of 20
Bayesian Algorithmic Mechanism Design
, 2010
"... The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatib ..."
Abstract

Cited by 30 (10 self)
 Add to MetaCart
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive-compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive-compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truth-telling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive-compatible mechanism with essentially the same approximation factor.
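The abstract's single-parameter setting has a concrete payment structure worth illustrating. A minimal sketch (not the paper's reduction): once the *interim* allocation rule x_i(v), the probability agent i wins when reporting v with the other values drawn from the prior, is monotone in v, the Bayesian analogue of Myerson's payment identity pins down truth-inducing expected payments. The discrete value grid, uniform prior, and toy "highest bid wins" black box below are all illustrative assumptions.

```python
import random

# Discrete payment identity (uniform grid, spacing dv = 1):
#     p(v) = v * x(v) - sum_{v' < v} x(v') * dv
# VALUES and black_box_alloc are toy assumptions, not from the paper.

VALUES = [1, 2, 3, 4, 5]          # discrete value grid, uniform prior

def black_box_alloc(bids, i):
    """Toy black-box algorithm: the (first) highest bidder wins."""
    return 1.0 if i == bids.index(max(bids)) else 0.0

def interim_alloc(v, i, n, samples=5000):
    """Monte Carlo estimate of x_i(v) against prior draws of the others."""
    wins = 0.0
    for _ in range(samples):
        bids = [random.choice(VALUES) for _ in range(n)]
        bids[i] = v
        wins += black_box_alloc(bids, i)
    return wins / samples

def bic_payment(v, i, n):
    """Expected payment from the discrete payment identity above."""
    x = {w: interim_alloc(w, i, n) for w in VALUES if w <= v}
    return v * x[v] - sum(x[w] for w in VALUES if w < v)
```

With two agents, the estimated interim rule is monotone (roughly 0.2, 0.4, ..., 1.0 on this grid), and the payment for the top value comes out near 3.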
Bayesian combinatorial auctions: Expanding single buyer mechanisms to many buyers
 In FOCS, 512–521
"... • Bronze Medal, 13th International Olympiad in Informatics, Tampere, Finland, ..."
Abstract

Cited by 19 (2 self)
 Add to MetaCart
• Bronze Medal, 13th International Olympiad in Informatics, Tampere, Finland,
Bayesian Incentive Compatibility via Fractional Assignments
"... Very recently, Hartline and Lucier [14] studied singleparameter mechanism design problems in the Bayesian setting. They proposed a blackbox reduction that converted Bayesian approximation algorithms into BayesianIncentiveCompatible (BIC) mechanisms while preserving social welfare. It remains a ma ..."
Abstract

Cited by 13 (3 self)
 Add to MetaCart
(Show Context)
Very recently, Hartline and Lucier [14] studied single-parameter mechanism design problems in the Bayesian setting. They proposed a black-box reduction that converts Bayesian approximation algorithms into Bayesian-incentive-compatible (BIC) mechanisms while preserving social welfare. It remains a major open question whether one can find a similar reduction in the more important multi-parameter setting. In this paper, we give a positive answer to this question when the prior distribution has finite and small support. We propose a black-box reduction for designing BIC multi-parameter mechanisms. The reduction converts any algorithm into an ε-BIC mechanism with only marginal loss in social welfare. As a result, for combinatorial auctions with subadditive agents we get an ε-BIC mechanism that achieves a constant approximation.
Bayesian Optimal Auctions via Multi- to Single-agent Reduction
, 1203
"... We study an abstract optimal auction problem for a single good or service. This problem includes environments where agents have budgets, risk preferences, or multidimensional preferences over several possible configurations of the good (furthermore, it allows an agent’s budget and risk preference t ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
We study an abstract optimal auction problem for a single good or service. This problem includes environments where agents have budgets, risk preferences, or multi-dimensional preferences over several possible configurations of the good (furthermore, it allows an agent’s budget and risk preference to be known only privately to the agent). These are the main challenge areas for auction theory. A single-agent problem is to optimize a given objective subject to a constraint on the maximum probability with which each type is allocated, a.k.a., an allocation rule. Our approach is a reduction from the multi-agent mechanism design problem to a collection of single-agent problems. We focus on maximizing revenue, but our results can be applied to other objectives (e.g., welfare). An optimal multi-agent mechanism can be computed by a linear/convex program on interim allocation rules, by simultaneously optimizing several single-agent mechanisms subject to joint feasibility of the allocation rules. For single-unit auctions, Border (1991) showed that the space of all jointly feasible interim allocation rules for n agents is a D-dimensional convex polytope which can be specified by 2^D linear constraints, where D is the total number of all agents’ types.
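Border's feasibility condition mentioned above can be checked directly at toy scale. A brute-force sketch, assuming a single-item auction with a discrete type space per agent: an interim rule x[i][t] is jointly feasible iff, for every choice of a type-subset S_i per agent, the expected probability the item goes to a type in S never exceeds the probability that some agent actually has a type in S. All numbers in the usage below are illustrative assumptions.

```python
from itertools import chain, combinations, product

# Border constraint, one per tuple of type-subsets (S_1, ..., S_n):
#     sum_i sum_{t in S_i} f_i(t) * x[i][t]  <=  1 - prod_i (1 - f_i(S_i))
# Exponentially many constraints -- this brute force is for toy instances.

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def border_feasible(types, f, x, tol=1e-9):
    """types[i]: agent i's type space; f[i][t]: prior; x[i][t]: interim rule."""
    n = len(types)
    for subsets in product(*(list(powerset(types[i])) for i in range(n))):
        lhs = sum(f[i][t] * x[i][t] for i in range(n) for t in subsets[i])
        none_in_s = 1.0
        for i in range(n):
            none_in_s *= 1.0 - sum(f[i][t] for t in subsets[i])
        if lhs > 1.0 - none_in_s + tol:
            return False
    return True
```

For two symmetric agents with 'lo'/'hi' types (each with probability 1/2), the highest-type-wins rule gives interim probabilities 3/4 at 'hi' and 1/4 at 'lo' and passes every constraint, while the rule that always allocates to every type fails.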
When LP is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings (Extended Abstract)
"... Abstract. Consider a random graph model where each possible edge e is present independently with some probability pe. We are given these numbers pe, and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
(Show Context)
Consider a random graph model where each possible edge e is present independently with some probability p_e. We are given these numbers p_e, and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most t_i times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems. Our main results are:
• We give a 5.75-approximation for weighted stochastic matching on general graphs, and a 5-approximation on bipartite graphs. This answers an open question from [Chen et al. ICALP 09].
• Combining our LP-rounding algorithm with the natural greedy algorithm, we give an improved 3.88-approximation for unweighted stochastic matching on general graphs and a 3.51-approximation on bipartite graphs.
• We introduce a generalization of the stochastic online matching problem [Feldman et al. FOCS 09] that also models preference-uncertainty and timeouts of buyers, and give a constant-factor approximation algorithm.
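The query model above can be made concrete with a simple greedy baseline, which is NOT the paper's LP-rounding algorithm; it only illustrates the rules of the game: probing an edge reveals it, a present edge must be committed to the matching, and each endpoint has a patience budget. The edge format and function names are assumptions.

```python
import random

# Greedy baseline: probe edges in decreasing order of expected weight
# w_e * p_e, respecting matched vertices and per-vertex patience t_i.

def greedy_stochastic_matching(edges, patience, rng=random.random):
    """edges: list of (u, v, p_e, w_e); patience: dict vertex -> t_i."""
    matched = set()
    t = dict(patience)
    weight = 0.0
    for u, v, p, w in sorted(edges, key=lambda e: -e[2] * e[3]):
        if u in matched or v in matched or t[u] <= 0 or t[v] <= 0:
            continue
        t[u] -= 1           # probing consumes one query at each endpoint
        t[v] -= 1
        if rng() < p:       # edge is present: we are forced to take it
            matched |= {u, v}
            weight += w
    return weight, matched
```

Injecting a deterministic `rng` makes the behavior easy to check: with all p_e = 1 the heaviest edge is probed first and committed, blocking its neighbors.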
Approximation Schemes for Sequential Posted Pricing in Multi-Unit Auctions
, 2010
"... We design algorithms for computing approximately revenuemaximizing sequential postedpricing mechanisms (SPM) in Kunit auctions, in a standard Bayesian model. A seller has K copies of an item to sell, and there are n buyers, each interested in only one copy, who have some value for the item. The se ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
We design algorithms for computing approximately revenue-maximizing sequential posted-pricing mechanisms (SPMs) in K-unit auctions, in a standard Bayesian model. A seller has K copies of an item to sell, and there are n buyers, each interested in only one copy, who have some value for the item. The seller must post a price for each buyer; the buyers arrive in a sequence enforced by the seller, and a buyer buys the item if its value exceeds the price posted to it. The seller does not know the values of the buyers, but has Bayesian information about them. An SPM specifies the ordering of buyers and the posted prices, and may be adaptive or non-adaptive in its behavior. The goal is to design an SPM in polynomial time to maximize expected revenue. We compare against the expected revenue of the optimal SPM, and provide a polynomial-time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This is achieved by two algorithms: an efficient algorithm that gives a (1 − 1/√(2πK))-approximation (and hence a PTAS for sufficiently large K), and another that is a PTAS for constant K. The first algorithm yields a non-adaptive SPM that achieves its approximation guarantee against an optimal adaptive SPM; this implies that the adaptivity gap in SPMs vanishes as K becomes large.
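The PTAS itself is involved, but the objective it optimizes is easy to state in code. A sketch of how a *given* non-adaptive SPM, an ordering of the buyers plus one posted price per buyer, is evaluated exactly under a discrete Bayesian prior; the distributions and prices in the usage are illustrative assumptions.

```python
from itertools import product

# Exact expected revenue of a non-adaptive SPM for K units: enumerate all
# joint value profiles under the (discrete, independent) prior, run the
# buyers through in order, and sell while copies remain.

def spm_revenue(dists, order, prices, K):
    """dists[b]: list of (value, probability) pairs for buyer b;
    buyer b buys at prices[b] iff its value exceeds the price and a
    copy remains."""
    total = 0.0
    supports = [dists[b] for b in order]
    for outcome in product(*supports):          # joint value profiles
        prob = 1.0
        for _, p in outcome:
            prob *= p
        left, rev = K, 0.0
        for b, (value, _) in zip(order, outcome):
            if left > 0 and value > prices[b]:
                left -= 1
                rev += prices[b]
        total += prob * rev
    return total
```

With two buyers (one deterministic value 2, one valued 1 or 3 with equal probability) and prices 1.5 and 2.5, one unit yields expected revenue 1.5, and a second unit adds the 0.5 · 2.5 sale to the second buyer.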
Bayesian Mechanism Design for Budget-Constrained Agents
, 2011
"... We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent’s utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This d ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent’s utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This discontinuity in the utility function presents a significant challenge in the design of good mechanisms, and classical “unconstrained” mechanisms fail to work in settings with budgets. The goal of this paper is to develop general reductions from budget-constrained Bayesian mechanism design to unconstrained Bayesian mechanism design with small loss in performance. We consider this question in the context of the two most well-studied objectives in mechanism design, social welfare and revenue, and present constant-factor approximations in a number of settings. Some of our results extend to settings where budgets are private and agents need to be incentivized to reveal them truthfully.
Prior-independent multi-parameter mechanism design
 In Workshop on Internet and Network Economics (WINE), 2011
"... Abstract. In a unitdemand multiunit multiitem auction, an auctioneer is selling a collection of different items to a set of agents each interested in buying at most unit. Each agent has a different private value for each of the items. We consider the problem of designing a truthful auction that m ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
In a unit-demand multi-unit multi-item auction, an auctioneer is selling a collection of different items to a set of agents, each interested in buying at most one unit. Each agent has a different private value for each of the items. We consider the problem of designing a truthful auction that maximizes the auctioneer’s profit in this setting. Previously, there has been progress on this problem in the setting in which each value is drawn from a known prior distribution. Specifically, it has been shown how to design auctions tailored to these priors that achieve a constant-factor approximation ratio [2, 5]. In this paper, we present a prior-independent auction for this setting. This auction is guaranteed to achieve a constant fraction of the optimal expected profit for a large class of so-called “regular” distributions, without specific knowledge of the distributions.
Optimal Pricing Is Hard
, 2012
"... We show that computing the revenueoptimal deterministic auction in unitdemand singlebuyer Bayesian settings, i.e. the optimal itempricing, is computationally hard even in singleitem settings where the buyer’s value distribution is a sum of independently distributed attributes, or multiitem se ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
We show that computing the revenue-optimal deterministic auction in unit-demand single-buyer Bayesian settings, i.e. the optimal item-pricing, is computationally hard even in single-item settings where the buyer’s value distribution is a sum of independently distributed attributes, or multi-item settings where the buyer’s values for the items are independent. We also show that it is intractable to optimally price the grand bundle of multiple items for an additive bidder whose values for the items are independent. These difficulties stem from implicit definitions of a value distribution. We provide three instances of how different properties of implicit distributions can lead to intractability: the first is a #P-hardness proof, while the remaining two are reductions from the SQRT-SUM problem of Garey, Graham, and Johnson [14]. While simple pricing schemes can oftentimes approximate the best scheme in revenue, they can have drastically different underlying structure. We argue, therefore, that either the specification of the input distribution must be highly restricted in format, or the goal must be mere approximation to the optimal scheme’s revenue instead of computing properties of the scheme itself.
Mechanisms and Allocations with Positive Network Externalities
, 2012
"... With the advent of social networks such as Facebook and LinkedIn, and online offers/deals web sites, network externalties raise the possibility of marketing and advertising to users based on influence they derive from their neighbors in such networks. Indeed, a user’s knowledge of which of his neigh ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
With the advent of social networks such as Facebook and LinkedIn, and online offers/deals web sites, network externalities raise the possibility of marketing and advertising to users based on the influence they derive from their neighbors in such networks. Indeed, a user’s knowledge of which of his neighbors “liked” the product changes his valuation for the product. Much of the work on mechanism design under network externalities has addressed the setting where there is only one product. We consider a more natural setting with multiple competing products, where each node in the network is a unit-demand agent. We first consider the problem of welfare maximization under various types of externality functions. Specifically, we get an O(log n log(nm))-approximation for concave externality functions, a 2^O(d)-approximation for convex externality functions that are bounded above by a polynomial of degree d, and an O(log^3 n)-approximation when the externality function is submodular. Our techniques involve formulating non-trivial linear relaxations in each case, and developing novel rounding schemes that yield bounds vastly superior to those obtainable by directly applying results from combinatorial welfare maximization. We then consider the problem of Nash equilibrium, where each node in the network is a player whose strategy space corresponds to selecting an item. We develop a tight characterization of the conditions under which a Nash equilibrium exists in this game. Lastly, we consider the question of pricing and revenue optimization.