Results 1–10 of 20
Bayesian Algorithmic Mechanism Design
, 2010
"... The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatib ..."
Abstract

Cited by 35 (10 self)
 Add to MetaCart
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truth-telling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
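A useful way to make the single-parameter BIC notion above concrete: by Myerson's characterization, an allocation rule admits BIC payments exactly when each agent's *interim* allocation (win probability as a function of the report, averaged over the other agents' values) is monotone non-decreasing. The sketch below, with an illustrative two-agent setting and a hypothetical `highest_wins` rule, just checks that monotonicity; it is not the paper's reduction.

```python
from fractions import Fraction

# Illustrative check of Myerson monotonicity: a single-parameter rule can be
# made BIC (with suitable payments) iff the interim allocation is monotone.
# Setting (an assumption for illustration): two agents, the opponent's value
# uniform on {0.1, ..., 1.0}.

values = [Fraction(k, 10) for k in range(1, 11)]

def highest_wins(my_value, other_value):
    """Allocate to the higher report; split ties evenly."""
    if my_value > other_value:
        return Fraction(1)
    if my_value == other_value:
        return Fraction(1, 2)
    return Fraction(0)

def interim_alloc(report, rule):
    """Win probability of `report` against one uniformly drawn opponent."""
    return sum(rule(report, other) for other in values) / len(values)

x = [interim_alloc(v, highest_wins) for v in values]
monotone = all(a <= b for a, b in zip(x, x[1:]))
print(monotone)  # True: the interim rule is monotone, so BIC payments exist
```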
Bayesian combinatorial auctions: Expanding single buyer mechanisms to many buyers
 In FOCS. 512–521
"... • Bronze Medal, 13th International Olympiad in Informatics, Tampere, Finland, ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
• Bronze Medal, 13th International Olympiad in Informatics, Tampere, Finland,
When LP is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings (Extended Abstract)
"... Abstract. Consider a random graph model where each possible edge e is present independently with some probability pe. We are given these numbers pe, and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to ..."
Abstract

Cited by 18 (3 self)
 Add to MetaCart
(Show Context)
Abstract. Consider a random graph model where each possible edge e is present independently with some probability p_e. We are given these numbers p_e, and want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most t_i times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems. Our main results are:
• We give a 5.75-approximation for weighted stochastic matching on general graphs, and a 5-approximation on bipartite graphs. This answers an open question from [Chen et al. ICALP 09].
• Combining our LP-rounding algorithm with the natural greedy algorithm, we give an improved 3.88-approximation for unweighted stochastic matching on general graphs and a 3.51-approximation on bipartite graphs.
• We introduce a generalization of the stochastic online matching problem [Feldman et al. FOCS 09] that also models preference uncertainty and timeouts of buyers, and give a constant-factor approximation algorithm.
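The probe-commit model described above can be sketched as follows. This is a naive greedy policy on a made-up instance (not the paper's LP-rounding algorithm): probe edges in decreasing w_e·p_e order, respect each vertex's patience t_i, and irrevocably commit every edge that turns out to be present.

```python
import random

# Naive greedy sketch of the probe-commit model (illustration only, not the
# paper's LP-rounding algorithm): probe edges by decreasing weight*probability;
# a probed edge that is present MUST be added to the matching, and vertex i
# may be probed at most t_i times.

def greedy_probe(edges, patience, rng):
    """edges: list of (u, v, weight, prob); patience: dict vertex -> t_i."""
    probes_left = dict(patience)
    matched = set()
    matching = []
    for u, v, w, p in sorted(edges, key=lambda e: e[2] * e[3], reverse=True):
        if u in matched or v in matched:
            continue                    # endpoints already committed
        if probes_left[u] <= 0 or probes_left[v] <= 0:
            continue                    # patience exhausted
        probes_left[u] -= 1
        probes_left[v] -= 1
        if rng.random() < p:            # edge is present: commit it
            matched.update((u, v))
            matching.append((u, v, w))
    return matching

rng = random.Random(0)
edges = [("a", "b", 3.0, 0.5), ("b", "c", 2.0, 0.9), ("a", "c", 1.0, 0.8)]
m = greedy_probe(edges, {"a": 1, "b": 2, "c": 1}, rng)
```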
Bayesian Optimal Auctions via Multi to Singleagent Reduction
, 2012
"... We study an abstract optimal auction problem for a single good or service. This problem includes environments where agents have budgets, risk preferences, or multidimensional preferences over several possible configurations of the good (furthermore, it allows an agent’s budget and risk preference t ..."
Abstract

Cited by 15 (4 self)
 Add to MetaCart
We study an abstract optimal auction problem for a single good or service. This problem includes environments where agents have budgets, risk preferences, or multi-dimensional preferences over several possible configurations of the good (furthermore, it allows an agent’s budget and risk preference to be known only privately to the agent). These are the main challenge areas for auction theory. A single-agent problem is to optimize a given objective subject to a constraint on the maximum probability with which each type is allocated, a.k.a., an allocation rule. Our approach is a reduction from the multi-agent mechanism design problem to a collection of single-agent problems. We focus on maximizing revenue, but our results can be applied to other objectives (e.g., welfare). An optimal multi-agent mechanism can be computed by a linear/convex program on interim allocation rules by simultaneously optimizing several single-agent mechanisms subject to joint feasibility of the allocation rules. For single-unit auctions, Border (1991) showed that the space of all jointly feasible interim allocation rules for n agents is a D-dimensional convex polytope which can be specified by 2^D linear constraints, where D is the total number of all agents’ types.
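Border's feasibility condition mentioned above can be checked by brute force in tiny symmetric instances. The sketch below (an illustration under simplifying assumptions: i.i.d. agents, a single unit, finite types) tests, for every set S of types, that n·Σ_{t∈S} f(t)·x(t) ≤ 1 − (1 − f(S))^n.

```python
from itertools import combinations

# Brute-force check of Border's condition (symmetric i.i.d. single-unit case):
# an interim rule x is jointly feasible iff for every set S of types,
#   n * sum_{t in S} f(t) * x(t)  <=  1 - (1 - f(S))^n.
# `f` maps type -> probability, `x` maps type -> interim allocation.

def border_feasible(f, x, n, tol=1e-9):
    types = list(f)
    for r in range(1, len(types) + 1):
        for S in combinations(types, r):
            lhs = n * sum(f[t] * x[t] for t in S)
            fS = sum(f[t] for t in S)
            if lhs > 1 - (1 - fS) ** n + tol:
                return False
    return True

# Two i.i.d. agents with types {1, 2}, each with probability 1/2.
f = {1: 0.5, 2: 0.5}
# "Highest type wins, ties split" induces x(1) = 1/4, x(2) = 3/4: feasible.
feasible = border_feasible(f, {1: 0.25, 2: 0.75}, n=2)
# Allocating to every type with probability 1 is infeasible for two agents.
infeasible = border_feasible(f, {1: 1.0, 2: 1.0}, n=2)
```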
Bayesian Incentive Compatibility via Fractional Assignments
"... Very recently, Hartline and Lucier [14] studied singleparameter mechanism design problems in the Bayesian setting. They proposed a blackbox reduction that converted Bayesian approximation algorithms into BayesianIncentiveCompatible (BIC) mechanisms while preserving social welfare. It remains a ma ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
(Show Context)
Very recently, Hartline and Lucier [14] studied single-parameter mechanism design problems in the Bayesian setting. They proposed a black-box reduction that converts Bayesian approximation algorithms into Bayesian-incentive-compatible (BIC) mechanisms while preserving social welfare. It remains a major open question whether one can find a similar reduction in the more important multi-parameter setting. In this paper, we give a positive answer to this question when the prior distribution has finite and small support. We propose a black-box reduction for designing BIC multi-parameter mechanisms. The reduction converts any algorithm into an ε-BIC mechanism with only marginal loss in social welfare. As a result, for combinatorial auctions with subadditive agents we get an ε-BIC mechanism that achieves a constant approximation.
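The "ε" in ε-BIC is itself easy to compute on a finite type space: it is the largest interim utility any type can gain by misreporting, with the opponent drawn from the prior. The sketch below (an illustrative computation, not the paper's reduction) evaluates this regret for a two-agent second-price auction, where it comes out to exactly 0, i.e. the mechanism is exactly BIC.

```python
from fractions import Fraction
from itertools import product

# Compute the epsilon of epsilon-BIC on a finite type space: the maximum
# interim gain from misreporting. Illustrative setting (an assumption): two
# agents, values uniform on {1, 2, 3}, second-price auction.

types = [Fraction(1), Fraction(2), Fraction(3)]

def outcome(bid, other):
    """Second-price auction vs one opponent; returns (alloc prob, payment)."""
    if bid > other:
        return Fraction(1), other
    if bid == other:
        return Fraction(1, 2), other / 2   # split the tie, pay half in expectation
    return Fraction(0), Fraction(0)

def interim_utility(value, report):
    u = Fraction(0)
    for other in types:
        alloc, pay = outcome(report, other)
        u += value * alloc - pay
    return u / len(types)

eps = max(interim_utility(v, r) - interim_utility(v, v)
          for v, r in product(types, repeat=2))
print(eps)  # 0: truthful reporting is an interim best response
```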
Prior-independent multi-parameter mechanism design
 In Workshop on Internet and Network Economics (WINE)
, 2011
"... Abstract. In a unitdemand multiunit multiitem auction, an auctioneer is selling a collection of different items to a set of agents each interested in buying at most unit. Each agent has a different private value for each of the items. We consider the problem of designing a truthful auction that m ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
(Show Context)
Abstract. In a unit-demand multi-unit multi-item auction, an auctioneer is selling a collection of different items to a set of agents, each interested in buying at most one unit. Each agent has a different private value for each of the items. We consider the problem of designing a truthful auction that maximizes the auctioneer’s profit in this setting. Previously, there has been progress on this problem in the setting in which each value is drawn from a known prior distribution. Specifically, it has been shown how to design auctions tailored to these priors that achieve a constant factor approximation ratio [2, 5]. In this paper, we present a prior-independent auction for this setting. This auction is guaranteed to achieve a constant fraction of the optimal expected profit for a large class of, so called, “regular” distributions, without specific knowledge of the distributions.
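The flavor of prior-independence can be seen already in the simplest single-item case (this is only an illustration of the idea, not the paper's unit-demand auction): run a second-price auction between two bidders with the reserve set to a third bidder's value. The mechanism never inspects the distribution; for regular distributions, such "single-sample" reserves are known to extract a constant fraction of the optimal revenue.

```python
from fractions import Fraction
from itertools import product

# Prior-independent illustration: second-price auction between bidders 1 and 2
# with reserve = bidder 3's value. Values are assumed i.i.d. uniform on
# {1, 2, 3} only so the expectation can be computed exactly by enumeration.

support = [Fraction(1), Fraction(2), Fraction(3)]

def revenue(v1, v2, reserve):
    hi, lo = max(v1, v2), min(v1, v2)
    if hi < reserve:
        return Fraction(0)          # reserve not met: no sale
    return max(lo, reserve)         # winner pays max(second bid, reserve)

expected = sum(revenue(v1, v2, v3)
               for v1, v2, v3 in product(support, repeat=3))
expected /= Fraction(len(support) ** 3)
print(expected)  # 46/27, computed without ever using the prior inside the auction
```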
Approximation Schemes for Sequential Posted Pricing in Multi-Unit Auctions
, 2010
"... We design algorithms for computing approximately revenuemaximizing sequential postedpricing mechanisms (SPM) in Kunit auctions, in a standard Bayesian model. A seller has K copies of an item to sell, and there are n buyers, each interested in only one copy, who have some value for the item. The se ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
We design algorithms for computing approximately revenue-maximizing sequential posted-pricing mechanisms (SPMs) in K-unit auctions, in a standard Bayesian model. A seller has K copies of an item to sell, and there are n buyers, each interested in only one copy, who have some value for the item. The seller must post a price for each buyer, the buyers arrive in a sequence enforced by the seller, and a buyer buys the item if its value exceeds the price posted to it. The seller does not know the values of the buyers, but has Bayesian information about them. An SPM specifies the ordering of buyers and the posted prices, and may be adaptive or non-adaptive in its behavior. The goal is to design an SPM in polynomial time to maximize expected revenue. We compare against the expected revenue of the optimal SPM, and provide a polynomial time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This is achieved by two algorithms: an efficient algorithm that gives a (1 − 1/√(2πK))-approximation (and hence a PTAS for sufficiently large K), and another that is a PTAS for constant K. The first algorithm yields a non-adaptive SPM that achieves its approximation guarantee against an optimal adaptive SPM; this implies that the adaptivity gap in SPMs vanishes as K becomes larger.
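For a *fixed* non-adaptive SPM in the model above, expected revenue is computable exactly by a small dynamic program over the number of copies remaining. The sketch below does only this evaluation step (the paper's algorithms search over prices and orderings, which this does not attempt); the instance numbers are illustrative assumptions.

```python
# Exact expected revenue of a fixed non-adaptive SPM in a K-unit auction:
# buyers arrive in the given order; buyer i accepts price p_i with probability
# q_i = Pr[v_i >= p_i], and can buy only while copies remain.

def spm_revenue(offers, K):
    """offers: list of (price, accept_prob) in arrival order; K copies."""
    # dist[c] = probability that c copies remain before the current buyer.
    dist = [0.0] * (K + 1)
    dist[K] = 1.0
    revenue = 0.0
    for price, q in offers:
        new = [0.0] * (K + 1)
        for c, pr in enumerate(dist):
            if pr == 0.0:
                continue
            if c == 0:
                new[0] += pr            # sold out: no further sales
                continue
            revenue += pr * q * price   # sale occurs with probability q
            new[c - 1] += pr * q
            new[c] += pr * (1 - q)
        dist = new
    return revenue

# One copy, two buyers: revenue = q1*p1 + (1-q1)*q2*p2 = 6.5 here.
r1 = spm_revenue([(10.0, 0.5), (6.0, 0.5)], K=1)
# Two copies, same buyers: never sold out, so revenue = q1*p1 + q2*p2 = 8.0.
r2 = spm_revenue([(10.0, 0.5), (6.0, 0.5)], K=2)
```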
The Complexity of Optimal Mechanism Design
, 2012
"... Myerson’s seminal work provides a computationally efficient revenueoptimal auction for selling one item to multiple bidders [17]. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly und ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
(Show Context)
Myerson’s seminal work provides a computationally efficient revenue-optimal auction for selling one item to multiple bidders [17]. Generalizing this work to selling multiple items at once has been a central question in economics and algorithmic game theory, but its complexity has remained poorly understood. We answer this question by showing that a revenue-optimal auction in multi-item settings cannot be found and implemented computationally efficiently, unless ZPP ⊇ P^{#P}. This is true even for a single additive bidder whose values for the items are independently distributed on two rational numbers with rational probabilities. Our result is very general: we show that it is hard to compute any encoding of an optimal auction of any format (direct or indirect, truthful or non-truthful) that can be implemented in expected polynomial time. In particular, under well-believed complexity-theoretic assumptions, revenue optimization in very simple multi-item settings can only be tractably approximated. We note that our hardness result applies to randomized mechanisms in a very simple setting, and is not an artifact of introducing combinatorial structure to the problem by allowing correlation among item values, introducing combinatorial valuations, or requiring the mechanism to be deterministic (whose structure is readily combinatorial). Our proof is enabled by a flow interpretation of the solutions of an exponential-size linear program for revenue maximization with an additional supermodularity constraint.
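A sketch that contrasts with the hardness result above: for a single additive bidder over two items with a tiny discrete prior, *deterministic* menus (one price per item plus a bundle price) are trivial to enumerate, whereas the truly optimal mechanism may require randomized lottery menus, which is where the hardness lives. All numbers and names below are illustrative assumptions, not from the paper.

```python
from fractions import Fraction
from itertools import product

# Enumerate deterministic menus for one additive bidder, two items, values
# i.i.d. uniform on {1, 2}. A menu is (item price q1, item price q2, bundle
# price b); the bidder picks the utility-maximizing option, ties broken in
# the seller's favor. This finds the best DETERMINISTIC menu only.

INF = Fraction(10**6)                      # stands in for "not offered"
values = [Fraction(1), Fraction(2)]

def menu_revenue(q1, q2, b):
    total = Fraction(0)
    for v1, v2 in product(values, repeat=2):
        options = [(Fraction(0), Fraction(0)),          # buy nothing
                   (v1 - q1, q1), (v2 - q2, q2),        # single items
                   (v1 + v2 - b, b)]                    # the bundle
        best_u = max(u for u, _ in options)
        pay = max(p for u, p in options if u == best_u)  # seller-favoring tie
        total += pay
    return total / 4

prices = values + [INF]
bundles = [Fraction(2), Fraction(3), Fraction(4), INF]
best = max(menu_revenue(q1, q2, b)
           for q1, q2, b in product(prices, prices, bundles))
print(best)  # 9/4, achieved e.g. by pricing the bundle at 3
```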
A stochastic probing problem with applications
 In Proc. of 16th IPCO. Forthcoming
, 2013
"... Abstract. We study a general stochastic probing problem defined on a universe V, where each element e ∈ V is “active ” independently with probability pe. Elementshaveweights{we: e ∈ V} and the goal is to maximize the weight of a chosen subset S of active elements. However, we are given only the pe v ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
(Show Context)
Abstract. We study a general stochastic probing problem defined on a universe V, where each element e ∈ V is “active” independently with probability p_e. Elements have weights {w_e : e ∈ V} and the goal is to maximize the weight of a chosen subset S of active elements. However, we are given only the p_e values—to determine whether or not an element e is active, our algorithm must probe e. If element e is probed and happens to be active, then e must irrevocably be added to the chosen set S; if e is not active then it is not included in S. Moreover, the following conditions must hold in every random instantiation:
– the set Q of probed elements satisfies an “outer” packing constraint,
– the set S of chosen elements satisfies an “inner” packing constraint.
The kinds of packing constraints we consider are intersections of matroids and knapsacks. Our results provide a simple and unified view of results in stochastic matching [1, 2] and Bayesian mechanism design [3], and can also handle more general constraints. As an application, we obtain the first polynomial-time Ω(1/k)-approximate “Sequential Posted Price Mechanism” under k-matroid intersection feasibility constraints, improving on prior work [3–5].
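A toy instantiation of the probing model above (an assumption-laden sketch, not the paper's algorithm): take the simplest uniform-matroid constraints on both sides, i.e. an outer constraint of at most Q probes and an inner constraint of at most k chosen elements, and probe greedily by w_e·p_e, committing every active probe.

```python
import random

# Toy probe-commit instance: outer constraint = at most Q probes total,
# inner constraint = at most k chosen elements (both uniform matroids).
# An element that is probed and found active MUST be added to S.

def probe_greedy(elements, Q, k, rng):
    """elements: list of (name, weight, prob); probe by decreasing w*p."""
    probed, chosen = [], []
    for name, w, p in sorted(elements, key=lambda e: e[1] * e[2], reverse=True):
        if len(probed) >= Q or len(chosen) >= k:
            break                       # a packing constraint is binding
        probed.append(name)
        if rng.random() < p:            # active: irrevocably added to S
            chosen.append((name, w))
    return probed, chosen

rng = random.Random(1)
elems = [("a", 5.0, 0.2), ("b", 3.0, 0.9), ("c", 2.0, 0.7), ("d", 1.0, 0.9)]
probed, chosen = probe_greedy(elems, Q=3, k=1, rng=rng)
```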
Bayesian Mechanism Design for Budget-Constrained Agents
, 2011
"... We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent’s utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This d ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent’s utility for an outcome is given by his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This discontinuity in the utility function presents a significant challenge in the design of good mechanisms, and classical “unconstrained” mechanisms fail to work in settings with budgets. The goal of this paper is to develop general reductions from budget-constrained Bayesian mechanism design to unconstrained Bayesian mechanism design with small loss in performance. We consider this question in the context of the two most well-studied objectives in mechanism design—social welfare and revenue—and present constant factor approximations in a number of settings. Some of our results extend to settings where budgets are private and agents need to be incentivized to reveal them truthfully.
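The discontinuous utility model above can be sketched in a few lines; the numbers are illustrative assumptions. It shows concretely why an "unconstrained" revenue-optimal posted price can collapse once budgets bind: a buyer never accepts a price above the budget, no matter how high the value.

```python
# Budgeted quasilinear utility from the abstract: a payment above the budget
# yields utility -infinity, so it is never accepted. Illustrative numbers.

NEG_INF = float("-inf")

def utility(value, payment, budget):
    return value - payment if payment <= budget else NEG_INF

def accepts(value, price, budget):
    """Posted-price best response: buy iff it beats the outside option 0."""
    return utility(value, price, budget) >= 0

# A buyer with value 100 but budget 10: the unconstrained-optimal price 100
# sells nothing, while any price up to the budget sells.
sold_at_value = accepts(100, 100, budget=10)    # False
sold_at_budget = accepts(100, 10, budget=10)    # True
```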