Results 11–20 of 124
Market Sharing Games Applied to Content Distribution in Ad-Hoc Networks
 MobiHoc '04
, 2004
Competition and Efficiency in Congested Markets
Abstract

Cited by 65 (9 self)
We study the efficiency of oligopoly equilibria in congested markets. The motivating examples are the allocation of network flows in a communication network or of traffic in a transportation network. We show that increasing competition among oligopolists can reduce efficiency, measured as the difference between users' willingness to pay and delay costs. We characterize a tight bound of 5/6 on efficiency in pure strategy equilibria when there is zero latency at zero flow, and a tight bound of 2√2 − 2 with positive latency at zero flow. These bounds remain tight even when the numbers of routes and oligopolists are arbitrarily large.
Tight approximation algorithms for maximum general assignment problems
 Proc. of ACM-SIAM SODA
, 2006
Abstract

Cited by 63 (7 self)
A separable assignment problem (SAP) is defined by a set of bins and a set of items to pack in each bin; a value f_ij for assigning item j to bin i; and a separate packing constraint for each bin, i.e., for bin i, a family I_i of subsets of items that fit in bin i. The goal is to pack items into bins to maximize the aggregate value. This class of problems includes the maximum generalized assignment problem (GAP) and a distributed caching problem (DCP) described in this paper. Given a β-approximation algorithm for finding the highest-value packing of a single bin, we give: 1. a polynomial-time LP-rounding based (1 − 1/e)·β-approximation algorithm; 2. a simple polynomial-time local search (β/(β+1) − ε)-approximation algorithm, for any ε > 0. Therefore, for all examples of SAP that admit an approximation scheme for the single-bin problem, we obtain an LP-based algorithm with a (1 − 1/e − ε)-approximation guarantee and a local search algorithm with a (1/2 − ε)-approximation guarantee. Furthermore, for cases in which the subproblem admits a fully polynomial approximation scheme (such as for GAP), the LP-based analysis can be strengthened to give a guarantee of 1 − 1/e. The best previously known approximation algorithm for GAP is a 1/2-approximation by Shmoys and Tardos, and by Chekuri and Khanna. Our LP algorithm is based on rounding a new linear programming relaxation with a provably better integrality gap. To complement these results, we show that SAP and DCP cannot be approximated within a factor better than 1 − 1/e unless NP ⊆ DTIME(n^O(log log n)), even if there exists a polynomial-time exact algorithm for the single-bin problem.
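The paper's local search works by set swaps through a single-bin oracle; as a rough illustration of the local-improvement idea only, here is a much simpler single-item local search on a toy GAP-style instance (the function name and instance encoding are mine, not the paper's):

```python
def local_search_gap(values, sizes, caps, max_iters=1000):
    """Single-item local search for a toy GAP-style instance.

    values[i][j]: value of placing item j in bin i
    sizes[i][j]:  size of item j in bin i
    caps[i]:      capacity of bin i
    Returns (assign, total) where assign[j] is item j's bin (or None).
    """
    n_bins, n_items = len(caps), len(values[0])
    assign = [None] * n_items
    load = [0.0] * n_bins

    def gain(j, i):
        old = values[assign[j]][j] if assign[j] is not None else 0.0
        return values[i][j] - old

    for _ in range(max_iters):
        best = None  # (gain, item, target bin)
        for j in range(n_items):
            for i in range(n_bins):
                if i == assign[j]:
                    continue
                if load[i] + sizes[i][j] <= caps[i] and gain(j, i) > 1e-12:
                    if best is None or gain(j, i) > best[0]:
                        best = (gain(j, i), j, i)
        if best is None:
            break  # local optimum: no single move improves the total value
        _, j, i = best
        if assign[j] is not None:
            load[assign[j]] -= sizes[assign[j]][j]
        assign[j] = i
        load[i] += sizes[i][j]

    total = sum(values[assign[j]][j] for j in range(n_items)
                if assign[j] is not None)
    return assign, total
```

The paper's set-swap variant with a β-approximate single-bin oracle achieves a (β/(β+1) − ε) fraction of the optimum; this single-item version carries no such guarantee and only conveys the flavor of the local search.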
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis
 in Proc. ACM Symposium on Principles of Distributed Computing (ACM PODC)
, 2004
Abstract

Cited by 62 (2 self)
We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either a cost for replicating resources or a cost for accessing a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme, the game can always implement the social optimum in the best case by giving servers an incentive to replicate.
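As a sketch of the model (my encoding, not the authors'): each node either replicates at cost α or pays the distance to the nearest replica, and iterated best responses search for a pure Nash equilibrium.

```python
def best_response_caching(dist, alpha, max_rounds=100):
    """Iterated best responses for a toy selfish-caching instance.

    dist[i][j]: access distance between nodes i and j
    alpha:      cost of holding a replica
    A node's cost is alpha if it replicates, else the distance to the
    nearest replica (infinite when nobody replicates).
    Returns the replication profile at a pure Nash equilibrium, or None
    if the dynamics did not settle within max_rounds.
    """
    n = len(dist)
    replicate = [True] * n  # start from everyone caching
    INF = float("inf")

    def cost(i, rep):
        if rep[i]:
            return alpha
        return min((dist[i][j] for j in range(n) if j != i and rep[j]),
                   default=INF)

    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            flipped = replicate[:]
            flipped[i] = not replicate[i]
            if cost(i, flipped) < cost(i, replicate):
                replicate = flipped
                changed = True
        if not changed:
            return replicate  # no node can improve unilaterally
    return None
```

On a three-node path with unit edge lengths and α = 1.5, these dynamics settle with the two endpoints caching, which is a pure equilibrium of that instance.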
Regret minimization and the price of total anarchy
, 2008
Abstract

Cited by 59 (10 self)
We propose weakening the assumption made when studying the price of anarchy: Rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this “price of total anarchy” matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking [15], which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.
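Regret-minimizing play itself is standard; a minimal sketch of one such algorithm (multiplicative weights with losses assumed in [0, 1], not code from this paper) shows the gap to the best fixed action staying small:

```python
import math

def multiplicative_weights(losses, eta=0.1):
    """No-regret play against a loss sequence.

    losses[t][a]: loss of action a at step t, assumed in [0, 1].
    Returns (total expected loss of the algorithm,
             loss of the best fixed action in hindsight).
    """
    n = len(losses[0])
    w = [1.0] * n
    total = 0.0
    for step in losses:
        s = sum(w)
        p = [wi / s for wi in w]                      # play w proportionally
        total += sum(pi * li for pi, li in zip(p, step))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, step)]
    best_fixed = min(sum(step[a] for step in losses) for a in range(n))
    return total, best_fixed
```

When every player runs such an algorithm, the time-averaged play is what the "price of total anarchy" reasons about, without requiring convergence to a Nash equilibrium.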
Computing Equilibria in Multi-Player Games
 In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2004
Abstract

Cited by 53 (4 self)
We initiate the systematic study of algorithmic issues involved in finding equilibria (Nash and correlated) in games with a large number of players; such games, in order to be computationally meaningful, must be presented in some succinct, game-specific way. We develop a general framework for obtaining polynomial-time algorithms for optimizing over correlated equilibria in such settings, and show how it can be applied successfully to symmetric games (for which we actually find an exact polytopal characterization), graphical games, and congestion games, among others. We also present complexity results implying that such algorithms are not possible in certain other such games. Finally, we present a polynomial-time algorithm, based on quantifier elimination, for finding a Nash equilibrium in symmetric games when the number of strategies is relatively small.
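For intuition about what "finding equilibria" means computationally, here is a brute-force pure-Nash enumerator for tiny, explicitly given games (the encoding is illustrative); its exponential blow-up in the number of players is precisely why the paper works with succinct representations:

```python
from itertools import product

def pure_nash(n_players, actions, utility):
    """Enumerate all pure Nash equilibria of a small explicit game.

    utility(i, profile): player i's payoff at the joint action `profile`.
    A profile is an equilibrium when no player gains by a unilateral
    deviation. Runs in time |actions|**n_players — tiny games only.
    """
    equilibria = []
    for profile in product(actions, repeat=n_players):
        stable = all(
            utility(i, profile) >=
            max(utility(i, profile[:i] + (a,) + profile[i + 1:])
                for a in actions)
            for i in range(n_players)
        )
        if stable:
            equilibria.append(profile)
    return equilibria
```

A two-player coordination game, for instance, yields exactly its two coordinated profiles as pure equilibria.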
A network pricing game for selfish traffic
 in Proc. of SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC)
, 2005
Abstract

Cited by 52 (1 self)
The success of the Internet is remarkable in light of the decentralized manner in which it is designed and operated. Unlike small-scale networks, the Internet is built and controlled by a large number of disparate service providers who are not interested in any global optimization. Instead, providers simply seek to maximize their own profit by charging users for access to their service. Users themselves also behave selfishly, optimizing over price and quality of service. Game theory provides a natural framework for the study of such a situation. However, recent work in this area tends to focus on either the service providers or the network users, but not both. This paper introduces a new model for exploring the interaction of these two elements, in which network managers compete for users via prices and the quality of service provided. We study the extent to which competition between service providers hurts the overall social utility of the system.
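A toy version of such a model (my simplification, not the paper's exact formulation): a unit mass of users splits between two providers so that price plus linear congestion equalizes, and providers alternate best responses on a finite price grid.

```python
def user_split(p1, p2, c1=1.0, c2=1.0):
    """Mass-1 continuum of users splits so the two providers' full costs
    (price plus congestion load/capacity) equalize:
    p1 + x/c1 = p2 + (1 - x)/c2, with x clipped to [0, 1]."""
    x = (p2 - p1 + 1.0 / c2) / (1.0 / c1 + 1.0 / c2)
    return min(1.0, max(0.0, x))

def price_equilibrium(grid, rounds=100):
    """Alternating best responses on a price grid: provider 1's profit is
    p1 * share, provider 2's is p2 * (1 - share)."""
    p1 = p2 = grid[0]
    for _ in range(rounds):
        new_p1 = max(grid, key=lambda p: p * user_split(p, p2))
        new_p2 = max(grid, key=lambda p: p * (1.0 - user_split(new_p1, p)))
        if (new_p1, new_p2) == (p1, p2):
            break
        p1, p2 = new_p1, new_p2
    return p1, p2
```

In this symmetric toy instance the best responses settle near symmetric prices; the paper analyzes how such competition affects social utility in far more general settings.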
Bayesian combinatorial auctions
 Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP)
, 2008
Abstract

Cited by 48 (1 self)
We study the following Bayesian setting: m items are sold to n selfish bidders in m independent second-price auctions. Each bidder has a private valuation function that expresses complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular valuation functions, then every Bayesian Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game a pure Nash equilibrium always exists and can be found in time that is polynomial in both m and n.
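The mechanism itself is straightforward to simulate; a minimal sketch of m independent second-price auctions (with additive valuations, a special case of submodular, and truthful per-item bids as an assumed strategy rather than the paper's equilibrium analysis) is:

```python
def run_simultaneous_second_price(bids):
    """Run m independent second-price auctions.

    bids[i][j]: bidder i's bid on item j. Each item goes to the highest
    bidder at the second-highest bid.
    Returns (winners, prices), one entry per item.
    """
    n, m = len(bids), len(bids[0])
    winners, prices = [], []
    for j in range(m):
        order = sorted(range(n), key=lambda i: bids[i][j], reverse=True)
        winners.append(order[0])
        prices.append(bids[order[1]][j] if n > 1 else 0.0)
    return winners, prices

def welfare(values, winners):
    """Social welfare under additive valuations: each bidder's value is the
    sum of values[i][j] over the items j they win."""
    return sum(values[w][j] for j, w in enumerate(winners))
```

Under additive valuations, truthful per-item bidding makes each auction allocate its item to the bidder who values it most, so welfare is optimal; the paper's 2-approximation bound concerns the much harder general submodular, Bayesian case.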
Revisiting Log-Linear Learning: Asynchrony, Completeness and Payoff-Based Implementation
, 2008
Abstract

Cited by 42 (11 self)
Log-linear learning is a learning algorithm with equilibrium selection properties: in potential games, it provides guarantees on the percentage of time that the joint action profile spends at a potential maximizer. The traditional analysis of log-linear learning has centered around explicitly computing the stationary distribution. This analysis relied on a highly structured setting: (i) players' utility functions constitute a potential game; (ii) players update their strategies one at a time, which we refer to as asynchrony; (iii) at any stage, a player can select any action in the action set, which we refer to as completeness; and (iv) each player is endowed with the ability to assess the utility he would have received for any alternative action, provided that the actions of all other players remain fixed. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we seek to address to what degree one can relax these structural assumptions while maintaining that only potential function maximizers are the stochastically stable action profiles. In this paper, we introduce slight variants of log-linear learning that allow both synchronous updates and incomplete action sets. In both settings, we prove that only potential function maximizers are stochastically stable. Furthermore, we introduce a payoff-based version of log-linear learning, in which players are only aware of the utility they received and the action that they played; note that log-linear learning in its original form is not a payoff-based learning algorithm. In payoff-based log-linear learning, we also prove that only potential maximizers are stochastically stable. The key enabler for these results is to shift the focus of the analysis away from deriving the explicit form of the stationary distribution of the learning process and towards characterizing the stochastically stable states. The resulting analysis uses the theory of resistance trees for regular perturbed Markov decision processes, thereby allowing a relaxation of the aforementioned structural assumptions.
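A minimal sketch of the baseline algorithm in its original, structured setting (the coordination game and temperature below are my choices, not the paper's): asynchronous logit updates in a two-player potential game, where at low temperature the process spends most of its time at the potential maximizer.

```python
import math
import random

def log_linear_learning(payoff, n_actions, tau=0.5, steps=50000, seed=0):
    """Asynchronous log-linear (logit) learning in a 2-player game.

    payoff(i, a): player i's utility at joint action a = (a0, a1).
    tau: temperature; lower tau concentrates play on better responses.
    Returns the empirical fraction of steps spent at each joint action.
    """
    rng = random.Random(seed)
    a = [0, 0]
    counts = {}
    for _ in range(steps):
        i = rng.randrange(2)            # one player revises at a time (asynchrony)
        weights = []
        for x in range(n_actions):      # full action set available (completeness)
            trial = list(a)
            trial[i] = x
            weights.append(math.exp(payoff(i, tuple(trial)) / tau))
        r = rng.random() * sum(weights)  # sample action with prob ∝ e^{u/tau}
        for x in range(n_actions):
            r -= weights[x]
            if r <= 0:
                a[i] = x
                break
        key = tuple(a)
        counts[key] = counts.get(key, 0) + 1
    return {k: v / steps for k, v in counts.items()}
```

In a coordination game whose potential maximizer pays 2 and whose second equilibrium pays 1, the empirical frequency of the maximizer dominates, matching the Gibbs-style stationary distribution the classical analysis computes explicitly.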
Online stochastic packing applied to display ad allocation.
 In Proceedings of the 18th Annual European Symposium on Algorithms (ESA '10)
, 2010
Abstract

Cited by 42 (4 self)
Inspired by online ad allocation, we study online stochastic packing integer programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing integer programs which model various online resource allocation problems, including online variants of routing, ad allocation, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple dual training-based algorithm achieves a (1 − o(1))-approximation guarantee in the random-order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g., factor 1 − 1/e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
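A heavily simplified sketch of the dual-training idea (the threshold-style dual estimate and the instance encoding are my assumptions, not the paper's algorithm): learn a per-advertiser price on a sample prefix, then serve the remaining impressions greedily by dual-adjusted value.

```python
def train_duals(sample, capacities, sample_frac):
    """Crude dual estimate: for each advertiser a, take the value threshold
    below which a would exhaust its capacity, scaled to the sample size."""
    duals = {}
    for a, cap in capacities.items():
        bids = sorted((imp[a] for imp in sample), reverse=True)
        k = max(1, int(cap * sample_frac))
        duals[a] = bids[k - 1] if k <= len(bids) else 0.0
    return duals

def allocate_online(impressions, capacities, sample_frac=0.2):
    """Dual-training online allocation.

    impressions: list of dicts, imp[a] = value of giving imp to advertiser a.
    capacities:  max number of impressions each advertiser can take.
    The prefix is used only for training; the rest is served online.
    Returns (total value served, impressions assigned per advertiser).
    """
    split = int(len(impressions) * sample_frac)
    duals = train_duals(impressions[:split], capacities, sample_frac)
    remaining = dict(capacities)
    value, assigned = 0.0, {a: 0 for a in capacities}
    for imp in impressions[split:]:
        open_ads = [a for a in remaining if remaining[a] > 0]
        if not open_ads:
            break
        best = max(open_ads, key=lambda a: imp[a] - duals[a])
        if imp[best] - duals[best] >= 0:   # serve only above the dual price
            value += imp[best]
            remaining[best] -= 1
            assigned[best] += 1
    return value, assigned
```

The paper's analysis uses duals of a packing LP solved on the sampled prefix under the random-order model; this threshold heuristic only illustrates the train-then-serve structure.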