Results 1–10 of 320
Intrinsic Robustness of the Price of Anarchy
"... The price of anarchy (POA) is a worstcase measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium ..."
Abstract

Cited by 99 (11 self)
The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria, or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a “canonical sufficient condition” for an upper bound on the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (the “price of total anarchy”). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games — most notably, congestion games with cost functions restricted to an arbitrary fixed set — that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first …
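To make the smoothness machinery concrete: a cost-minimization game is (λ, μ)-smooth if, for every pair of outcomes s and s*, the sum of the players' costs when each unilaterally switches to its s*-strategy is at most λ·C(s*) + μ·C(s), and any such game has POA at most λ/(1 − μ). The following Python sketch (an illustration written for this listing, not code from the paper) checks the known (5/3, 1/3)-smoothness of affine congestion games on a minimal two-player, two-resource instance and evaluates the resulting bound of 5/2.

```python
from itertools import product

def player_cost(profile, i):
    # cost c(x) = x on each resource: player i pays the load on its resource
    return sum(1 for r in profile if r == profile[i])

def social_cost(profile):
    return sum(player_cost(profile, i) for i in range(len(profile)))

lam, mu = 5/3, 1/3  # known smoothness parameters for affine cost functions
profiles = list(product([0, 1], repeat=2))  # 2 players choosing 1 of 2 resources

# Verify sum_i C_i(s*_i, s_-i) <= lam*C(s*) + mu*C(s) for all profile pairs.
smooth = all(
    sum(player_cost(tuple(s_opt[k] if k == i else s[k] for k in range(2)), i)
        for i in range(2))
    <= lam * social_cost(s_opt) + mu * social_cost(s) + 1e-12
    for s in profiles for s_opt in profiles
)
bound = lam / (1 - mu)  # POA bound: 5/2 for affine congestion games
```

By the paper's extension theorem, the same 5/2 then automatically bounds mixed Nash equilibria, correlated equilibria, and regret-minimizing play in these games.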
Computing Correlated Equilibria in Multi-Player Games
 STOC'05
, 2005
"... We develop a polynomialtime algorithm for finding correlated equilibria (a wellstudied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, ..."
Abstract

Cited by 95 (6 self)
We develop a polynomial-time algorithm for finding correlated equilibria (a well-studied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multi-player games, encompassing essentially all known kinds, including all graphical games, polymatrix games, congestion games, scheduling games, and local effect games, as well as several generalizations. Our algorithm is based on a variant of the existence proof due to Hart and Schmeidler [11], and employs linear programming duality, the ellipsoid algorithm, Markov chain steady-state computations, as well as application-specific methods for computing multivariate expectations.
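The object being computed can be seen in miniature in the explicitly represented case, where a correlated equilibrium is simply a feasible point of a linear program over outcome distributions (the paper's contribution is the succinct case, where this LP has exponentially many variables). The sketch below, an illustration with hypothetical payoffs rather than anything from the paper, computes a welfare-maximizing correlated equilibrium of a 2×2 game of chicken using SciPy.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoffs for the game of "chicken" (action 0 = dare, 1 = chicken)
U1 = np.array([[0, 7], [2, 6]])  # row player
U2 = np.array([[0, 2], [7, 6]])  # column player
n = 2                            # actions per player

# Variables: p[a, b] flattened to a length-4 vector, index a*n + b.
# Incentive constraints: a player told to play its recommended action must
# not gain in expectation by deviating, conditional on the recommendation.
A_ub, b_ub = [], []
for a in range(n):               # row player: recommendation a, deviation a2
    for a2 in range(n):
        if a2 == a:
            continue
        row = np.zeros(n * n)
        for b in range(n):
            row[a * n + b] = U1[a2, b] - U1[a, b]  # gain from deviating
        A_ub.append(row); b_ub.append(0.0)
for b in range(n):               # column player, symmetrically
    for b2 in range(n):
        if b2 == b:
            continue
        row = np.zeros(n * n)
        for a in range(n):
            row[a * n + b] = U2[a, b2] - U2[a, b]
        A_ub.append(row); b_ub.append(0.0)

c = -(U1 + U2).flatten()         # maximize total expected payoff
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, n * n)), b_eq=[1.0],
              bounds=[(0, 1)] * (n * n))
p = res.x.reshape(n, n)          # the correlated equilibrium distribution
welfare = -res.fun
```

For these payoffs the welfare-maximizing correlated equilibrium mixes over (dare, chicken), (chicken, dare), and (chicken, chicken), achieving total welfare 10.5 — more than any Nash equilibrium of the game.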
Settling the Complexity of Computing Two-Player Nash Equilibria
"... We prove that Bimatrix, the problem of finding a Nash equilibrium in a twoplayer game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the c ..."
Abstract

Cited by 88 (5 self)
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: • Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. • The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: • Arrow-Debreu market equilibria are PPAD-hard to compute.
Decentralized charging control for large populations of plug-in electric vehicles,” submitted to
 IEEE Transactions on Control Systems Technology
"... Abstract—The paper develops a novel decentralized charging control strategy for large populations of plugin electric vehicles (PEVs). We consider the situation where PEV agents are rational and weakly coupled via their operation costs. At an established Nash equilibrium, each of the PEV agents reac ..."
Abstract

Cited by 87 (2 self)
Abstract—The paper develops a novel decentralized charging control strategy for large populations of plug-in electric vehicles (PEVs). We consider the situation where PEV agents are rational and weakly coupled via their operation costs. At an established Nash equilibrium, each PEV agent reacts optimally with respect to the average charging strategy of all PEV agents. Each of the average charging strategies can be approximated by an infinite-population limit, which is the solution of a fixed point problem. The control objective is to minimize electricity generation costs by establishing a PEV charging schedule that fills the overnight demand valley. The paper shows that, under certain mild conditions, there exists a unique Nash equilibrium that almost satisfies that goal. Moreover, the paper establishes a sufficient condition under which the system converges to the unique Nash equilibrium. The theoretical results are illustrated through various numerical examples.
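The valley-filling target itself is easy to state: total load (base demand plus charging) should be levelled across the overnight window, subject to delivering the required energy. As a hedged illustration of that objective — not the paper's decentralized fixed-point algorithm — the sketch below computes the centralized valley-filling schedule by bisection on a water-filling level, using made-up hourly demand numbers.

```python
def valley_fill(base_demand, energy, tol=1e-9):
    """Charge max(0, L - d_t) in each hour, with the water level L chosen
    by bisection so that the total charged energy equals `energy`."""
    lo, hi = min(base_demand), max(base_demand) + energy
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(max(0.0, mid - d) for d in base_demand) < energy:
            lo = mid      # level too low: not enough energy delivered
        else:
            hi = mid
    level = (lo + hi) / 2
    return [max(0.0, level - d) for d in base_demand]

# Toy overnight base demand (illustrative numbers) and a 12-unit fleet demand
demand = [10, 8, 5, 3, 2, 3, 6, 9]
charge = valley_fill(demand, energy=12.0)
```

In every hour that receives charge, the resulting total load sits at the same water level, which is exactly the flat-valley profile the paper's Nash equilibrium is shown to (almost) reproduce in a decentralized way.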
Computing Nash equilibria: Approximation and smoothed complexity
 In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS
, 2006
"... By proving that the problem of computing a 1/n Θ(1)approximate Nash equilibrium remains PPADcomplete, we show that the BIMATRIX game is not likely to have a fully polynomialtime approximation scheme. In other words, no algorithm with time polynomial in n and 1/ǫ can compute an ǫapproximate Nash ..."
Abstract

Cited by 86 (11 self)
By proving that the problem of computing a 1/n^Θ(1)-approximate Nash equilibrium remains PPAD-complete, we show that the BIMATRIX game is not likely to have a fully polynomial-time approximation scheme. In other words, no algorithm with time polynomial in n and 1/ε can compute an ε-approximate Nash equilibrium of an n×n bimatrix game, unless PPAD ⊆ P. Instrumental to our proof, we introduce a new discrete fixed-point problem on a high-dimensional cube with a constant side-length, such as an n-dimensional cube with side-length 7, and show that it is PPAD-complete. Furthermore, we prove that it is unlikely, unless PPAD ⊆ RP, that the smoothed complexity of the Lemke-Howson algorithm, or of any algorithm for computing a Nash equilibrium of a bimatrix game, is polynomial in n and 1/σ under perturbations with magnitude σ. Our result answers a major open question in the smoothed analysis of algorithms and the approximation of Nash equilibria.
A deterministic subexponential algorithm for solving parity games
 SODA
, 2006
"... The existence of polynomial time algorithms for the solution of parity games is a major open problem. The fastest known algorithms for the problem are randomized algorithms that run in subexponential time. These algorithms are all ultimately based on the randomized subexponential simplex algorithms ..."
Abstract

Cited by 82 (3 self)
The existence of polynomial-time algorithms for the solution of parity games is a major open problem. The fastest known algorithms for the problem are randomized algorithms that run in subexponential time. These algorithms are all ultimately based on the randomized subexponential simplex algorithms of Kalai and of Matousek, Sharir, and Welzl. Randomness seems to play an essential role in these algorithms. We use a completely different, and elementary, approach to obtain a deterministic subexponential algorithm for the solution of parity games. The new algorithm, like the existing randomized subexponential algorithms, uses only polynomial space, and it is almost as fast as the randomized subexponential algorithms mentioned above.
Convergence to Approximate Nash Equilibria in Congestion Games
 In SODA ’07
, 2007
"... ..."
(Show Context)
Complexity of computing optimal Stackelberg strategies in security resource allocation games
 In Proceedings of the National Conference on Artificial Intelligence (AAAI
, 2010
"... Recently, algorithms for computing gametheoretic solutions have been deployed in realworld security applications, such as the placement of checkpoints and canine units at Los Angeles International Airport. These algorithms assume that the defender (security personnel) can commit to a mixed strateg ..."
Abstract

Cited by 72 (12 self)
Recently, algorithms for computing game-theoretic solutions have been deployed in real-world security applications, such as the placement of checkpoints and canine units at Los Angeles International Airport. These algorithms assume that the defender (security personnel) can commit to a mixed strategy, a so-called Stackelberg model. As pointed out by Kiekintveld et al. (2009), in these applications multiple resources generally need to be assigned to multiple targets, resulting in an exponential number of pure strategies for the defender. In this paper, we study how to compute optimal Stackelberg strategies in such games, showing that this can be done in polynomial time in some cases, and is NP-hard in others.
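For background on the underlying solution concept: in an unrestricted normal-form game, an optimal mixed strategy to commit to can be computed with the classic multiple-LPs method of Conitzer and Sandholm — for each follower action t, solve an LP that maximizes the leader's utility subject to t being a follower best response, then keep the best of the n solutions. The sketch below is a generic illustration of that method with hypothetical payoffs, not the security-game algorithms studied in the paper (which must cope with exponentially many defender pure strategies).

```python
import numpy as np
from scipy.optimize import linprog

def optimal_commitment(L, F):
    """Optimal mixed strategy to commit to for the leader (row player),
    via one LP per follower action (the multiple-LPs method).
    L, F: leader and follower payoff matrices, shape (m, n)."""
    m, n = L.shape
    best_val, best_x = -np.inf, None
    for t in range(n):
        # maximize x @ L[:, t] subject to: x is a distribution, and the
        # follower weakly prefers action t to every other action under x
        A_ub = np.array([F[:, t2] - F[:, t] for t2 in range(n) if t2 != t])
        res = linprog(-L[:, t],
                      A_ub=A_ub if n > 1 else None,
                      b_ub=np.zeros(n - 1) if n > 1 else None,
                      A_eq=np.ones((1, m)), b_eq=[1.0],
                      bounds=[(0, 1)] * m)
        if res.success and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_x, best_val

# Hypothetical 2x2 example: the leader's dominant pure strategy earns only 1,
# but committing to the mix (1/2, 1/2) earns 2.5.
L = np.array([[1.0, 3.0], [0.0, 2.0]])
F = np.array([[1.0, 0.0], [0.0, 1.0]])
x, val = optimal_commitment(L, F)
```

As is standard for this method, ties are implicitly broken in the leader's favor: the LP only requires the follower to weakly prefer the targeted action.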
On the Complexity of Nash Equilibria and Other Fixed Points (Extended Abstract)
 In Proc. FOCS
, 2007
"... We reexamine what it means to compute Nash equilibria and, more generally, what it means to compute a fixed point of a given Brouwer function, and we investigate the complexity of the associated problems. Specifically, we study the complexity of the following problem: given a finite game, Γ, with 3 ..."
Abstract

Cited by 68 (8 self)
We reexamine what it means to compute Nash equilibria and, more generally, what it means to compute a fixed point of a given Brouwer function, and we investigate the complexity of the associated problems. Specifically, we study the complexity of the following problem: given a finite game, Γ, with 3 or more players, and given ε > 0, compute an approximation within ε of some (actual) Nash equilibrium. We show that approximation of an actual Nash equilibrium, even to within any nontrivial constant additive factor ε < 1/2 in just one desired coordinate, is at least as hard as the long-standing square-root sum problem, as well as a more general arithmetic circuit decision problem that characterizes P-time in a unit-cost model of computation with arbitrary-precision rational arithmetic; thus placing the approximation problem in P, or even NP, would resolve major open problems in the complexity of numerical computation. We show similar results for market equilibria: it is hard to estimate with any nontrivial accuracy the equilibrium prices in an exchange economy with a unique equilibrium, where the economy is given by explicit algebraic formulas for the excess demand functions. We define a class, FIXP, which captures search problems that can be cast as fixed point …
Computing Nash Equilibria for Scheduling on Restricted Parallel Links
 In Proceedings of the 36th Annual ACM Symposium on the Theory of Computing (STOC’04
, 2004
"... We consider the problem of routing n users on m parallel links under the restriction that each user may only be routed on a link from a certain set of allowed links for the user. So, this problem is equivalent to the correspondingly restricted scheduling problem of assigning n jobs to m parallel ma ..."
Abstract

Cited by 61 (12 self)
We consider the problem of routing n users on m parallel links under the restriction that each user may only be routed on a link from a certain set of allowed links for the user. This problem is thus equivalent to the correspondingly restricted scheduling problem of assigning n jobs to m parallel machines. In a Nash equilibrium, no user may improve its own Individual Cost (latency) by unilaterally switching to another link from its set of allowed links. For identical links, we present, as our main result, a polynomial-time algorithm to compute from any given assignment a Nash equilibrium with non-increased makespan. The algorithm gradually transforms the assignment by pushing the unsplittable user traffics through a flow network constructed from the users and the links. The algorithm uses ideas from blocking flows. Furthermore, we use techniques similar to those in the generic PREFLOW-PUSH algorithm to approximate in polynomial time a schedule with optimum makespan. This results in an improved approximation factor of 2 − 1/w1 for identical links, where w1 is the largest user traffic, and in an approximation factor of 2 for related links.
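For intuition about Nash equilibria in this model (though not the paper's flow-based algorithm, which additionally guarantees a non-increased makespan), plain best-response dynamics already converge to a pure Nash assignment on identical links: repeatedly move any job whose latency would strictly drop on another link. A minimal Python sketch for the unrestricted identical-links case:

```python
def best_response_nash(weights, m, assignment=None):
    """Best-response dynamics for weighted jobs on m identical links.
    Each step moves one job to a strictly cheaper link; on identical links
    every move lexicographically decreases the sorted load vector, so the
    process terminates at a pure Nash assignment."""
    n = len(weights)
    assign = list(assignment) if assignment else [0] * n  # start: all on link 0
    load = [0.0] * m
    for j, a in enumerate(assign):
        load[a] += weights[j]
    improved = True
    while improved:
        improved = False
        for j in range(n):
            cur = assign[j]
            best = min(range(m), key=lambda l: load[l])  # least-loaded link
            if load[best] + weights[j] < load[cur]:      # strict improvement
                load[cur] -= weights[j]
                load[best] += weights[j]
                assign[j] = best
                improved = True
    return assign, load

weights = [3, 3, 2, 2, 2]
assign, load = best_response_nash(weights, m=2)
```

Unlike the paper's algorithm, this generic process gives no makespan guarantee relative to the starting assignment, and such best-response sequences can take exponentially many steps in the worst case.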