Results 1–10 of 135
The complexity of computing a Nash equilibrium
, 2006
Abstract

Cited by 287 (16 self)
We resolve the question of the complexity of Nash equilibrium by showing that the problem of computing a Nash equilibrium in a game with 4 or more players is complete for the complexity class PPAD. Our proof uses ideas from the recently established equivalence between polynomial-time solvability of normal-form games and graphical games, and shows that these kinds of games can implement arbitrary members of a PPAD-complete class of Brouwer functions.
The Complexity of Pure Nash Equilibria
, 2004
Abstract

Cited by 158 (6 self)
We investigate, from the computational viewpoint, multiplayer games that are guaranteed to have pure Nash equilibria. We focus on congestion games, and show that a pure Nash equilibrium can be computed in polynomial time in the symmetric network case, while the problem is PLS-complete in general. We discuss implications for nonatomic congestion games, and we explore the scope of the potential function method for proving existence of pure Nash equilibria.
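The potential-function method mentioned in this abstract lends itself to a short sketch: in a congestion game, iterated best responses always terminate, because every improving move strictly decreases Rosenthal's potential. The following is illustrative only, not from the paper; the strategy representation (tuples of resource names) and the delay function are assumptions.

```python
def rosenthal_potential(loads, delay):
    """Rosenthal potential: for each resource r with load L, sum
    delay(r, 1) + ... + delay(r, L); improving moves decrease it."""
    return sum(sum(delay(r, k) for k in range(1, loads[r] + 1)) for r in loads)

def best_response_dynamics(n_players, strategies, delay, max_rounds=1000):
    """Iterate best responses from an arbitrary profile; in a congestion
    game this terminates at a pure Nash equilibrium."""
    profile = [strategies[0]] * n_players
    for _ in range(max_rounds):
        improved = False
        for i in range(n_players):
            def cost(s):
                # player i's cost if it plays s: recompute resource loads
                # with i's choice swapped in, then sum i's delays
                loads = {}
                for j, sj in enumerate(profile):
                    for r in (s if j == i else sj):
                        loads[r] = loads.get(r, 0) + 1
                return sum(delay(r, loads[r]) for r in s)
            best = min(strategies, key=cost)
            if cost(best) < cost(profile[i]):
                profile[i] = best
                improved = True
        if not improved:
            return profile   # no player can improve: pure Nash equilibrium
    return profile
```

With two players and two parallel links of delay equal to load, the dynamics split the players across the links, the unique pure equilibrium.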
Computing the optimal strategy to commit to
 In Proceedings of the 7th ACM Conference on Electronic Commerce (ACM-EC)
, 2006
Abstract

Cited by 135 (22 self)
In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we study how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. We give both positive results (efficient algorithms) and negative results (NP-hardness results).
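The pure-strategy commitment case admits a simple brute-force illustration: the leader tries each pure strategy, assumes the follower best-responds (breaking ties in the leader's favor, a common convention), and commits to whichever induces the best outcome. This is a hypothetical sketch, not the paper's algorithm.

```python
def optimal_pure_commitment(leader_payoff, follower_payoff):
    """Brute-force the leader's optimal pure strategy to commit to in a
    bimatrix game given as two row-major payoff matrices.
    Returns (leader's row, leader's resulting payoff)."""
    n_rows, n_cols = len(leader_payoff), len(leader_payoff[0])
    best_val, best_row = None, None
    for i in range(n_rows):
        # follower's best responses to the committed row i
        fmax = max(follower_payoff[i])
        responses = [j for j in range(n_cols) if follower_payoff[i][j] == fmax]
        # ties broken in the leader's favor
        val = max(leader_payoff[i][j] for j in responses)
        if best_val is None or val > best_val:
            best_val, best_row = val, i
    return best_row, best_val
```

The mixed-strategy case the paper treats is richer (it reduces to one linear program per follower response) and can pay the leader strictly more than any pure commitment.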
Playing large games using simple strategies
 In: Proc. of the 4th ACM Conf. on El. Commerce (EC ’03). Assoc. of Comp. Mach.
, 2003
Abstract

Cited by 113 (4 self)
We prove the existence of Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be approximated by the payoffs to the players in some such logarithmic support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be approximated by the payoffs in some m-tuple of constant support strategies. We also prove that if the payoff matrices of a two person game have low rank then the game has an exact Nash equilibrium with small support. This implies that if the payoff matrices can be well approximated by low rank matrices, the game has an equilibrium with small support. It also implies that if the payoff matrices have constant rank we can compute an exact Nash equilibrium in polynomial time.
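The sampling idea behind logarithmic supports can be shown directly: draw k pure strategies i.i.d. from a mixed strategy and play the uniform mixture on the sample; for payoffs in [0, 1] and k on the order of log n / ε², every pure-strategy payoff against the sampled mixture is within ε of its value against the original, with high probability. A toy sketch (function names are illustrative, not from the paper):

```python
import random

def empirical_mixture(mixed, k, rng):
    """Sample k pure strategies i.i.d. from `mixed` and return the uniform
    distribution on the sampled multiset (support size at most k)."""
    counts = [0] * len(mixed)
    for _ in range(k):
        x, acc = rng.random(), 0.0
        for i, p in enumerate(mixed):
            acc += p
            if x <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1   # guard against floating-point shortfall
    return [c / k for c in counts]

def max_payoff_gap(payoff, p, q):
    """Largest change in any pure-strategy expected payoff when the
    opponent's mixture moves from p to q -- the quantity the sampling
    (Chernoff + union bound) argument controls."""
    gaps = []
    for row in payoff:
        u_p = sum(row[j] * p[j] for j in range(len(p)))
        u_q = sum(row[j] * q[j] for j in range(len(q)))
        gaps.append(abs(u_p - u_q))
    return max(gaps)
```

Enumerating all multisets of size k = O(log n / ε²) and checking each for approximate equilibrium is what yields the quasi-polynomial n^O(log n / ε²) algorithm.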
AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents
, 2003
Abstract

Cited by 100 (5 self)
A satisfactory multiagent learning algorithm should, at a minimum, learn to play optimally against stationary opponents and converge to a Nash equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player, 2-action repeated games, assuming that the opponent’s (mixed) strategy is observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have these two properties in all repeated (finite) games. It requires only that the other players’ actual actions (not their strategies) can be observed at each step. It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others’ strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing other multiagent learning algorithms as well.
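The adapt-vs-retreat control flow can be caricatured in a few lines. This is only an illustrative skeleton under invented parameters (`window`, `tol`); the real AWESOME uses carefully scheduled epochs, shrinking thresholds, and restart rules that are omitted here.

```python
class AdaptOrRetreat:
    """Toy skeleton of AWESOME's control flow (NOT the full algorithm):
    track the opponent's empirical action frequencies; while they look
    stationary, best-respond to them; on detected drift, retreat to a
    precomputed equilibrium action."""

    def __init__(self, equilibrium_action, best_response, window=20, tol=0.25):
        self.eq = equilibrium_action   # precomputed equilibrium play
        self.br = best_response        # maps empirical freqs -> action
        self.window, self.tol = window, tol
        self.history = []

    def observe(self, opponent_action):
        self.history.append(opponent_action)

    def _freqs(self, sample):
        return {a: sample.count(a) / len(sample) for a in set(sample)}

    def act(self):
        h = self.history
        if len(h) < 2 * self.window:
            return self.eq             # not enough data: play equilibrium
        old = self._freqs(h[-2 * self.window:-self.window])
        new = self._freqs(h[-self.window:])
        drift = max(abs(new.get(a, 0.0) - old.get(a, 0.0))
                    for a in set(old) | set(new))
        if drift > self.tol:
            return self.eq             # opponent moved: retreat
        return self.br(new)            # opponent looks stationary: adapt
```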
Run the GAMUT: A comprehensive approach to evaluating game-theoretic algorithms
 In AAMAS-04
, 2004
Abstract

Cited by 86 (10 self)
We present GAMUT, a suite of game generators designed for testing game-theoretic algorithms. We explain why such a generator is necessary, offer a way of visualizing relationships between the sets of games supported by GAMUT, and give an overview of GAMUT’s architecture. We highlight the importance of using comprehensive test data by benchmarking existing algorithms. We show surprisingly large variation in algorithm performance across different sets of games for two widely studied problems: computing Nash equilibria and multiagent learning in repeated games.
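A uniformly random payoff generator, the simplest kind of generator a suite like GAMUT subsumes, looks like the sketch below (this is not GAMUT's actual API; indeed, GAMUT's thesis is precisely that one should benchmark on many structured distributions rather than only this one):

```python
import itertools
import random

def random_normal_form_game(n_players, n_actions, rng=random):
    """Generate a uniformly random normal-form game: one payoff vector per
    action profile, with each entry drawn i.i.d. from [0, 1].
    Returns {action_profile_tuple: [payoff_of_player_0, ...]}."""
    profiles = itertools.product(range(n_actions), repeat=n_players)
    return {prof: [rng.uniform(0.0, 1.0) for _ in range(n_players)]
            for prof in profiles}
```

Benchmarks run only on games like these tend to overstate algorithm performance, which is the variation across distributions the abstract highlights.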
Pure Nash Equilibria: Hard and Easy Games
Abstract

Cited by 82 (4 self)
In this paper we investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is Σ₂ᵖ-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions of the way each player's move depends on moves of other players. We say that a game has small neighborhood if the utility function for each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game G can be expressed by a graph G(G) or by a hypergraph H(G). Among other results, we show that if G has small neighborhood and if H(G) has bounded hypertree width (or if G(G) has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class NC² of highly parallelizable problems.
A Polynomial-time Nash Equilibrium Algorithm for Repeated Games
 Proceedings of the ACM Conference on Electronic Commerce (ACM-EC)
, 2004
Abstract

Cited by 72 (5 self)
With the increasing reliance on game theory as a foundation for auctions and electronic commerce, efficient algorithms for computing equilibria in multiplayer general-sum games are of great theoretical and practical interest. The computational complexity of finding a Nash equilibrium for a one-shot bimatrix game is a well known open problem. This paper treats a related but distinct problem, that of finding a Nash equilibrium for an average-payoff repeated bimatrix game, and presents a polynomial-time algorithm. Our approach draws on the well known “folk theorem” from game theory and shows how finite-state equilibrium strategies can be found efficiently and expressed succinctly.
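The folk-theorem construction alluded to above rests on an individual-rationality check: a target average-payoff profile is sustainable when each player gets at least their minmax (security) level, since deviations are punished by minmaxing. A sketch of that check using pure-strategy minmax values, a conservative bound since the theorem permits mixed punishments, which can only be harsher:

```python
def pure_minmax_row(A):
    """Row player's pure minmax in a bimatrix game with row-payoff matrix A:
    the column player picks the column minimizing the row player's best
    pure response."""
    return min(max(A[i][j] for i in range(len(A))) for j in range(len(A[0])))

def pure_minmax_col(B):
    """Column player's pure minmax, with column-payoff matrix B."""
    return min(max(B[i][j] for j in range(len(B[0]))) for i in range(len(B)))

def enforceable(A, B, target):
    """Check that a target average-payoff pair clears both pure-minmax
    levels; such (feasible) profiles can be sustained by finite-state
    strategies: play the agreed path, minmax any deviator."""
    return target[0] >= pure_minmax_row(A) and target[1] >= pure_minmax_col(B)
```

In the prisoner's dilemma, mutual cooperation clears both minmax levels, so the cooperative average payoff is sustainable in the repeated game even though it is not a one-shot equilibrium.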
Mixed-integer programming methods for finding Nash equilibria
 In Proceedings of the National Conference on Artificial Intelligence (AAAI)
, 2005
Abstract

Cited by 69 (22 self)
We present, to our knowledge, the first mixed integer program (MIP) formulations for finding Nash equilibria in games (specifically, two-player normal-form games). We study different design dimensions of search algorithms that are based on those formulations. Our MIP Nash algorithm outperforms Lemke-Howson but not Porter-Nudelman-Shoham (PNS) on GAMUT data. We argue why experiments should also be conducted on games with equilibria with medium-sized supports only, and present a methodology for generating such games. On such games MIP Nash drastically outperforms PNS but not Lemke-Howson. Certain MIP Nash formulations also yield anytime algorithms for ɛ-equilibrium, with provable bounds. Another advantage of MIP Nash is that it can be used to find an optimal equilibrium (according to various objectives). The prior algorithms can be extended to that setting, but they are orders of magnitude slower.
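The heart of any MIP formulation for Nash equilibrium is the complementarity condition: a pure strategy may receive positive probability only if its regret against the opponent's mixture is zero. A small verifier for that condition (illustrative, not the paper's formulation, which encodes it with binary variables and big-M or indicator constraints):

```python
def is_nash(A, B, x, y, tol=1e-9):
    """Verify the complementarity a MIP Nash formulation encodes: every
    pure strategy in a player's support must achieve that player's maximum
    expected payoff (zero regret) against the opponent's mixture.
    A, B are row/column payoff matrices; x, y the candidate mixtures."""
    ex_row = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(x))]
    ex_col = [sum(B[i][j] * x[i] for i in range(len(x))) for j in range(len(y))]
    ur, uc = max(ex_row), max(ex_col)
    row_ok = all(x[i] <= tol or ex_row[i] >= ur - tol for i in range(len(x)))
    col_ok = all(y[j] <= tol or ex_col[j] >= uc - tol for j in range(len(y)))
    return row_ok and col_ok
```

A MIP solver searches over which strategies are allowed in the support while enforcing exactly these constraints, which is also what makes it natural to bolt an objective (e.g. maximize welfare) on top.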
Settling the Complexity of Computing Two-Player Nash Equilibria
Abstract

Cited by 64 (4 self)
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems:
• Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
• The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.
Our results also have a complexity implication in mathematical economics:
• Arrow-Debreu market equilibria are PPAD-hard to compute.