Results 1-10 of 166
Graphical Models for Game Theory
, 2001
Abstract

Cited by 228 (21 self)
We introduce a compact graph-theoretic representation for multiparty game theory. Our main result is a provably correct and efficient algorithm for computing approximate Nash equilibria in one-stage games represented by trees or sparse graphs.
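The storage savings of the graphical representation can be illustrated with a minimal sketch (hypothetical 4-player ring game and toy payoffs; not the paper's algorithm): each player's payoff table is indexed only by its neighbors' actions, not the full joint action.

```python
# Hypothetical 4-player ring graphical game with binary actions: each
# player's payoff depends only on its own action and its two neighbors'.
# A full normal-form table needs 2**4 = 16 entries per player; the
# graphical representation needs only 2**3 = 8.

n = 4
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# Toy payoffs: player i earns 1 for matching both neighbors, else 0.
def local_payoff(own, neighbor_actions):
    return 1 if all(a == own for a in neighbor_actions) else 0

def payoff(i, joint):
    # Look up only the neighbors' actions -- the compact lookup.
    return local_payoff(joint[i], [joint[j] for j in neighbors[i]])

# The all-zeros profile is a pure Nash equilibrium of this toy game:
joint = (0, 0, 0, 0)
assert all(
    payoff(i, joint) >= payoff(i, joint[:i] + (1 - joint[i],) + joint[i + 1:])
    for i in range(n)
)
print("table sizes: full", 2 ** n, "vs graphical", 2 ** (len(neighbors[0]) + 1))
```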
Complexity Results about Nash Equilibria
, 2002
Abstract

Cited by 132 (10 self)
Noncooperative game theory provides a normative framework for analyzing strategic interactions.
Complexity of Mechanism Design
, 2002
Abstract

Cited by 131 (25 self)
The aggregation of conflicting preferences is a central problem in multiagent systems. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully and a (socially) desirable outcome is chosen. We propose an approach where a mechanism is automatically created for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Focusing on settings where side payments are not possible, we show that the mechanism design problem is NP-complete for deterministic mechanisms. This holds both for dominant-strategy implementation and for Bayes-Nash implementation. We then show that if we allow randomized mechanisms, the mechanism design problem becomes tractable. In other words, the coordinator can tackle the computational complexity introduced by its uncertainty about the agents' preferences by making the agents face additional uncertainty. This comes at no loss, and in some cases at a gain, in the (social) objective.
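The search space behind the NP-completeness result can be illustrated with a toy exhaustive search (hypothetical agents, types, and utilities; not the paper's method): every deterministic mechanism is a mapping from type reports to outcomes, and the number of such mappings grows exponentially with the number of report profiles.

```python
import itertools

# Toy automated mechanism design by exhaustive search: 2 agents, 2 types
# each, 2 outcomes, no side payments. All names and payoffs are made up.
types = [0, 1]
outcomes = [0, 1]
# u[agent][type][outcome]: agent's utility for each outcome given its type.
u = [
    [[2, 0], [0, 1]],   # agent 0
    [[1, 0], [0, 2]],   # agent 1
]
profiles = list(itertools.product(types, repeat=2))

def truthful(mech):
    # Dominant-strategy incentive compatibility: no agent ever gains by lying.
    for i in (0, 1):
        for prof in profiles:
            true_type = prof[i]
            for lie in types:
                misreport = list(prof)
                misreport[i] = lie
                if u[i][true_type][mech[tuple(misreport)]] > u[i][true_type][mech[prof]]:
                    return False
    return True

# 2**len(profiles) candidate mechanisms -- exponential in the profile count,
# which is why brute force does not scale.
best = max(
    (dict(zip(profiles, choice))
     for choice in itertools.product(outcomes, repeat=len(profiles))),
    key=lambda m: (truthful(m), sum(u[i][p[i]][m[p]] for p in profiles for i in (0, 1))),
)
print("best mechanism is truthful:", truthful(best))
```

A constant mechanism is always truthful, so the search never comes back empty; the `key` prefers truthful mechanisms, then higher total welfare under truthful reporting.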
Nash Q-Learning for General-Sum Stochastic Games
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2003
Abstract

Cited by 116 (0 self)
We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance.
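A minimal sketch of the Nash-Q update for two agents may help (all states, actions, and payoffs are made up; for simplicity the stage-game value here only searches pure-strategy equilibria, whereas the paper's method uses mixed equilibria of the stage game):

```python
import itertools

A = [0, 1]  # each agent's action set (toy)

def pure_nash_value(q1, q2):
    # Return (v1, v2) at a pure Nash equilibrium of the stage game (q1, q2).
    # Simplification: the paper computes mixed equilibria here.
    for a1, a2 in itertools.product(A, A):
        if (q1[(a1, a2)] >= max(q1[(b, a2)] for b in A)
                and q2[(a1, a2)] >= max(q2[(a1, b)] for b in A)):
            return q1[(a1, a2)], q2[(a1, a2)]
    raise ValueError("no pure equilibrium in this stage game")

def nash_q_update(q1, q2, s, joint, rewards, s_next, alpha=0.1, gamma=0.9):
    # Q_i(s, a) <- (1 - alpha) Q_i(s, a) + alpha (r_i + gamma * NashValue_i(s'))
    v1, v2 = pure_nash_value(q1[s_next], q2[s_next])
    q1[s][joint] = (1 - alpha) * q1[s][joint] + alpha * (rewards[0] + gamma * v1)
    q2[s][joint] = (1 - alpha) * q2[s][joint] + alpha * (rewards[1] + gamma * v2)

# Two states; state 1 holds a coordination stage game with value 1 at
# matching actions.
zero = {ja: 0.0 for ja in itertools.product(A, A)}
q1 = {0: dict(zero), 1: {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}}
q2 = {0: dict(zero), 1: {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}}
nash_q_update(q1, q2, s=0, joint=(0, 0), rewards=(1.0, 1.0), s_next=1)
print(round(q1[0][(0, 0)], 3))  # 0.1 * (1 + 0.9 * 1) = 0.19
```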
A framework for sequential planning in multiagent settings
 Journal of Artificial Intelligence Research
, 2005
Abstract

Cited by 95 (26 self)
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multiagent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian update to maintain their beliefs over time. The solutions map belief states to actions. Models of other agents may include their belief states and are related to agent types considered in games of incomplete information. We express the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and piecewise linearity and convexity of the value functions carry over to our framework. Our approach complements a more traditional approach to interactive settings which uses Nash equilibria as a solution paradigm. We seek to avoid some of the drawbacks of equilibria which may be non-unique and are not able to capture off-equilibrium behaviors. We do so at the cost of having to represent, process and continually revise models of other agents. Since the agent's beliefs may be arbitrarily nested, the optimal solutions to decision making problems are only asymptotically computable. However, approximate belief updates and approximately optimal plans are computable. We illustrate our framework using a simple application domain, and we show examples of belief updates and value functions.
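The core of the Bayesian update over models of another agent can be sketched in a few lines (all model names and probabilities are hypothetical; real interactive beliefs also range over physical states and nested beliefs):

```python
# Bayesian update of a belief over opponent models after observing an action.

def update_belief(belief, likelihoods, observed_action):
    # belief: P(model); likelihoods: P(action | model)
    posterior = {m: belief[m] * likelihoods[m][observed_action] for m in belief}
    z = sum(posterior.values())  # normalizing constant
    return {m: p / z for m, p in posterior.items()}

belief = {"cautious": 0.5, "aggressive": 0.5}
likelihoods = {
    "cautious":   {"wait": 0.8, "attack": 0.2},
    "aggressive": {"wait": 0.3, "attack": 0.7},
}
belief = update_belief(belief, likelihoods, "attack")
print(round(belief["aggressive"], 3))  # 0.35 / 0.45 = 0.778
```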
Computing Optimal Randomized Resource Allocations for Massive Security Games
 In AAMAS09
, 2009
Abstract

Cited by 79 (43 self)
Predictable allocations of security resources such as police officers, canine units, or checkpoints are vulnerable to exploitation by attackers. Recent work has applied game-theoretic methods to find optimal randomized security policies, including a fielded application at the Los Angeles International Airport (LAX). This approach has promising applications in many similar domains, including police patrolling for subway and bus systems, randomized baggage screening, and scheduling for the Federal Air Marshal Service (FAMS) on commercial flights. However, the existing methods scale poorly when the security policy requires coordination of many resources, which is central to many of these potential applications. We develop new models and algorithms that scale to much more complex instances of security games. The key idea is to use a compact model of security games, which allows exponential improvements in both memory and runtime relative to the best known algorithms for solving general Stackelberg games. We develop even faster algorithms for security games under payoff restrictions that are natural in many security domains. Finally, we introduce additional realistic scheduling constraints while retaining comparable performance improvements. The empirical evaluation comprises both random data and realistic instances of the FAMS and LAX problems. Our new methods scale to problems several orders of magnitude larger than the fastest known algorithm.
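The flavor of randomized coverage can be shown with a deliberately simplified model (hypothetical numbers; one defender resource, the attacker gets U_t if target t is uncovered and 0 if covered, a zero-sum special case rather than the paper's general Stackelberg model). The optimal coverage equalizes the attacker's expected payoff (1 - c_t) * U_t across targets:

```python
# Toy randomized security coverage: choose coverage probabilities c_t,
# sum(c_t) = 1, minimizing the attacker's best expected payoff.

def optimal_coverage(U):
    # Equalize (1 - c_t) * U_t = v across targets; with sum(c_t) = 1 this
    # gives v = (n - 1) / sum(1 / U_t). Valid only when every resulting
    # c_t is nonnegative (otherwise low-value targets get no coverage).
    n = len(U)
    v = (n - 1) / sum(1.0 / u for u in U)
    return [1 - v / u for u in U], v

U = [3.0, 2.0, 2.0]  # attacker's value for each undefended target (made up)
coverage, v = optimal_coverage(U)
assert abs(sum(coverage) - 1.0) < 1e-9
assert all(c >= 0 for c in coverage)  # holds for these values
print([round(c, 3) for c in coverage], round(v, 3))
```

For these numbers the high-value target gets half the coverage and the attacker's equalized payoff is 1.5; against any other coverage vector summing to 1, some target would offer the attacker more.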
Pure Nash Equilibria: Hard and Easy Games
Abstract

Cited by 70 (3 self)
In this paper we investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is Σ2p-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions of the way each player's move depends on moves of other players. We say that a game has small neighborhood if the utility function for each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game G can be expressed by a graph G(G) or by a hypergraph H(G). Among other results, we show that if G has small neighborhood and if H(G) has bounded hypertree width (or if G(G) has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class NC2 of highly parallelizable problems.
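A brute-force pure Nash check makes the source of hardness concrete: the joint-action space is exponential in the number of players (toy payoffs below are made up):

```python
import itertools

def pure_nash_equilibria(n_players, actions, payoff):
    # payoff(i, joint) -> player i's utility at joint action profile `joint`.
    eqs = []
    for joint in itertools.product(actions, repeat=n_players):
        # An equilibrium: no player gains by a unilateral deviation.
        if all(
            payoff(i, joint) >= payoff(i, joint[:i] + (a,) + joint[i + 1:])
            for i in range(n_players) for a in actions
        ):
            eqs.append(joint)
    return eqs

# Toy 2-player coordination game: both earn 1 when their actions match.
payoff = lambda i, joint: 1 if joint[0] == joint[1] else 0
print(pure_nash_equilibria(2, (0, 1), payoff))  # [(0, 0), (1, 1)]
```

The loop visits |actions|**n_players profiles, which is exactly what the paper's structural restrictions (small neighborhood, bounded hypertree width) let one avoid.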
Context-specific multiagent coordination and planning with factored MDPs
 In AAAI
, 2002
Abstract

Cited by 57 (3 self)
We present an algorithm for coordinated decision making in cooperative multiagent settings, where the agents' value function can be represented as a sum of context-specific value rules. The task of finding an optimal joint action in this setting leads to an algorithm where the coordination structure between agents depends on the current state of the system and even on the actual numerical values assigned to the value rules. We apply this framework to the task of multiagent planning in dynamic systems, showing how a joint value function of the associated Markov Decision Process can be approximated as a set of value rules using an efficient linear programming algorithm. The agents then apply the coordination graph algorithm at each iteration of the process to decide on the highest-value joint action, potentially leading to a different coordination pattern at each step of the plan.
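The rule-sum objective can be sketched with a brute-force maximization (the rule format and all values below are assumptions for illustration). The paper's coordination-graph algorithm finds the same argmax via variable elimination, without enumerating all joint actions:

```python
import itertools

# Context-specific value rules as (condition, value) pairs, where a
# condition constrains some agents' actions (made-up rules and values).
rules = [
    ({0: "a", 1: "a"}, 5.0),   # agents 0 and 1 both play "a"
    ({1: "b", 2: "b"}, 4.0),   # agents 1 and 2 both play "b"
    ({2: "a"}, 2.0),           # agent 2 plays "a"
]

def total_value(joint):
    # Sum the values of all rules whose condition fires under `joint`.
    return sum(v for cond, v in rules
               if all(joint[agent] == act for agent, act in cond.items()))

best = max(itertools.product("ab", repeat=3), key=total_value)
print(best, total_value(best))  # ('a', 'a', 'a') 7.0
```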
Computing Equilibria in Multi-Player Games
 In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)
, 2004
Abstract

Cited by 52 (4 self)
We initiate the systematic study of algorithmic issues involved in finding equilibria (Nash and correlated) in games with a large number of players; such games, in order to be computationally meaningful, must be presented in some succinct, game-specific way. We develop a general framework for obtaining polynomial-time algorithms for optimizing over correlated equilibria in such settings, and show how it can be applied successfully to symmetric games (for which we actually find an exact polytopal characterization), graphical games, and congestion games, among others. We also present complexity results implying that such algorithms are not possible in certain other such games. Finally, we present a polynomial-time algorithm, based on quantifier elimination, for finding a Nash equilibrium in symmetric games when the number of strategies is relatively small.
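The constraints being optimized over are linear, which is what makes correlated equilibria tractable. A small check of those constraints for the textbook game of Chicken (standard payoffs; the distribution below is the well-known correlated equilibrium of that game) illustrates the feasible region of the LP:

```python
import itertools

A = ["C", "D"]  # chicken / dare
u = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (6, 6), ("C", "D"): (2, 7),
    ("D", "C"): (7, 2), ("D", "D"): (0, 0),
}
# Candidate distribution over joint actions: the classic CE of Chicken.
p = {("C", "C"): 1/3, ("C", "D"): 1/3, ("D", "C"): 1/3, ("D", "D"): 0.0}

def is_correlated_eq(p, tol=1e-9):
    # Linear incentive constraints: conditional on being recommended `rec`,
    # no player gains in expectation by switching to `dev`.
    for i in (0, 1):
        for rec, dev in itertools.permutations(A, 2):
            gain = 0.0
            for other in A:
                joint = (rec, other) if i == 0 else (other, rec)
                swapped = (dev, other) if i == 0 else (other, dev)
                gain += p[joint] * (u[swapped][i] - u[joint][i])
            if gain > tol:
                return False
    return True

print(is_correlated_eq(p))  # True
```

Maximizing any linear objective (e.g. total payoff) subject to these constraints and to p being a distribution is a linear program over the joint-action variables; the paper's contribution is doing this when the number of such variables is exponential in the succinct representation.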
Computing Nash Equilibria of Action-Graph Games
 IN PROCEEDINGS OF THE 20TH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI)
, 2004
Abstract

Cited by 51 (9 self)
Action-graph games (AGGs) are a fully expressive game representation which can compactly express both strict and context-specific independence between players' utility functions. Actions are represented as nodes in a graph G, and the payoff to an agent who chose the action s depends only on the numbers of other agents who chose actions connected to s. We present algorithms for computing both symmetric and arbitrary equilibria of AGGs using a continuation method. We analyze the worst-case cost of computing the Jacobian of the payoff function, the exponential-time bottleneck step, and in all cases achieve exponential speedup. When the in-degree of G is bounded by a constant and the game is symmetric, the Jacobian can be computed in polynomial time.
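The compact payoff lookup can be sketched as follows (the graph, node names, and payoff function are hypothetical): the payoff for choosing node s depends only on the counts of players at s's neighbors, not on who chose what.

```python
from collections import Counter

# Hypothetical action graph: each node lists the nodes whose player counts
# its payoff depends on (including itself here).
graph = {"s1": ["s1", "s2"], "s2": ["s2"], "s3": ["s3", "s2"]}

def agg_payoff(s, chosen_actions, payoff_fn):
    counts = Counter(chosen_actions)
    # Project the full action count onto s's neighborhood -- the compact
    # configuration that the payoff function is indexed by.
    config = tuple(counts[t] for t in graph[s])
    return payoff_fn(s, config)

# Toy congestion-style payoff: fewer players on neighboring nodes is better.
payoff_fn = lambda s, config: -sum(config)
print(agg_payoff("s1", ["s1", "s1", "s2", "s3"], payoff_fn))  # -3
```

Because payoffs depend only on these small configurations, many joint actions collapse to the same lookup, which is the structure the paper exploits when computing the Jacobian.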