Results 1–10 of 39
The complexity of computing a Nash equilibrium
2006
Abstract

Cited by 238 (16 self)
We resolve the question of the complexity of Nash equilibrium by showing that the problem of computing a Nash equilibrium in a game with 4 or more players is complete for the complexity class PPAD. Our proof uses ideas from the recently established equivalence between polynomial-time solvability of normal-form games and graphical games, and shows that these kinds of games can implement arbitrary members of a PPAD-complete class of Brouwer functions.
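The equilibrium concept involved can be made concrete with a small sketch: checking whether a given pair of mixed strategies is a Nash equilibrium of a bimatrix game. Verification is easy; the paper's hardness result concerns *finding* such a pair. The function names and the matching-pennies example below are illustrative, not taken from the paper.

```python
def expected(M, x, y):
    """Expected payoff x^T M y under mixed strategies x (rows) and y (columns)."""
    return sum(x[i] * M[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def is_nash(A, B, x, y, tol=1e-9):
    """Nash equilibrium: no player gains by deviating to any pure strategy."""
    best_row = max(sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(x)))
    best_col = max(sum(B[i][j] * x[i] for i in range(len(x))) for j in range(len(y)))
    return best_row <= expected(A, x, y) + tol and best_col <= expected(B, x, y) + tol

# Matching pennies: the unique equilibrium mixes uniformly.
A = [[1, -1], [-1, 1]]    # row player's payoffs
B = [[-1, 1], [1, -1]]    # column player's payoffs (zero-sum)
print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(A, B, [1.0, 0.0], [0.5, 0.5]))  # False
```

Note that the check is polynomial in the game size; the PPAD-completeness result says that no such efficient procedure is believed to exist for producing the equilibrium in the first place.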
Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations
In STACS, 2005
Abstract

Cited by 72 (11 self)
We define Recursive Markov Chains (RMCs), a class of finitely presented denumerable Markov chains, and we study algorithms for their analysis. Informally, an RMC consists of a collection of finite-state Markov chains with the ability to invoke each other in a potentially recursive manner. RMCs offer a natural abstract model for probabilistic programs with procedures. They generalize, in a precise sense, a number of well-studied stochastic models, including Stochastic Context-Free Grammars (SCFGs) and Multi-Type Branching Processes (MTBPs). We focus on algorithms for reachability and termination analysis for RMCs: what is the probability that an RMC started from a given state reaches another target state, or that it terminates? These probabilities are in general irrational, and they arise as (least) fixed point solutions to certain (monotone) systems of nonlinear equations associated with RMCs. We address both the qualitative problem of determining whether the probabilities are 0, 1, or in between, and …
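The fixed-point characterization can be illustrated on the simplest interesting case, a one-type branching process (equivalently, the SCFG rule X → X X with probability p, X → ε otherwise): the termination probability is the least non-negative solution of x = p·x² + (1−p), and monotone (Kleene) iteration from 0 converges to it. This toy example is only an illustration, not the paper's algorithms.

```python
def termination_prob(p, iters=10_000):
    """Least fixed point of x = p*x^2 + (1-p), computed by monotone
    (Kleene) iteration from 0; models X -> X X (prob p) | eps (prob 1-p)."""
    x = 0.0
    for _ in range(iters):
        x = p * x * x + (1 - p)   # iterates increase toward the least solution
    return x

print(round(termination_prob(0.3), 6))  # 1.0: subcritical, terminates almost surely
print(round(termination_prob(0.6), 6))  # 0.666667: least root; the other root is 1
```

The quadratic x = p·x² + (1−p) has roots 1 and (1−p)/p; starting the iteration at 0 is what selects the *least* root, matching the paper's least-fixed-point semantics.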
On the complexity of numerical analysis
In Proc. 21st Ann. IEEE Conf. on Computational Complexity (CCC ’06), 2006
Abstract

Cited by 45 (5 self)
We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis:
• The Blum-Shub-Smale model of computation over the reals.
• A problem we call the “Generic Task of Numerical Computation,” which captures an aspect of doing numerical computation in floating point, similar to the “long exponent model” that has been studied in the numerical computing community.
We show that both of these approaches hinge on the question of understanding the complexity of the following problem, which we call PosSLP: Given a division-free straight-line program producing an integer N, decide whether N > 0.
• In the Blum-Shub-Smale model, polynomial-time computation over the reals (on discrete inputs) is polynomial-time equivalent to PosSLP when there are only algebraic constants. We conjecture that using transcendental constants provides no additional power beyond nonuniform reductions to PosSLP, and we present some preliminary results supporting this conjecture.
• The Generic Task of Numerical Computation is also polynomial-time equivalent to PosSLP.
We prove that PosSLP lies in the counting hierarchy. Combining this with work of Tiwari, we obtain that the Euclidean Traveling Salesman Problem lies in the counting hierarchy; the previous best upper bound for this important problem (in terms of classical complexity classes) was PSPACE. In the course of developing the context for our results on arithmetic circuits, we present some new observations on the complexity of ACIT: the Arithmetic Circuit Identity Testing problem. In particular, we show that if n! is not ultimately easy, then ACIT has subexponential complexity.
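PosSLP is easy to state concretely. In one hypothetical encoding (the representation below is illustrative, not the paper's), a division-free straight-line program is a list of instructions, each applying +, −, or × to two earlier values, starting from the constant 1; the question is whether the final value is positive. Exact evaluation with big integers decides this, but the value can have exponentially many bits in the program length, which is why brute-force evaluation yields no polynomial-time bound.

```python
def eval_slp(program):
    """Evaluate a division-free straight-line program over the integers.
    Register 0 holds the constant 1; each instruction (op, i, j) appends
    op(regs[i], regs[j]).  PosSLP asks: is the final value > 0?"""
    regs = [1]
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
    for op, i, j in program:
        regs.append(ops[op](regs[i], regs[j]))
    return regs[-1]

# k squarings of 2 build 2^(2^k): the value's bit length doubles per instruction.
prog = [('+', 0, 0)] + [('*', k, k) for k in range(1, 5)]
n = eval_slp(prog)
print(n > 0, n.bit_length())  # True 17  (n = 2^16)
```

Five instructions already produce a 17-bit number; n instructions can produce roughly 2^n bits, so even writing the value down is exponential.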
Probabilistic XML via Markov Chains
2009
Abstract

Cited by 11 (8 self)
We show how Recursive Markov Chains (RMCs) and their restrictions can define probabilistic distributions over XML documents, and study tractability …
Quasi-Birth-Death processes, Tree-Like QBDs, probabilistic 1-counter automata, and pushdown systems
2008
Abstract

Cited by 10 (4 self)
We begin by observing that (discrete-time) Quasi-Birth-Death Processes (QBDs) are equivalent, in a precise sense, to (discrete-time) probabilistic 1-Counter Automata (p1CAs), and both Tree-Like QBDs (TL-QBDs) and Tree-Structured QBDs (TS-QBDs) are equivalent to both probabilistic Pushdown Systems …
Computing Equilibria by Incorporating Qualitative Models
 In Proceedings of the Ninth International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2009). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems
Abstract

Cited by 9 (5 self)
Keywords: Game theory, continuous games, games of imperfect information, equilibrium
We present a new approach for solving large (even infinite) multiplayer games of imperfect information. The key idea behind our approach is that we include additional inputs in the form of qualitative models of equilibrium strategies (how the signal space should be qualitatively partitioned into action regions). In addition, we show that our approach can lead to strong strategies in large finite games that we approximate with infinite games. We prove that our main algorithm is correct even if given a set of qualitative models (satisfying a technical property) of which only some are accurate. We also show how to check the output in settings where all of the models might be wrong (under a weak assumption). Our algorithms can compute equilibria in several classes of games for which no prior algorithms have been developed, and we demonstrate that they run efficiently in practice. In the course of our analysis, we also develop the first mixed-integer programming formulations for computing an epsilon-equilibrium in general multiplayer normal- and extensive-form …
A Revealed Preference Approach to Computational Complexity in Economics
2010
Abstract

Cited by 9 (2 self)
One of the main building blocks of economics is the theory of the consumer, which postulates that consumers are utility maximizing. However, from a computational perspective, this model is called into question because the task of utility maximization subject to a budget constraint is computationally hard in the worst case under reasonable assumptions. In this paper, we study the empirical consequences of strengthening consumer choice theory to require that utilities are computationally easy to maximize. We prove the possibly surprising result that computational constraints have no empirical consequences whatsoever for consumer choice theory. That is, a data set is consistent with a utility-maximizing consumer if and only if it is consistent with a utility-maximizing consumer whose utility function can be maximized in strongly polynomial time. Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. The approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
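The classical test behind "consistent with a utility-maximizing consumer" is Afriat's theorem: a finite data set of prices and chosen bundles is rationalizable by a utility function exactly when it satisfies the Generalized Axiom of Revealed Preference (GARP). A minimal sketch of the GARP check follows; the function name and encoding are illustrative, and the examples are hypothetical data sets, not from the paper.

```python
def satisfies_garp(prices, bundles):
    """GARP check: bundle t is directly revealed preferred to s if x^t was
    chosen when x^s was affordable (p^t . x^t >= p^t . x^s).  GARP forbids
    a revealed-preference chain from t to s when s is strictly cheaper than
    t at s's own prices (a strict reverse preference)."""
    T = len(prices)
    cost = [[sum(p * x for p, x in zip(prices[t], bundles[s])) for s in range(T)]
            for t in range(T)]
    # Direct weak revealed preference, then transitive closure (Floyd-Warshall).
    R = [[cost[t][t] >= cost[t][s] for s in range(T)] for t in range(T)]
    for k in range(T):
        for i in range(T):
            for j in range(T):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return all(not (R[t][s] and cost[s][s] > cost[s][t])
               for t in range(T) for s in range(T))

# A strict revealed-preference cycle (irrational choices):
print(satisfies_garp([(1, 2), (2, 1)], [(1, 1), (2, 0)]))  # False
# Two choices consistent with indifference (rationalizable):
print(satisfies_garp([(1, 1), (1, 1)], [(2, 0), (0, 2)]))  # True
```

The check runs in O(T³) time in the number of observations, which fits the paper's theme: testing a data set for consistency with utility maximization is computationally easy, whatever one assumes about the consumer's internal optimization.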
Computing equilibria: A computational complexity perspective
2009
Abstract

Cited by 8 (2 self)
Computational complexity is the subfield of computer science that rigorously studies the intrinsic difficulty of computational problems. This survey explains how complexity theory defines “hard problems”; applies these concepts to several equilibrium computation problems; and discusses implications for computation, games, and behavior. We assume …
Market Equilibrium under Separable, Piecewise-Linear, Concave Utilities
Abstract

Cited by 6 (2 self)
We consider Fisher and Arrow-Debreu markets under additively separable, piecewise-linear, concave utility functions, and obtain the following results:
• For both market models, if an equilibrium exists, there is one that is rational and can be written using polynomially many bits.
• There is no simple necessary and sufficient condition for the existence of an equilibrium: the problem of checking for existence of an equilibrium is NP-complete for both market models; the same holds for existence of an ε-approximate equilibrium, for ε = O(n^{-5}).
• Under standard (mild) sufficient conditions, the problem of finding an exact equilibrium is in PPAD for both market models.
• Finally, building on the techniques of [CDDT09], we prove that under these sufficient conditions, finding an equilibrium for Fisher markets is PPAD-hard.