Results 1-10 of 75
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
Optimization Methods and Software, 1995
Cited by 147 (33 self)
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step length acceptance criterion and a non-monotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
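The solution condition PATH targets can be stated in a few lines. The sketch below is not the PATH algorithm (no path generation, no non-monotone search); it only illustrates the complementarity condition via the projection residual, on a made-up affine problem F(x) = Mx + q, solved by a simple projected fixed-point iteration:

```python
# Nonlinear complementarity problem: find x >= 0 with F(x) >= 0 and x.F(x) = 0.
# A point solves it exactly when the projection residual
# r(x) = x - max(0, x - F(x)) vanishes componentwise.
# M and q are invented example data, chosen so the iteration below contracts.

M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, 1.0]

def F(x):
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

def residual(x):
    """Componentwise projection residual; zero exactly at a solution."""
    return [x[i] - max(0.0, x[i] - Fi) for i, Fi in enumerate(F(x))]

# Projected fixed-point iteration x <- max(0, x - alpha * F(x)),
# a far simpler stand-in for PATH's stabilized Newton step.
x = [1.0, 1.0]
alpha = 0.3
for _ in range(200):
    x = [max(0.0, xi - alpha * Fi) for xi, Fi in zip(x, F(x))]
```

For this data the iterate converges to x = (0.5, 0), where F(x) = (0, 1.5): the first component is at an interior zero of F, the second sits at its bound with positive F, which is exactly the complementarity structure.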
Engineering and economic applications of complementarity problems
SIAM Review, 1997
Cited by 127 (24 self)
Abstract. This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations. The goal of this documentation is threefold: (i) to summarize the essential applications of the nonlinear complementarity problem known to date, (ii) to provide a basis for the continued research on the nonlinear complementarity problem, and (iii) to supply a broad collection of realistic complementarity problems for use in algorithmic experimentation and other studies.
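A tiny invented example of the kind of equilibrium model such surveys formalize: a one-commodity market with linear demand d(p) = a - b*p and supply s(p) = c*p. Equilibrium is the scalar complementarity condition p >= 0, s(p) - d(p) >= 0, p*(s(p) - d(p)) = 0, so either the market clears at a positive price, or excess supply coexists with a zero price:

```python
# Scalar NCP for the excess-supply map F(p) = s(p) - d(p) = (b + c)*p - a.
# All numbers are illustrative, not taken from the survey.

def equilibrium_price(a, b, c):
    """Solve the scalar NCP, assuming b + c > 0."""
    p = a / (b + c)          # price at which supply equals demand exactly
    return max(0.0, p)       # complementarity: price can never go negative

def check_ncp(p, a, b, c, tol=1e-12):
    """Verify p >= 0, F(p) >= 0, and p * F(p) = 0."""
    excess = (b + c) * p - a
    return p >= -tol and excess >= -tol and abs(p * excess) <= tol

# Normal case: d(p) = 10 - p, s(p) = p gives p = 5 and a cleared market.
# Degenerate case a = -2: excess supply even at p = 0, so the NCP answer is
# p = 0 with F(p) > 0, where a plain equation solver would return p = -1.
```

The degenerate case is the point of the complementarity formulation: the model remains meaningful where the underlying equation alone would produce a negative price.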
Computation of Equilibria in Finite Games
1996
Cited by 118 (1 self)
We review the current state of the art of methods for numerical computation of Nash equilibria for finite n-person games. Classical path-following methods, such as the Lemke-Howson algorithm for two-person games, and Scarf-type fixed point algorithms for n-person games provide globally convergent methods for finding a sample equilibrium. For large problems, methods which are not globally convergent, such as sequential linear complementarity methods, may be preferred on the grounds of speed. None of these methods is capable of characterizing the entire set of Nash equilibria. More computationally intensive methods, which derive from the theory of semialgebraic sets, are required for finding all equilibria. These methods can also be applied to compute various equilibrium refinements.
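In the smallest nontrivial case the equilibrium these path-following methods trace can be computed directly. A brute-force sketch for a 2x2 bimatrix game (A for the row player, B for the column player) with a fully mixed equilibrium: each player's probabilities are fixed by making the *opponent* indifferent between their two pure strategies. This is far simpler than Lemke-Howson or Scarf-type algorithms and does not generalize, but it exhibits the equilibrium conditions they solve:

```python
# Indifference equations for an interior (fully mixed) equilibrium of a
# 2x2 bimatrix game; exact rational arithmetic via fractions.
from fractions import Fraction

def fully_mixed_2x2(A, B):
    """Return (p, q): P(row plays strategy 0) and P(column plays strategy 0).
    Assumes the indifference denominators are nonzero (interior equilibrium)."""
    # q makes the ROW player indifferent between her two rows:
    q = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p makes the COLUMN player indifferent between his two columns:
    p = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching pennies: zero-sum, no pure equilibrium, unique mixed one.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = fully_mixed_2x2(A, B)   # both come out to 1/2
```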
Representations and Solutions for Game-Theoretic Problems
Artificial Intelligence, 1997
Cited by 115 (0 self)
A system with multiple interacting agents (whether artificial or human) is often best analyzed using game-theoretic tools. Unfortunately, while the formal foundations are well-established, standard computational techniques for game-theoretic reasoning are inadequate for dealing with realistic games. This paper describes the Gala system, an implemented system that allows the specification and efficient solution of large imperfect information games. The system contains the first implementation of a recent algorithm, due to Koller, Megiddo, and von Stengel. Experimental results from the system demonstrate that the algorithm is exponentially faster than the standard algorithm in practice, not just in theory. It therefore allows the solution of games that are orders of magnitude larger than were previously possible. The system also provides a new declarative language for compactly and naturally representing games by their rules. As a whole, the Gala system provides the capability for automa...
Playing Large Games using Simple Strategies
2003
Cited by 91 (1 self)
We prove the existence of ε-Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be ε-approximated by the payoffs to the players in some such logarithmic-support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size, and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be ε-approximated by the payoffs in some m-tuple of constant-support strategies.
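The core of the argument is a sampling step: replace a mixed strategy over n pure strategies by the uniform distribution on a multiset of k = O(log n / ε²) independent samples, and Hoeffding's bound says every expected payoff moves by at most ε with high probability. A numerical sketch of just that step, with an invented payoff vector and mixed strategy:

```python
# Empirically compare a mixed strategy's expected payoff with the payoff of
# the uniform distribution over a logarithmic-size sampled multiset.
import math
import random

random.seed(0)
n = 1000
eps = 0.1
k = math.ceil(math.log(n) / eps**2)             # 691 samples, far fewer than n

payoff = [random.random() for _ in range(n)]    # invented payoffs in [0, 1]
weights = [random.random() for _ in range(n)]
total = sum(weights)
mix = [w / total for w in weights]              # an arbitrary mixed strategy

exact = sum(p * u for p, u in zip(mix, payoff))
multiset = random.choices(range(n), weights=mix, k=k)
empirical = sum(payoff[i] for i in multiset) / k

gap = abs(exact - empirical)   # typically ~1/sqrt(k), well under eps
```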
Fast Algorithms for Finding Randomized Strategies in Game Trees
1994
Cited by 89 (11 self)
Interactions among agents can be conveniently described by game trees. In order to analyze a game, it is important to derive optimal (or equilibrium) strategies for the different players. The standard approach to finding such strategies in games with imperfect information is, in general, computationally intractable: it generates the normal form of the game (the matrix containing the payoff for each strategy combination) and then solves a linear program (LP) or a linear complementarity problem (LCP). The size of the normal form, however, is typically exponential in the size of the game tree, making this method impractical in all but the simplest cases. This paper describes a new representation of strategies which results in a practical linear formulation of the problem for two-player games with perfect recall (i.e., games where players never forget anything, which is a standard assumption). Standard LP or LCP solvers can then be applied to find optimal randomized strategies. The resulting algorithms are, in general, exponentially better than the standard ones, both in terms of time and in terms of space.
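The size gap is easy to make concrete. In a toy complete binary tree of depth d where a single player chooses at every internal node (a deliberate simplification of the general setting), a pure strategy picks an action at each of the 2^d - 1 nodes, while a sequence is just one root-to-node path of moves:

```python
# Counting argument for normal form vs sequence form on a toy binary tree.

def pure_strategies(d):
    """One binary choice per internal node: doubly exponential in depth d."""
    internal_nodes = 2**d - 1
    return 2**internal_nodes

def sequences(d):
    """One sequence per edge of the tree, plus the empty sequence: linear
    in the size of the tree."""
    edges = 2**(d + 1) - 2
    return edges + 1

# At depth 5: 2**31 = 2,147,483,648 pure strategies versus just 63 sequences.
```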
Efficient Computation of Equilibria for Extensive Two-person Games
1996
Cited by 85 (7 self)
The Nash equilibria of a two-person, nonzero-sum game are the solutions of a certain linear complementarity problem (LCP). In order to use this for solving a game in extensive form, the game must first be converted to a strategic description such as the normal form. The classical normal form, however, is often exponentially large in the size of the game tree. If the game has perfect recall, a linear-sized strategic description is the sequence form. For the resulting small LCP, we show that an equilibrium is found efficiently by Lemke’s algorithm, a generalization of the Lemke–Howson method.
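The complementarity content of the LCP can be checked directly: a mixed profile is a Nash equilibrium iff, for each player, every pure strategy earns at most the equilibrium payoff, with equality on the support (wherever its probability is positive). A small verification sketch, on a standard game rather than the paper's sequence-form construction:

```python
# Equilibrium check via the support/best-response complementarity condition.
# Exact rational arithmetic sidesteps floating-point tolerance questions.
from fractions import Fraction as F

def is_equilibrium(A, B, x, y):
    """A, B: payoff matrices (row player, column player); x, y: mixed strategies."""
    row_payoffs = [sum(A[i][j] * y[j] for j in range(len(y)))
                   for i in range(len(x))]
    col_payoffs = [sum(B[i][j] * x[i] for i in range(len(x)))
                   for j in range(len(y))]
    for probs, pays in ((x, row_payoffs), (y, col_payoffs)):
        best = max(pays)
        if any(p > 0 and u < best for p, u in zip(probs, pays)):
            return False          # support contains a non-best response
    return True

# Battle of the Sexes: the mixed equilibrium is x = (2/3, 1/3), y = (1/3, 2/3).
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
ok = is_equilibrium(A, B, [F(2, 3), F(1, 3)], [F(1, 3), F(2, 3)])
bad = is_equilibrium(A, B, [F(1, 2), F(1, 2)], [F(1, 3), F(2, 3)])
```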
Coping with Friction for Nonpenetrating Rigid Body Simulation
1991
Cited by 81 (0 self)
Algorithms and computational complexity measures for simulating the motion of contacting bodies with friction are presented. The bodies are restricted to be perfectly rigid bodies that contact at finitely many points. Contact forces between bodies must satisfy the Coulomb model of friction. A traditional principle of mechanics is that contact forces are impulsive if and only if nonimpulsive contact forces are insufficient to maintain the nonpenetration constraints between bodies. When friction is allowed, it is known that impulsive contact forces can be necessary even in the absence of collisions between bodies. This paper shows that computing contact forces according to this traditional principle is likely to require exponential time. An analysis of this result reveals that the principle for when impulses can occur is too restrictive, and a natural reformulation of the principle is proposed. Using the reformulated principle, an algorithm with expected polynomial time behavior for co...
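The Coulomb model the abstract assumes can be stated in a few lines. A minimal sketch for a single one-dimensional contact (the values of mu and the forces are invented, and this ignores the impulsive-force cases that drive the paper's complexity results):

```python
# Coulomb friction at one planar contact: the block sticks while the applied
# tangential force f_t stays inside the friction cone |f_t| <= mu * N;
# otherwise it slides, and kinetic friction mu * N opposes the slip.

def contact_response(f_t, N, mu):
    """Return (sticks, friction_force) for a single 1-D frictional contact."""
    if abs(f_t) <= mu * N:
        return True, -f_t                         # static friction cancels f_t
    return False, -mu * N * (1 if f_t > 0 else -1)  # kinetic friction, capped

# With mu = 0.5 and N = 10 N: a 4 N push sticks; a 6 N push slides
# against 5 N of kinetic friction.
```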
Large-Scale Linearly Constrained Optimization
1978
Cited by 74 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Continuation and Path Following
1992
Cited by 70 (6 self)
Contents: 1. Introduction; 2. The Basics of Predictor-Corrector Path Following; 3. Aspects of Implementations; 4. Applications; 5. Piecewise-Linear Methods; 6. Complexity; 7. Available Software; References.

1. Introduction. Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881-1886), Klein (1882-1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the
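The basic predictor-corrector idea is compact enough to sketch. Under invented data (the target equation and step count are illustrative, and this simple march assumes the path has no turning points, unlike the general pseudo-arclength methods the survey covers): to solve f(x) = 0, embed it in the convex homotopy H(x, t) = (1-t)(x - x0) + t f(x), trivial at t = 0 and the target problem at t = 1, then march t upward, correcting with Newton's method at each step from the previous solution:

```python
# Natural-parameter continuation for a scalar equation.

def f(x):
    return x**3 + x - 2        # root at x = 1

def df(x):
    return 3 * x**2 + 1        # f is monotone, so the path has no turning points

x0 = 0.0                       # trivial starting solution of H(x, 0) = x - x0
x = x0
steps = 10
for s in range(1, steps + 1):
    t = s / steps
    for _ in range(30):        # Newton corrector on H(., t), warm-started
        H = (1 - t) * (x - x0) + t * f(x)
        dH = (1 - t) + t * df(x)
        x -= H / dH
# At t = 1 the corrector has solved f(x) = 0 itself, so x is numerically 1.
```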