Results 1–10 of 83
Worst-case equilibria
 In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science
, 1999
"... In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a ver ..."
Abstract

Cited by 774 (16 self)
 Add to MetaCart
In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
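The worst-equilibrium-to-optimum ratio proposed here (later known as the price of anarchy) can be computed by brute force on a tiny instance of the shared-resource model. The sketch below is our own illustration, not the paper's construction; the task weights and the two-identical-links setup are hypothetical choices.

```python
from itertools import product
from fractions import Fraction

weights = [2, 2, 1, 1]   # hypothetical task weights
links = 2                # two identical shared links

def loads(assign):
    L = [0] * links
    for w, a in zip(weights, assign):
        L[a] += w
    return L

def is_nash(assign):
    # Pure Nash equilibrium: no task can strictly lower the load it
    # experiences by unilaterally moving to another link.
    L = loads(assign)
    for i, a in enumerate(assign):
        for b in range(links):
            if b != a and L[b] + weights[i] < L[a]:
                return False
    return True

all_assigns = list(product(range(links), repeat=len(weights)))
opt = min(max(loads(a)) for a in all_assigns)
worst_nash = max(max(loads(a)) for a in all_assigns if is_nash(a))
ratio = Fraction(worst_nash, opt)   # worst Nash makespan / social optimum
```

On this instance the balanced split {2,1} / {2,1} achieves the optimum 3, while {2,2} / {1,1} is a pure Nash equilibrium with makespan 4, giving a ratio of 4/3.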
Fast Contact Force Computation for Nonpenetrating Rigid Bodies
, 1994
"... A new algorithm for computing contact forces between solid objects with friction is presented. The algorithm allows a mix of contact points with static and dynamic friction. In contrast to previous approaches, the problem of computing contact forces is not transformed into an optimization problem. B ..."
Abstract

Cited by 264 (7 self)
 Add to MetaCart
A new algorithm for computing contact forces between solid objects with friction is presented. The algorithm allows a mix of contact points with static and dynamic friction. In contrast to previous approaches, the problem of computing contact forces is not transformed into an optimization problem. Because of this, the need for sophisticated optimization software packages is eliminated. For both systems with and without friction, the algorithm has proven to be considerably faster, simpler, and more reliable than previous approaches to the problem. In particular, implementation of the algorithm by nonspecialists in numerical programming is quite feasible.
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
 Optimization Methods and Software
, 1995
"... The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a pathgeneration procedure which is used to construct a piecewiselinear path from the current point to the Newton point; a step length acceptan ..."
Abstract

Cited by 199 (40 self)
 Add to MetaCart
(Show Context)
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step-length acceptance criterion and a non-monotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
On the complexity of the parity argument and other inefficient proofs of existence
 JCSS
, 1994
"... We define several new complexity classes of search problems, "between " the classes FP and FNP. These new classes are contained, along with factoring, and the class PLS, in the class TFNP of search problems in FNP that always have a witness. A problem in each of these new classes is define ..."
Abstract

Cited by 174 (7 self)
 Add to MetaCart
We define several new complexity classes of search problems, "between" the classes FP and FNP. These new classes are contained, along with factoring and the class PLS, in the class TFNP of search problems in FNP that always have a witness. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph. The existence of the solution sought is established via a simple graph-theoretic argument with an inefficiently constructive proof; for example, PLS can be thought of as corresponding to the lemma "every dag has a sink." The new classes are based on lemmata such as "every graph has an even number of odd-degree nodes." They contain several important problems for which no polynomial-time algorithm is presently known, including the computational versions of Sperner's lemma, Brouwer's fixpoint theorem, Chevalley's theorem, and the Borsuk-Ulam theorem, the linear complementarity problem for P-matrices, finding a mixed equilibrium in a non-zero-sum game, finding a second Hamilton circuit in a Hamiltonian cubic graph, a second Hamiltonian decomposition in a quartic graph, and others. Some of these problems are shown to be complete.
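The parity lemma mentioned in the abstract ("every graph has an even number of odd-degree nodes") follows from the handshake argument: the degrees sum to twice the number of edges. A quick computational check of the lemma (our own illustration, not from the paper):

```python
import random
from itertools import combinations

def odd_degree_count(n, edges):
    # Count vertices of odd degree in a graph on n nodes.
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(d % 2 for d in deg)

# Handshake lemma: sum of degrees = 2|E|, so the number of
# odd-degree vertices is always even.
random.seed(0)
for _ in range(100):
    n = random.randint(2, 12)
    pairs = list(combinations(range(n), 2))
    edges = random.sample(pairs, random.randint(0, len(pairs)))
    assert odd_degree_count(n, edges) % 2 == 0
```

The hard computational content of the paper is, of course, not verifying the lemma but finding the object it guarantees in an exponentially large, implicitly given graph.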
X.: Settling the complexity of two-player Nash equilibrium
 In: Proceedings of 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06)
, 2006
"... ..."
(Show Context)
Variational inequalities and the pricing of American options
 Acta Appl. Math.
, 1990
"... This paper is devoted to the derivation of some regularity properties of pricing functions for American options and to the discussion of numerical methods, based on the BensoussanLions methods of variational inequalities. In particular, we provide a complete justification of the socalled BrennanS ..."
Abstract

Cited by 75 (3 self)
 Add to MetaCart
This paper is devoted to the derivation of some regularity properties of pricing functions for American options and to the discussion of numerical methods, based on the Bensoussan-Lions methods of variational inequalities. In particular, we provide a complete justification of the so-called Brennan-Schwartz algorithm for the valuation of American put options.
Interfaces to PATH 3.0: Design, Implementation and Usage
 Computational Optimization and Applications
, 1998
"... Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface, the code was reorganized using objectoriented design techniques. At the same time, robustness issues ..."
Abstract

Cited by 56 (19 self)
 Add to MetaCart
(Show Context)
Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface, the code was reorganized using object-oriented design techniques. At the same time, robustness issues were considered and enhancements made to the algorithm. In this paper, we document the external interfaces to the PATH code and describe some of the new utilities using PATH. We then discuss the enhancements made and compare the results obtained from PATH 2.9 to the new version.
1 Introduction
The PATH solver [12] for mixed complementarity problems (MCPs) was introduced in 1995 and has since become the standard against which new MCP solvers are compared. However, the main user group for PATH continues to be economists using the MPSGE preprocessor [36]. While developing the new PATH implementation, we had two goals: to make the solver accessible to a broad audience and to improve the effecti...
Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
, 1992
"... Consider the affine variational inequality problem. It is shown that the distance to the solution set from a feasible point near the solution set can be bounded by the norm of a natural residual at that point. This bound is then used to prove linear convergence of a matrix splitting algorithm for so ..."
Abstract

Cited by 51 (6 self)
 Add to MetaCart
Consider the affine variational inequality problem. It is shown that the distance to the solution set from a feasible point near the solution set can be bounded by the norm of a natural residual at that point. This bound is then used to prove linear convergence of a matrix splitting algorithm for solving the symmetric case of the problem. This latter result improves upon a recent result of Luo and Tseng that further assumes the problem to be monotone.
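The natural residual and a matrix-splitting iteration can be illustrated on a small symmetric linear complementarity problem (a special case of the affine variational inequality). The projected SOR scheme below is a standard matrix-splitting method for symmetric positive definite M, sketched as our own toy example rather than as the algorithm analyzed in the paper; the matrix and vector are hypothetical.

```python
import numpy as np

def natural_residual(M, q, x):
    # x solves the LCP  0 <= x,  0 <= Mx + q,  x'(Mx + q) = 0
    # if and only if min(x, Mx + q) = 0 componentwise.
    return np.minimum(x, M @ x + q)

def psor(M, q, x0, omega=1.2, tol=1e-10, max_iter=1000):
    # Projected SOR: sweep through the coordinates, take a relaxed
    # Gauss-Seidel step, and project back onto the nonnegative orthant.
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        for i in range(len(x)):
            x[i] = max(0.0, x[i] - omega * (M[i] @ x + q[i]) / M[i, i])
        if np.max(np.abs(natural_residual(M, q, x))) < tol:
            break
    return x

M = np.array([[2.0, -1.0], [-1.0, 2.0]])  # symmetric positive definite
q = np.array([-1.0, -1.0])
x = psor(M, q, np.zeros(2))               # converges toward x = (1, 1)
```

Here the residual serves exactly the role described in the abstract: its norm bounds (and here certifies) the distance to the solution set.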
Algorithms For Complementarity Problems And Generalized Equations
, 1995
"... Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly more difficult pr ..."
Abstract

Cited by 46 (5 self)
 Add to MetaCart
Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly more difficult problems are being proposed that exceed the capabilities of even the best algorithms currently available. There is, therefore, an immediate need to improve the capabilities of complementarity solvers. This thesis addresses this need in two significant ways. First, the thesis proposes and develops a proximal perturbation strategy that enhances the robustness of Newton-based complementarity solvers. This strategy enables algorithms to reliably find solutions even for problems whose natural merit functions have strict local minima that are not solutions. Based upon this strategy, three new algorithms are proposed for solving nonlinear mixed complementarity problems that represent a significant improvement in robustness over previous algorithms. These algorithms have local Q-quadratic convergence behavior, yet depend only on a pseudomonotonicity assumption to achieve global convergence from arbitrary starting points. Using the MCPLIB and GAMSLIB test libraries, we perform extensive computational tests that demonstrate the effectiveness of these algorithms on realistic problems. Second, the thesis extends some previously existing algorithms to solve more general problem classes. Specifically, the NE/SQP method of Pang & Gabriel (1993), the semismooth equations approach of De Luca, Facchinei & Kanz...
Solution of General Linear Complementarity Problems via Nondifferentiable Concave Minimization
 Acta Mathematica Vietnamica
, 1997
"... Finite termination, at point satisfying the minimum principle necessary optimality condition, is established for a stepless (no line search) successive linearization algorithm (SLA) for minimizing a nondifferentiable concave function on a polyhedral set. The SLA is then applied to the general linear ..."
Abstract

Cited by 35 (13 self)
 Add to MetaCart
(Show Context)
Finite termination, at a point satisfying the minimum principle necessary optimality condition, is established for a stepless (no line search) successive linearization algorithm (SLA) for minimizing a nondifferentiable concave function on a polyhedral set. The SLA is then applied to the general linear complementarity problem (LCP), formulated as minimizing a piecewise-linear concave error function on the usual polyhedral feasible region defining the LCP. When the feasible region is nonempty, the concave error function always has a global minimum at a vertex, and the minimum is zero if and only if the LCP is solvable. The SLA terminates at a solution or stationary point of the problem in a finite number of steps. A special case of the proposed algorithm [8] solved without failure 80 consecutive cases of the LCP formulation of the knapsack feasibility problem, ranging in size between 10 and 3000.
1 Introduction
We consider the classical linear complementarity problem (LCP) [4, 12, 5] 0 ≤ x ⊥ ...
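The piecewise-linear concave error function in this formulation can be made concrete on a toy instance (our own hypothetical example, not one from the paper): on the feasible region, each term min(x_i, (Mx+q)_i) is nonnegative, and the sum vanishes exactly at LCP solutions.

```python
import numpy as np

def lcp_error(M, q, x):
    # Piecewise-linear concave error for the LCP
    #   0 <= x,  0 <= Mx + q,  x'(Mx + q) = 0.
    # On the feasible polyhedron it is >= 0, and it is zero
    # exactly when x and Mx + q are complementary.
    return float(np.sum(np.minimum(x, M @ x + q)))

def feasible(M, q, x):
    return bool(np.all(x >= 0) and np.all(M @ x + q >= 0))

M = np.eye(2)                    # hypothetical instance
q = np.array([-1.0, 1.0])

x_star = np.array([1.0, 0.0])    # Mx*+q = (0, 1): a solution
x_feas = np.array([2.0, 3.0])    # feasible but not complementary
```

At `x_star` the error is 0; at the feasible non-solution `x_feas` it is min(2, 1) + min(3, 4) = 4, which is the quantity the SLA drives to zero (or to a stationary value) in finitely many steps.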