Results 1–9 of 9
Linear Complementarity and Oriented Matroids
 Journal of the Operational Research Society of Japan
, 1990
Abstract
Cited by 12 (8 self)
A combinatorial abstraction of the linear complementarity theory in the setting of oriented matroids was first considered by M.J. Todd. In this paper, we take a fresh look at this abstraction, and attempt to give a simple treatment of the combinatorial theory of linear complementarity. We obtain new theorems, proofs and algorithms in oriented matroids whose specializations to the linear case are also new. For this, the notion of sufficiency of square matrices, introduced by Cottle, Pang and Venkateswaran, is extended to oriented matroids. Then, we prove a sort of duality theorem for oriented matroids, which roughly states: exactly one of the primal and the dual system has a complementary solution if the associated oriented matroid satisfies "weak" sufficiency. We give two different proofs for this theorem, an elementary inductive proof and an algorithmic proof using the criss-cross method, which solves one of the primal and dual problems by using surprisingly simple pivot rules (without any pertur...
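For concreteness, the LCP that this oriented-matroid theory abstracts asks for $z \ge 0$ with $w = Mz + q \ge 0$ and $z^T w = 0$. A minimal sketch of a solution checker (an illustrative helper, not the paper's criss-cross method; the function name and toy matrix are assumptions):

```python
import numpy as np

def is_complementary_solution(M, q, z, tol=1e-9):
    """Verify z as a solution of LCP(M, q): z >= 0, w = M z + q >= 0, z^T w = 0."""
    w = M @ z + q
    return bool((z >= -tol).all() and (w >= -tol).all() and abs(z @ w) <= tol)

# Toy instance: whenever q >= 0, the trivial point z = 0 is complementary.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print(is_complementary_solution(M, q, np.zeros(2)))  # → True
```

The criss-cross method itself pivots on a tableau built from $M$ and $q$; the checker above only verifies candidate output, which is the easy direction.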
Solution of Finite-Dimensional Variational Inequalities Using Smooth Optimization with Simple Bounds
, 1997
Abstract
Cited by 11 (5 self)
The variational inequality problem is reduced to an optimization problem with a differentiable objective function and simple bounds. Theoretical results are proved that relate stationary points of the minimization problem to solutions of the variational inequality problem. Perturbations of the original problem are studied, and an algorithm that uses the smooth minimization approach for solving monotone problems is defined. Key words. Variational inequalities, box constrained optimization, complementarity. 1 Introduction. Let $\Omega$ be a nonempty, closed and convex subset of $\mathbb{R}^n$ and let $F : \mathbb{R}^n \to \mathbb{R}^n$. The finite-dimensional variational inequality problem, denoted by VIP, is to find a vector $x \in \Omega$ such that $\langle F(x), w - x \rangle \ge 0$ for all $w \in \Omega$ (1). This problem has many interesting applications and its solution using special techniques has been considered extensively in the literature; see, for example, (Ref. 1) and references therein. The linear and nonlinear comp...
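The reduction above rests on the fact that $x$ solves the VIP exactly when a fixed-point residual vanishes. For a box $\Omega = [lo, hi]^n$ that residual is cheap to evaluate; the sketch below is illustrative only (the map $F$ and the helper name are invented for the example):

```python
import numpy as np

def natural_residual(F, x, lo, hi):
    """Natural-map residual for a box-constrained VIP:
    x solves the VIP iff x equals the projection of x - F(x) onto [lo, hi]^n."""
    return x - np.clip(x - F(x), lo, hi)

# Example: F(x) = x - 1 on the box [0, 2]^2; x = (1, 1) solves the VIP.
F = lambda x: x - 1.0
x = np.array([1.0, 1.0])
print(np.linalg.norm(natural_residual(F, x, 0.0, 2.0)))  # → 0.0
```

Driving a smooth merit function built from such a residual to zero, subject only to the simple bounds, is the general shape of the approach the abstract describes.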
An Interior Point Potential Reduction Method for Constrained Equations
, 1995
Abstract
Cited by 11 (3 self)
We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs, linear and nonlinear complementarity problems. In general, constrained equations provide a unified formulation for many mathematical programming problems, including complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities and nonlinear programs. Combining ideas from the damped Newton and interior point methods, we present an iterative algorithm for solving a constrained system of equations and investigate its convergence properties. Specialization of the algorithm and its convergence analysis to complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities are discussed in detail. We also report the computational results of the implementation of the algo...
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems
, 1995
Abstract
Cited by 10 (4 self)
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision $\varepsilon$) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial in $n$, $\ln(1/\varepsilon)$, and a condition number. If the mapping is linear, then the results in this paper coincide with the ones in [13].
A Long Step Barrier Method for Convex Quadratic Programming
 Algorithmica
, 1990
Abstract
Cited by 8 (2 self)
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After a reduction of the barrier parameter, a series of long steps along projected Newton directions is taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is $O(\sqrt{n}\,L)$ or $O(nL)$, depending on how the barrier parameter is updated. Key Words: convex quadratic programming, interior point method, logarithmic barrier function, polynomial algorithm. 1 Introduction. Karmarkar's [14] invention of the projective method for linear programming has given rise to active research in interior point algorithms. At this moment, the variants can roughly be categorized into four classes: projective, affine scaling, path-following and potential reduction methods. Researchers have also extended interior point methods to other problems, including convex qu...
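The "long steps along projected Newton directions" can be sketched for the barrier subproblem $\min \tfrac{1}{2}x^TQx + c^Tx - \mu\sum_i \ln x_i$ subject to $Ax = b$. Everything below (the damping rule, the function name, the toy problem) is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def barrier_newton_step(Q, c, A, x, mu):
    """One damped Newton step for the log-barrier subproblem, assuming the
    current x is strictly feasible (A x = b, x > 0); the step preserves A x = b."""
    n = len(x)
    g = Q @ x + c - mu / x                    # gradient of the barrier function
    H = Q + mu * np.diag(1.0 / x**2)          # Hessian of the barrier function
    m = A.shape[0]
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    dx = np.linalg.solve(KKT, np.concatenate([-g, np.zeros(m)]))[:n]
    # Damped ("long") step with a fraction-to-boundary rule to keep x > 0.
    t = min(1.0, 0.9 / max(1e-12, (-dx / x).max()))
    return x + t * dx

# Toy QP: min 0.5*||x||^2  s.t.  x1 + x2 = 2, x > 0; by symmetry the
# barrier minimizer is (1, 1) for any mu > 0.
Q, c, A = np.eye(2), np.zeros(2), np.array([[1.0, 1.0]])
x = np.array([1.5, 0.5])
for _ in range(30):
    x = barrier_newton_step(Q, c, A, x, mu=0.1)
print(np.round(x, 4))
```

In the method the abstract describes, such inner Newton steps are repeated until the iterate is near the center for the current $\mu$, after which $\mu$ is reduced; how aggressively it is reduced is what separates the $O(\sqrt{n}\,L)$ and $O(nL)$ bounds.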
A Strongly Polynomial Rounding Procedure Yielding a Maximally Complementary Solution for P*(κ) Linear Complementarity Problems
, 1998
Abstract
Cited by 5 (4 self)
We deal with Linear Complementarity Problems (LCPs) with $P_*(\kappa)$ matrices. First we establish the convergence rate of the complementary variables along the central path. The central path is parameterized by the barrier parameter $\mu$, as usual. Our elementary proof reproduces the known result that the variables on, or close to, the central path fall into three classes in which these variables are $O(1)$, $O(\mu)$ and $O(\sqrt{\mu})$, respectively. The constants hidden in these bounds are expressed in, or bounded by, the input data. All this is preparation for our main result: a strongly polynomial rounding procedure. Given a point with a sufficiently small complementarity gap that is close enough to the central path, the rounding procedure produces a maximally complementary solution in at most $O(n^3)$ arithmetic operations. The result implies that Interior Point Methods (IPMs) not only converge to a complementary solution of $P_*(\kappa)$ LCPs but, when furnished with our rounding procedure, they can produce a max...
Interior Point Methods For Global Optimization
 INTERIOR POINT METHODS OF MATHEMATICAL PROGRAMMING
, 1996
Abstract
Cited by 4 (1 self)
Interior point methods, originally invented in the context of linear programming, have found a much broader range of applications, including global optimization problems that arise in engineering, computer science, operations research, and other disciplines. This chapter overviews the conceptual basis and applications of interior point methods for some classes of global optimization problems.
A Polynomial Method of Weighted Centers for Convex Quadratic Programming
 Journal of Information & Optimization Sciences
, 1991
Abstract
Cited by 3 (2 self)
A generalization of the weighted central path-following method for convex quadratic programming is presented. This is done by uniting and modifying the main ideas of the weighted central path-following method for linear programming and the interior point methods for convex quadratic programming. By means of the linear approximation of the weighted logarithmic barrier function and weighted inscribed ellipsoids, `weighted' trajectories are defined. Each strictly feasible primal-dual point pair defines such a weighted trajectory. The algorithm can start at any strictly feasible primal-dual point pair that defines a weighted trajectory, which is then followed throughout the algorithm. This algorithm has the nice feature that it is not necessary to start close to the central path, so additional transformations are not needed. In return, the theoretical complexity of our algorithm depends on the position of the starting point. Polynomiality is proved under the usual mild cond...
Two Interior-Point Methods for Nonlinear
 J. Optim. Theory Appl
, 1999
Abstract
Cited by 2 (1 self)
Two interior-point algorithms using a wide neighborhood of the central path are proposed to solve nonlinear $P_*$ complementarity problems. The proof of the polynomial complexity of the first method requires that the problem satisfy a scaled Lipschitz condition. When specialized to monotone complementarity problems, the results of the first method are similar to the ones in Ref. 1. The second method is quite different from the first in that the proof of its global convergence does not require the scaled Lipschitz assumption. At each step of this algorithm, however, one has to compute an approximate solution of a nonlinear system such that a certain accuracy requirement is satisfied. Key Words. Interior-point algorithms, nonlinear $P_*$ complementarity problems, polynomial complexity, scaled Lipschitz condition. 1. Introduction. Consider the complementarity problem (CP), that is, finding a pair $(x, u) \in \mathbb{R}^n \times \mathbb{R}^n$ such that $u = F(x)$, $(x, u) \ge 0$ and $x^T u = 0$, where $F$ ...
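The "wide neighborhood" used by such methods is typically $\mathcal{N}_\infty^-(\gamma) = \{(x, u) > 0 : \min_i x_i u_i \ge \gamma\mu\}$ with $\mu = x^T u / n$. A membership test is a one-liner; the sketch below is illustrative (the function name, $\gamma$ default, and sample points are assumptions):

```python
import numpy as np

def in_wide_neighborhood(x, u, gamma=0.25):
    """Test (x, u) for membership in the wide neighborhood N_inf^-(gamma):
    all components positive and min_i x_i*u_i >= gamma * mu, mu = x^T u / n."""
    mu = x @ u / len(x)
    return bool((x > 0).all() and (u > 0).all() and (x * u).min() >= gamma * mu)

# Equal products x_i*u_i mean the point lies exactly on the central path.
x, u = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(in_wide_neighborhood(x, u))  # → True
```

Allowing the products $x_i u_i$ to spread as far as $\gamma\mu$ (rather than confining them to a small 2-norm ball around $\mu e$) is what permits the long steps that make these algorithms practical.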