Results 1–6 of 6
Criss-Cross Methods: A Fresh View on Pivot Algorithms
 Mathematical Programming
, 1997
"... this paper is to present mathematical ideas and ..."
A Strongly Polynomial Rounding Procedure Yielding a Maximally Complementary Solution for P*(κ) Linear Complementarity Problems
, 1998
Abstract

Cited by 5 (4 self)
We deal with Linear Complementarity Problems (LCPs) with P*(κ) matrices. First we establish the convergence rate of the complementary variables along the central path. The central path is parameterized by the barrier parameter μ, as usual. Our elementary proof reproduces the known result that the variables on, or close to, the central path fall into three classes in which these variables are O(1), O(μ) and O(√μ), respectively. The constants hidden in these bounds are expressed in, or bounded by, the input data. All this is preparation for our main result: a strongly polynomial rounding procedure. Given a point with sufficiently small complementarity gap and close enough to the central path, the rounding procedure produces a maximally complementary solution in at most O(n³) arithmetic operations. The result implies that Interior Point Methods (IPMs) not only converge to a complementary solution of P*(κ) LCPs but, when furnished with our rounding procedure, they can produce a max...
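The three-class structure described in this abstract can be illustrated by simple thresholding: near the central path, each coordinate's order of magnitude relative to μ reveals which class it falls into. The sketch below is an illustration only, with an assumed separation constant; the paper's actual rounding procedure bounds the hidden constants by the input data.

```python
import math

def classify_variables(x, mu):
    """Partition indices by the order of magnitude of x_i near the
    central path: Theta(1), Theta(sqrt(mu)), or Theta(mu).

    Illustrative thresholding only; the factor 10 is an assumed
    separation constant, not the paper's data-dependent bound.
    """
    big, mid, small = [], [], []
    cut_hi = math.sqrt(mu) * 10.0
    cut_lo = math.sqrt(mu) / 10.0
    for i, xi in enumerate(x):
        if xi >= cut_hi:
            big.append(i)      # x_i = O(1): kept positive by the rounding
        elif xi >= cut_lo:
            mid.append(i)      # x_i = O(sqrt(mu))
        else:
            small.append(i)    # x_i = O(mu): rounded to zero
    return big, mid, small
```

For example, with mu = 1e-8 the coordinates 1.0, 1e-4 and 1e-8 land in the three classes respectively.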
The Finite Criss-Cross Method for Hyperbolic Programming
 Informatica, Technische Universiteit Delft, The Netherlands
, 1996
Abstract
In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm. Key words: hyperbolic programming, pivoting, criss-cross method. 1 Introduction. The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation type functions (see for instance [12]). The objective function of the hyperbolic programming problem is neither linear nor convex; however, there are several ...
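Besides pivoting directly on the fractional objective, a hyperbolic program over a polyhedron can also be reduced to an ordinary LP by the classic Charnes-Cooper substitution. The sketch below shows that reduction under the assumptions that the feasible set is {Ax ≤ b, x ≥ 0} and the denominator stays positive on it; `scipy` is used here only as a convenient LP back end, not as anything from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_hyperbolic(c, alpha, d, beta, A, b):
    """Maximize (c.x + alpha) / (d.x + beta) over {Ax <= b, x >= 0},
    assuming d.x + beta > 0 on the feasible set, via the
    Charnes-Cooper substitution y = t*x, t = 1/(d.x + beta)."""
    n = len(c)
    # LP variables (y_1..y_n, t); linprog minimizes, so negate.
    obj = np.concatenate([-np.asarray(c, float), [-alpha]])
    A_ub = np.hstack([A, -np.asarray(b, float).reshape(-1, 1)])  # A y - b t <= 0
    b_ub = np.zeros(A.shape[0])
    A_eq = np.concatenate([np.asarray(d, float), [beta]]).reshape(1, -1)
    b_eq = [1.0]                                                 # d.y + beta t = 1
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    y, t = res.x[:n], res.x[n]
    return y / t                                                 # recover x

# toy instance: maximize x1 / (x1 + 1) subject to x1 <= 3, x1 >= 0
x = solve_hyperbolic(c=[1.0], alpha=0.0, d=[1.0], beta=1.0,
                     A=np.array([[1.0]]), b=[3.0])
```

Since the objective is increasing in x1 here, the optimum sits at the constraint boundary x1 = 3.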
Criss-Cross Pivoting Rules
Abstract
Assuming that the reader is familiar with both the primal and dual simplex methods, Zionts' criss-cross method can easily be explained.
• It can be initialized by any, possibly both primal and dual infeasible, basis. If the basis is optimal, we are done. If the basis is not optimal, then there are some primal or dual infeasible variables. One might choose any of these. It is advised to choose alternately a primal and then a dual infeasible variable, if possible.
• If the selected variable is dual infeasible, then it enters the basis and the leaving variable is chosen among the primal feasible variables in such a way that primal feasibility of the currently primal feasible variables is preserved. If no such basis exchange is possible, another infeasible variable is selected.
• If the selected variable is primal infeasible, then it leaves the basis and the entering variable is chosen among the ...
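A criss-cross iteration of this kind can be sketched compactly in code. The version below is a minimal sketch of the finite least-index variant (as in Terlaky's rule) rather than Zionts' alternating selection described above; it shares the key feature that the starting basis may be both primal and dual infeasible. It recomputes the tableau from scratch each iteration for clarity, which a real implementation would not do.

```python
import numpy as np

def criss_cross(A, b, c, basis):
    """Least-index criss-cross sketch for  min c.x  s.t.  Ax = b, x >= 0.

    `basis` is any list of m column indices with A[:, basis] nonsingular;
    no primal or dual feasibility is required to start.  Returns
    (status, x) with status in {"optimal", "infeasible", "unbounded"}.
    """
    m, n = A.shape
    basis = list(basis)
    while True:
        nonbasis = [j for j in range(n) if j not in basis]
        Binv = np.linalg.inv(A[:, basis])
        xb = Binv @ b                              # basic variable values
        red = c - A.T @ (Binv.T @ c[basis])        # reduced costs
        # smallest-index primal or dual infeasible variable
        cand = [i for i in basis if xb[basis.index(i)] < -1e-9] + \
               [j for j in nonbasis if red[j] < -1e-9]
        if not cand:
            x = np.zeros(n)
            x[basis] = xb
            return "optimal", x
        k = min(cand)
        T = Binv @ A                               # current tableau
        if k in basis:                             # primal infeasible row
            r = basis.index(k)
            enter = [j for j in nonbasis if T[r, j] < -1e-9]
            if not enter:
                return "infeasible", None
            basis[r] = min(enter)                  # least-index entering
        else:                                      # dual infeasible column
            leave = [i for i in basis if T[basis.index(i), k] > 1e-9]
            if not leave:
                return "unbounded", None
            basis[basis.index(min(leave))] = k     # least-index leaving
```

On the toy problem min −x1 − x2 subject to x1 + x2 + x3 = 2, x ≥ 0, started from the slack basis, a single pivot reaches the optimum x = (2, 0, 0).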
Principal Pivoting Methods For Linear Complementarity Problems, PCPLCP
Abstract
...timization problem

min { cᵀx + ½ xᵀQx : Ax ≥ b, x ≥ 0 },

where Q is a positive semidefinite, symmetric matrix, then

M = (  0   A )      and      q = ( −b )
    ( −Aᵀ  Q )                   (  c )

Here M is a positive semidefinite bisymmetric matrix. Bisymmetry means that the matrix has a block structure, and it is the sum of a symmetric block diagonal positive semidefinite matrix and a skew-symmetric matrix. Some other classes of solvable LCPs are problems where M is a
• P matrix;
• sufficient matrix or, equivalently, a P ...
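The block construction above is easy to carry out numerically, and the bisymmetric structure can be verified by splitting M into its symmetric and skew-symmetric parts. A minimal sketch with made-up toy data:

```python
import numpy as np

def qp_to_lcp(Q, A, b, c):
    """Build LCP data (M, q) for  min { c.x + 0.5 x.Q.x : Ax >= b, x >= 0 }
    using the block construction above."""
    m, n = A.shape
    M = np.block([[np.zeros((m, m)), A],
                  [-A.T,             Q]])
    q = np.concatenate([-b, c])
    return M, q

# toy QP data (assumed for illustration)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
M, q = qp_to_lcp(Q, A, b=np.array([1.0]), c=np.array([1.0, 1.0]))

# bisymmetry: symmetric part is block diagonal diag(0, Q) and PSD,
# the remainder is skew-symmetric
sym = (M + M.T) / 2
skew = (M - M.T) / 2
```

Since the symmetric part equals diag(0, Q), M is positive semidefinite exactly when Q is.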
Polynomial Affine-Scaling Algorithms for P*(κ) Linear Complementarity Problems
, 1997
Abstract
A family of primal-dual affine-scaling algorithms is presented for Linear Complementarity Problems (LCPs) with P*(κ) matrices. These algorithms were first introduced by Jansen et al. for solving linear optimization problems and later also applied to LCPs with positive semidefinite matrices. We show that the same algorithmic concept applies to LCPs with P*(κ) matrices and that the resulting algorithms admit polynomial-time iteration bounds. Key words: linear complementarity problems, P*(κ) matrices, affine-scaling algorithms. 1 Introduction. In this paper we deal with a class of algorithms for solving the linear complementarity problem (LCP):

−Mx + s = q,  x ≥ 0,  s ≥ 0,  xs = 0,   (1)

where M is an n × n real matrix, q ∈ ℝⁿ, and xs denotes the componentwise product of the unknown vectors x and s. We say that an algorithm solves (1) if, for given M and q, it either gives vectors x and s satisfying (1) or decides that no such vectors exist. The known methods for deali...
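The conditions (1) are straightforward to check for a candidate pair (x, s), which is useful when validating any LCP solver's output. A minimal sketch, with toy data chosen for illustration:

```python
import numpy as np

def is_lcp_solution(M, q, x, s, tol=1e-9):
    """Check conditions (1):  -Mx + s = q,  x >= 0,  s >= 0,  xs = 0,
    where xs is the componentwise product."""
    return bool(np.allclose(-M @ x + s, q, atol=tol)
                and np.all(x >= -tol)
                and np.all(s >= -tol)
                and np.all(np.abs(x * s) <= tol))

# toy data: M = 2I, q = (-1, 2); then x = (0.5, 0), s = (0, 2)
# satisfies s = q + Mx with x and s complementary
M = 2.0 * np.eye(2)
q = np.array([-1.0, 2.0])
x = np.array([0.5, 0.0])
s = np.array([0.0, 2.0])
```

Note that a pair may satisfy the linear equation and the sign constraints yet fail complementarity; the componentwise product test catches that case.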