Results 11 – 18 of 18
Basis and Tripartition Identification for Quadratic Programming and Linear Complementarity Problems: From an interior solution to an optimal basis and vice versa
, 1996
Abstract

Cited by 3 (1 self)
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximal complementary solutions. Maximal complementary solutions can be characterized by optimal (tri)partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A tripartition identification algorithm is an algorithm which generates a maximal complementary solution (and its corresponding tripartition), starting from any complementary solution. In linear programming, such algorithms were proposed by Megiddo in 1991 and by Balinski and Tucker in 1969, respectively. In this paper we present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal...
The Finite Criss-Cross Method for Hyperbolic Programming
 INFORMATICA, TECHNISCHE UNIVERSITEIT DELFT, THE NETHERLANDS
, 1996
Abstract

Cited by 3 (0 self)
In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any basic solution, not necessarily a feasible one. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm.
Finite Pivot Algorithms and Feasibility
, 2001
Abstract

Cited by 2 (0 self)
This thesis studies the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b'rule: a simple algorithm based on Bland's smallest-index rule for solving systems of linear inequalities (feasibility of linear programs). We prove that the b'rule is finite, from which we then prove Farkas' Lemma, the Duality Theorem for Linear Programming, and the Fundamental Theorem of Linear Inequalities. We present experimental results that compare the speed of the b'rule to the classical methods.
Criss-Cross Pivoting Rules
Abstract

Cited by 2 (0 self)
Assuming that the reader is familiar with both the primal and dual simplex methods, Zionts' criss-cross method can easily be explained.
- It can be initialized by any basis, possibly both primal and dual infeasible. If the basis is optimal, we are done. If the basis is not optimal, then there are some primal or dual infeasible variables. One might choose any of these; it is advised to alternately choose a primal and a dual infeasible variable, if possible.
- If the selected variable is dual infeasible, then it enters the basis, and the leaving variable is chosen among the primal feasible variables in such a way that primal feasibility of the currently primal feasible variables is preserved. If no such basis exchange is possible, another infeasible variable is selected.
- If the selected variable is primal infeasible, then it leaves the basis, and the entering variable is chosen among the ...
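The steps above describe Zionts' original alternating rule. As an illustration of the same pivoting framework, here is a minimal sketch of the finite least-index variant (Terlaky's rule) in tableau form for max c·x subject to Ax ≤ b, x ≥ 0, starting from the possibly infeasible slack basis. The function name, tableau layout, and tolerances are our own; this is a didactic sketch, not a robust solver.

```python
import numpy as np

def criss_cross(A, b, c, tol=1e-9):
    """Least-index criss-cross method for: max c.x  s.t.  A x <= b, x >= 0.
    Returns (status, x) with status in {'optimal', 'infeasible', 'unbounded'}."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    m, n = A.shape
    N = n + m
    # Tableau rows encode x_B = rhs - T[:, :N] @ x_N; z-row holds reduced costs.
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    z = np.concatenate([c, np.zeros(m + 1)])
    basis = list(range(n, N))                      # slack variables start basic
    while True:
        row_of = {v: i for i, v in enumerate(basis)}
        # Smallest-index infeasible variable: basic with negative value
        # (primal infeasible) or nonbasic with positive reduced cost (dual).
        k = next((v for v in range(N)
                  if (v in row_of and T[row_of[v], -1] < -tol)
                  or (v not in row_of and z[v] > tol)), None)
        if k is None:                              # primal and dual feasible
            x = np.zeros(N)
            for i, v in enumerate(basis):
                x[v] = T[i, -1]
            return 'optimal', x[:n]
        if k in row_of:                            # k leaves; pick entering var
            r = row_of[k]
            s = next((j for j in range(N)
                      if j not in row_of and T[r, j] < -tol), None)
            if s is None:                          # row >= 0 but rhs < 0
                return 'infeasible', None
        else:                                      # k enters; pick leaving row
            s = k
            rows = [i for i in range(m) if T[i, s] > tol]
            if not rows:                           # column <= 0, cost > 0
                return 'unbounded', None
            r = min(rows, key=lambda i: basis[i])  # smallest basic index
        T[r] /= T[r, s]                            # standard pivot on (r, s)
        for i in range(m):
            if i != r:
                T[i] -= T[i, s] * T[r]
        z -= z[s] * T[r]
        basis[r] = s
```

For example, `criss_cross([[1, 1], [1, 3]], [4, 6], [3, 2])` reaches the optimum x = (4, 0) in one pivot; no feasibility is maintained along the way, only the least-index selection guarantees finiteness.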
Combinatorial Maximum Improvement Algorithm for LP and LCP
, 1995
Abstract

Cited by 1 (1 self)
In this paper, we show how one can design new pivot algorithms for solving the LP and the LCP. In particular, we are interested in combinatorial pivot algorithms which solve the LP and a certain class of LCPs. Here, a pivot algorithm is called combinatorial if the pivot choice depends only on the signs of the entries of the dictionaries. The best source of combinatorial pivot algorithms is the theory of oriented matroid (OM) programming [Bla77a, Edm94, Fuk82, FT92, LL86, Ter87, Tod85, Wan87]. The well-known Bland's pivot rule [Bla77b] for the simplex method can be considered a combinatorial algorithm, but it is not a typical one. The main characteristic of the "OM" algorithms is that feasibility may not be preserved at all in either the primal or the dual problem, and the finiteness of the algorithms is guaranteed by a purely combinatorial improvement argument rather than by reasoning based on the increase of the objective function value. One immediate advantage of combinatorial algorithms is that degeneracy does not have to be treated separately. Thus a very simple combinatorial algorithm, such as the criss-cross method [Ter87, Wan87], solves the general LP correctly and yields one of the simplest proofs of the strong duality theorem. There is a well-noted disadvantage of combinatorial algorithms: the number of pivot operations needed to solve the LP tends to grow rapidly in practice. Furthermore, it is often quite easy to construct a class of LPs for which a given combinatorial algorithm takes an exponential number of pivot operations in the input size. In this paper, we review the finiteness proof of combinatorial algorithms and study a new algorithm in the class. The key ingredients of the new algorithm are "history dependency" and "largest combinatorial improveme...
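As a concrete illustration of index-driven pivot selection, here is a minimal sketch of Bland's smallest-index rule for a maximization dictionary. As the abstract notes, Bland's rule is not a typical combinatorial rule: the entering choice uses only signs, but the leaving choice still involves a ratio test on values. The function names are our own.

```python
def bland_entering(reduced_costs, tol=1e-9):
    """Smallest-index nonbasic variable with positive reduced cost (max LP).
    Returns None when the dictionary is dual feasible, i.e. optimal."""
    for j, cj in enumerate(reduced_costs):
        if cj > tol:
            return j
    return None

def bland_leaving(column, rhs, basis, tol=1e-9):
    """Minimum-ratio row for the entering column; ratio ties are broken by the
    smallest basic-variable index, which is what makes the rule finite.
    Returns None when the column is nonpositive (the LP is unbounded)."""
    best = None                       # (ratio, basic-variable index, row)
    for i, (a, bi) in enumerate(zip(column, rhs)):
        if a > tol:
            cand = (bi / a, basis[i], i)
            if best is None or cand[:2] < best[:2]:
                best = cand
    return None if best is None else best[2]
```

For instance, with reduced costs (-1, 2, 3) the entering variable is index 1, even though index 2 promises a larger per-unit gain; this index discipline, not objective improvement, is what rules out cycling.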
Recollections on the discovery of the reverse search technique
Abstract
Komei Fukuda and I discovered the idea for reverse search during conversations in Tokyo in October 1990. We were working on the vertex enumeration problem for convex polyhedra. At the time I was visiting Masao Iri at the University of Tokyo and Masakazu Kojima at Tokyo Institute of Technology, supported by a JSPS/NSERC bilateral exchange. Komei was then working at the University of Tsukuba, Otsuka, which was a couple of subway stations from my office, so we met quite often. One day Komei visited me at my office in Todai and explained to me the criss-cross method for solving linear programs, independently developed by S. Zionts [11], T. Terlaky [8, 9] and Z. Wang [10]. In this method one pivots in the hyperplane arrangement generated by the constraints of the linear program without regard for feasibility, thus differentiating it from the simplex method. Komei and Tomomi Matsui, a Ph.D. student at Tokyo Institute of Technology, had developed an elegant new proof of the convergence of the criss-cross method [6], which Komei was explaining to me. Komei had drawn a line arrangement on the blackboard, along with the path the criss-cross method would take from any given vertex to the optimum vertex of the LP. On the board all of these edges were shown in yellow, with directions that eventually
Edmonds-Fukuda Rule and a General Recursion for Quadratic Programming
Abstract
A general framework of finite algorithms is presented here for quadratic programming. This algorithm is a direct generalization of Van der Heyden's algorithm for the linear complementarity problem and of Jensen's 'relaxed recursive algorithm', which was proposed for the solution of oriented matroid programming problems. The validity of this algorithm is proved in the same way as the finiteness of the criss-cross method. The second part of this paper contains a generalization of the Edmonds-Fukuda pivoting rule to quadratic programming. This generalization can be considered a finite version of the Van de Panne-Whinston algorithm, and so it is a simplex method for quadratic programming. These algorithms use general combinatorial ideas, so the same methods can be applied to oriented matroids as well. The generalization of these methods to oriented matroids is the subject of another paper.