Results 11 - 16 of 16
Combinatorial Maximum Improvement Algorithm for LP and LCP
, 1995
Abstract

Cited by 1 (1 self)
this paper, we show how one can design new pivot algorithms for solving the LP and the LCP. In particular, we are interested in combinatorial pivot algorithms which solve the LP and a certain class of LCP's. Here, a pivot algorithm is called combinatorial if the pivot choice depends only on the signs of the entries of the dictionaries. The best source of combinatorial pivot algorithms is the theory of oriented matroid (OM) programming [Bla77a, Edm94, Fuk82, FT92, LL86, Ter87, Tod85, Wan87]. The well-known Bland's pivot rule [Bla77b] for the simplex method can be considered a combinatorial algorithm, but it is not a typical one. The main characteristic of the "OM" algorithms is that feasibility may not be preserved at all in either the primal or the dual problem, and the finiteness of the algorithms is guaranteed by a purely combinatorial improvement argument rather than by reasoning based on the increase of the objective function value. One immediate advantage of combinatorial algorithms is that degeneracy does not have to be treated separately. Thus a very simple combinatorial algorithm, such as the criss-cross method [Ter87, Wan87], solves the general LP correctly and yields one of the simplest proofs of the strong duality theorem. There is a well-noted disadvantage of combinatorial algorithms: the number of pivot operations needed to solve the LP tends to grow rapidly in practice. Furthermore, it is often quite easy to construct a class of LP's for which a given combinatorial algorithm takes a number of pivot operations exponential in the input size. In this paper, we review the finiteness proof of combinatorial algorithms and study a new algorithm in the class. The key ingredients of the new algorithm are "history dependency" and "largest combinatorial improveme...
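To make the idea of a sign-based pivot choice concrete, here is a minimal sketch of the least-index criss-cross method in the spirit of [Ter87, Wan87], for an LP of the form max c^T x subject to Ax <= b, x >= 0. This is an illustrative implementation, not the algorithm proposed in the paper above: the function name, the dictionary layout, and the variable numbering are all assumptions made for this sketch.

```python
from fractions import Fraction

def criss_cross(A, b, c):
    """Least-index criss-cross method for:  max c^T x  s.t.  A x <= b,  x >= 0.

    Maintains the dictionary  x_B = bbar - Abar * x_N,  z = z0 + cbar * x_N.
    Decision variables are numbered 1..n, slack variables n+1..n+m.
    Exact rational arithmetic keeps the sign tests reliable.
    Note that the pivot choice below looks only at SIGNS of dictionary
    entries, never at feasibility or the objective value.
    """
    m, n = len(A), len(c)
    Abar = [[Fraction(v) for v in row] for row in A]
    bbar = [Fraction(v) for v in b]
    cbar = [Fraction(v) for v in c]
    z0 = Fraction(0)
    basis = list(range(n + 1, n + m + 1))   # slacks start out basic
    nonbasic = list(range(1, n + 1))

    while True:
        # smallest-index infeasible variable: basic with bbar < 0 (primal
        # infeasible) or nonbasic with cbar > 0 (dual infeasible)
        cand = [(basis[i], 'basic', i) for i in range(m) if bbar[i] < 0]
        cand += [(nonbasic[j], 'nonbasic', j) for j in range(n) if cbar[j] > 0]
        if not cand:                         # dictionary is optimal
            return z0, {basis[i]: bbar[i] for i in range(m)}
        _, kind, idx = min(cand)
        if kind == 'basic':                  # choose entering var by least index
            r = idx
            cols = [j for j in range(n) if Abar[r][j] < 0]
            if not cols:
                raise ValueError('LP is infeasible')
            s = min(cols, key=lambda j: nonbasic[j])
        else:                                # choose leaving var by least index
            s = idx
            rows = [i for i in range(m) if Abar[i][s] > 0]
            if not rows:
                raise ValueError('LP is unbounded')
            r = min(rows, key=lambda i: basis[i])

        # pivot: exchange basis[r] and nonbasic[s]
        a = Abar[r][s]
        bbar[r] /= a
        for j in range(n):
            if j != s:
                Abar[r][j] /= a
        Abar[r][s] = Fraction(1) / a
        for i in range(m):
            if i == r:
                continue
            f = Abar[i][s]
            bbar[i] -= f * bbar[r]
            for j in range(n):
                if j != s:
                    Abar[i][j] -= f * Abar[r][j]
            Abar[i][s] = -f / a
        cs = cbar[s]
        z0 += cs * bbar[r]
        for j in range(n):
            if j != s:
                cbar[j] -= cs * Abar[r][j]
        cbar[s] = -cs / a
        basis[r], nonbasic[s] = nonbasic[s], basis[r]
```

For example, `criss_cross([[1, 2], [3, 1]], [4, 6], [1, 1])` passes through an intermediate dictionary that is primal infeasible before reaching the optimum, illustrating that feasibility is not preserved along the way.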
Edmonds-Fukuda Rule And A General Recursion For Quadratic Programming
Abstract
A general framework of finite algorithms is presented here for quadratic programming. This algorithm is a direct generalization of Van der Heyden's algorithm for the linear complementarity problem and Jensen's `relaxed recursive algorithm', which was proposed for the solution of Oriented Matroid programming problems. The validity of this algorithm is proved in the same way as the finiteness of the criss-cross method. The second part of this paper contains a generalization of the Edmonds-Fukuda pivoting rule for quadratic programming. This generalization can be considered a finite version of the Van de Panne-Whinston algorithm, and so it is a simplex method for quadratic programming. These algorithms use general combinatorial ideas, so the same methods can be applied to oriented matroids as well. The generalization of these methods to oriented matroids is the subject of another paper.
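For orientation, the linear complementarity problem (LCP) addressed by Van der Heyden's algorithm can be stated in its standard form (the notation $M$, $q$, $w$, $z$ is conventional, not taken from the abstract):

```latex
\text{Given } M \in \mathbb{R}^{n \times n} \text{ and } q \in \mathbb{R}^{n},
\text{ find } w, z \in \mathbb{R}^{n} \text{ such that}
\qquad w = M z + q, \qquad w \ge 0, \quad z \ge 0, \qquad w^{\mathsf{T}} z = 0.
```

The KKT optimality conditions of a quadratic program take exactly this complementarity form, which is why LCP pivot algorithms extend naturally to quadratic programming.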
The Finite Criss-Cross Method for Hyperbolic Programming
 Informatica, Technische Universiteit Delft, The Netherlands
, 1996
Abstract
In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm. Key words: hyperbolic programming, pivoting, criss-cross method. The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation type functions (see for instance [12]). The objective function of the hyperbolic programming problem is neither linear nor convex; however, there are several ...
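A standard formulation of the hyperbolic programming problem described above reads as follows (the symbols $c$, $d$, $\alpha$, $\beta$ are conventional notation assumed for this sketch, not taken from the abstract):

```latex
\min_{x} \; \frac{c^{\mathsf{T}} x + \alpha}{d^{\mathsf{T}} x + \beta}
\qquad \text{subject to} \qquad A x \le b, \quad x \ge 0,
```

where the denominator $d^{\mathsf{T}} x + \beta$ is assumed to be positive on the feasible region; with $d = 0$, $\beta = 1$ the problem reduces to ordinary linear programming.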
Criss-Cross Pivoting Rules
Abstract
Assuming that the reader is familiar with both the primal and dual simplex methods, Zionts' criss-cross method can easily be explained.
• It can be initialized by any, possibly both primal and dual infeasible, basis. If the basis is optimal, we are done. If the basis is not optimal, then there are some primal or dual infeasible variables. One might choose any of these. It is advised to choose alternately a primal and then a dual infeasible variable, if possible.
• If the selected variable is dual infeasible, then it enters the basis and the leaving variable is chosen among the primal feasible variables in such a way that primal feasibility of the currently primal feasible variables is preserved. If no such basis exchange is possible, another infeasible variable is selected.
• If the selected variable is primal infeasible, then it leaves the basis and the entering variable is chosen among th
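The alternating variable-selection advice above can be sketched as a small helper. The function name and the least-index tie-break are assumptions made for this sketch; Zionts' rule permits any choice among the infeasible variables.

```python
def choose_infeasible(primal_inf, dual_inf, last_kind):
    """Pick the next infeasible variable for a criss-cross step.

    primal_inf / dual_inf: indices of currently primal-/dual-infeasible
    variables.  Alternate between the two kinds whenever possible, as
    advised above; the least-index tie-break is an arbitrary choice.
    Returns a (kind, variable index) pair, or ('optimal', None) when no
    infeasible variable remains.
    """
    if last_kind == 'primal' and dual_inf:
        return 'dual', min(dual_inf)
    if last_kind == 'dual' and primal_inf:
        return 'primal', min(primal_inf)
    if primal_inf:
        return 'primal', min(primal_inf)
    if dual_inf:
        return 'dual', min(dual_inf)
    return 'optimal', None                  # no infeasibility left: done
```

Only when one kind of infeasibility has vanished does the rule fall back to picking from the other, so the method drifts toward a basis that is both primal and dual feasible, i.e. optimal.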
On Circuit Valuation of Matroids
, 2000
Abstract
The concept of valuated matroids was introduced by Dress and Wenzel as a quantitative extension of the base exchange axiom for matroids. This paper gives several sets of cryptomorphically equivalent axioms for valuated matroids in terms of (R ∪ {−∞})-valued vectors defined on the circuits of the underlying matroid, where R is a totally ordered additive group. The dual of a valuated matroid is characterized by an orthogonality of (R ∪ {−∞})-valued vectors on circuits. Minty's characterization of matroids by the painting property is generalized to valuated matroids.
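For reference, the Dress-Wenzel quantitative base exchange axiom mentioned above is standardly stated as follows (the symbol $\omega$ for the valuation is conventional notation, not taken from the abstract): a valuated matroid assigns a value $\omega(B) \in R$ to each base $B$ such that

```latex
\forall B, B' \text{ bases},\ \forall u \in B \setminus B',\ \exists v \in B' \setminus B:
\qquad
\omega(B) + \omega(B')
\;\le\;
\omega\bigl((B \setminus \{u\}) \cup \{v\}\bigr)
+ \omega\bigl((B' \setminus \{v\}) \cup \{u\}\bigr).
```

Setting $\omega \equiv 0$ recovers the ordinary base exchange axiom of matroids.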
Recollections on the discovery of the reverse search technique
Abstract
Komei Fukuda and I discovered the idea for reverse search during conversations in Tokyo in October 1990. We were working on the vertex enumeration problem for convex polyhedra. At the time I was visiting Masao Iri at the University of Tokyo and Masakazu Kojima at the Tokyo Institute of Technology, supported by a JSPS/NSERC bilateral exchange. Komei was then working at the University of Tsukuba, Otsuka, which was a couple of subway stations from my office, so we met quite often. One day Komei visited me at my office in Todai and explained to me the criss-cross method for solving linear programs, independently developed by S. Zionts [11], T. Terlaky [8, 9] and Z. Wang [10]. In this method one pivots in the hyperplane arrangement generated by the constraints of the linear program, without regard for feasibility, thus differentiating it from the simplex method. Komei and Tomomi Matsui, a Ph.D. student at the Tokyo Institute of Technology, had developed an elegant new proof of the convergence of the criss-cross method [6], which Komei was explaining to me. Komei had drawn a line arrangement on the blackboard, along with the path the criss-cross method would take from any given vertex to the optimum vertex of the LP. On the board all of these edges were shown in yellow with directions that eventually