Results 1-10 of 10
Some Generalizations Of The Criss-Cross Method For Quadratic Programming
 Math. Oper. und Stat. Ser. Optimization
, 1992
Abstract

Cited by 13 (8 self)
Three generalizations of the criss-cross method for quadratic programming are presented here. Tucker's, Cottle's and Dantzig's principal pivoting methods are specialized as diagonal and exchange pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss-cross method, based on least-index resolution, is constructed for solving the LCP. In proving finiteness, orthogonality properties of pivot tableaus and positive semidefiniteness of quadratic matrices are used. In the last section some special cases and two further variants of the quadratic criss-cross method are discussed. If the matrix of the LCP has full rank, then a surprisingly simple algorithm follows, which coincides with Murty's 'Bard type schema' in the P-matrix case.
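The linear complementarity problem mentioned in this abstract arises from a convex quadratic program via the standard KKT construction. The following Python sketch (the function name and sample data are illustrative, not from the paper) forms the LCP data M and q from the QP min (1/2)x'Qx + c'x subject to Ax >= b, x >= 0, and checks the positive semidefiniteness the finiteness proof relies on:

```python
import numpy as np

def qp_to_lcp(Q, c, A, b):
    """Build the LCP data (M, q) for w = M z + q, w, z >= 0, w'z = 0,
    from the convex QP  min (1/2) x'Qx + c'x  s.t.  Ax >= b, x >= 0.
    Standard KKT construction; z stacks the primal x and the multipliers y."""
    m, _ = A.shape
    M = np.block([[Q, -A.T],
                  [A, np.zeros((m, m))]])
    q = np.concatenate([c, -b])
    return M, q

# Illustrative data (not from the paper): Q positive semidefinite.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
M, q = qp_to_lcp(Q, c, A, b)

# M is positive semidefinite: the skew-symmetric off-diagonal blocks cancel
# in z'Mz, leaving x'Qx >= 0.
z = np.array([0.3, -1.2, 0.7])
assert z @ M @ z >= -1e-12
```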
A Survey on Pivot Rules for Linear Programming
 Annals of Operations Research (submitted)
, 1991
Abstract

Cited by 9 (1 self)
The purpose of this paper is to survey the various pivot rules of the simplex method and its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness property of simplex-type pivot rules. There are some other important topics in linear programming, e.g. complexity theory or implementations, that are beyond the scope of this paper. We do not discuss ellipsoid methods or interior point methods. Well-known classical results concerning the simplex method are also not discussed in detail in this survey, but the connections between the new methods and the classical ones are discussed where they exist. In this paper we discuss three classes of recently developed pivot rules for linear programming. The first (and largest) class of pivot rules we discuss is the class of essentially combinatorial pivot rules; namely, these rules only use labeling and signs of the variab...
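Bland's minimal index rule, the starting point of the survey, is simple enough to state in a few lines of code. This Python fragment is an illustrative sketch of the entering-variable choice only; the function name and sign convention are assumptions, not taken from the paper:

```python
def bland_entering(reduced_costs, tol=1e-9):
    """Bland's minimal index rule: among all nonbasic variables with a
    negative reduced cost, pick the one with the smallest index.
    Returns None when no candidate exists (the current basis is optimal)."""
    for j, c in enumerate(reduced_costs):
        if c < -tol:
            return j
    return None

assert bland_entering([0.0, -1.0, -3.0]) == 1  # smallest improving index wins
assert bland_entering([0.5, 0.0]) is None      # no negative cost: optimal
```

The same least-index idea reappears below as the "least-index resolution" used to prove finiteness of the criss-cross method.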
On the Finiteness of the Criss-Cross Method
 European Journal of Operational Research
, 1989
Abstract

Cited by 6 (2 self)
In this short paper, we prove the finiteness of the criss-cross method by showing that a certain binary number of bounded digits associated with each iteration increases monotonically. This new proof immediately suggests the possibility of relaxing the pivot selection in the criss-cross method without sacrificing finiteness. Key Words: linear programming, simplex method, finite pivoting rules. 1 The Criss-Cross Method. Let A be an m × n matrix, let E be the index set of columns of A, and let f and g be two distinct members of E. Here we consider the standard form linear program:

(P)  maximize   x_f                                (1.1)
     subject to  A x = 0,                          (1.2)
                 x_g = 1,                          (1.3)
                 x_j >= 0 for all j in E \ {f, g}. (1.4)

A vector x is said to be feasible if it satisfies constraints (1.2), (1.3), and (1.4). If a linear program has a feasible solution, then it is called feasible; otherwise it is called infeasible. For any linear program, we will refer to the following three situations as characters: ...
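The feasibility condition for program (P) above can be checked mechanically; a minimal Python sketch, with a made-up constraint matrix A for illustration (the function name is ours, not the paper's):

```python
import numpy as np

def is_feasible(A, x, f, g, tol=1e-9):
    """Check feasibility for program (P): Ax = 0, x_g = 1, and
    x_j >= 0 for every index j outside {f, g} (x_f is free)."""
    if not np.allclose(A @ x, 0.0, atol=tol):
        return False           # violates (1.2)
    if abs(x[g] - 1.0) > tol:
        return False           # violates (1.3)
    return all(x[j] >= -tol for j in range(len(x)) if j not in (f, g))

# Illustrative data: one constraint x_0 - x_1 = 0, with f = 0, g = 2.
A = np.array([[1.0, -1.0, 0.0]])
assert is_feasible(A, np.array([2.0, 2.0, 1.0]), f=0, g=2)
assert not is_feasible(A, np.array([2.0, 2.0, 0.5]), f=0, g=2)  # x_g != 1
```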
The Role of Pivoting in Proving Some Fundamental Theorems of Linear Algebra
 Linear Algebra and Its Applications 151
, 1991
Abstract

Cited by 5 (1 self)
This paper contains a new approach to some classical theorems of linear algebra (Steinitz, matrix rank, Rouché-Kronecker-Capelli, Farkas, Weyl, Minkowski). The constructive proofs are based on pivoting. Defining pivoting in a more general way, using generating tableaux, made it possible to give a new proof of Steinitz's theorem as well. Our pivot selection strategies are based essentially on Bland's [2] minimal index rule. The famous theorems of Farkas, Weyl and Minkowski are proved by using pivot tableaus. Theorem 4.1 is essentially a new, very simple form of the alternative theorem of linear inequalities, and its proof is a pretty application of the minimal index rule. One can apply this theorem and its proof to combinatorial structures (for example to oriented matroids) as well (Klafszky-Terlaky [9]). The presented algorithms are mostly not efficient computationally (see e.g. Roos [13] for an exponential example), but they are surprisingly simple. We will use the symbols 0, +, −, ⊕, ⊖ introduced by Balinski-Tucker [1], which denote zero, positive, negative, nonnegative and nonpositive numbers respectively. On the other hand, Gale's [7] notation will be used, so matrices and vectors are denoted by capital and small Latin letters and their components are denoted by the corresponding Greek letters. Index sets are denoted by I and J (with proper subscripts), and the cardinality of an index set J is denoted by ||J||.
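The single pivot step underlying these tableau proofs is the classical Gauss-Jordan exchange; the following Python sketch is our own minimal rendering, not code from the paper:

```python
import numpy as np

def pivot(T, r, s):
    """One Gauss-Jordan pivot on tableau T at row r, column s:
    scale row r so T[r, s] becomes 1, then eliminate column s elsewhere."""
    T = T.astype(float).copy()
    p = T[r, s]
    assert p != 0, "pivot element must be nonzero"
    T[r] /= p
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]
    return T

T = np.array([[2.0, 1.0],
              [4.0, 3.0]])
T1 = pivot(T, 0, 0)
# After the pivot, column 0 of the tableau is the unit vector e_0.
assert np.allclose(T1[:, 0], [1.0, 0.0])
```

Repeating this step with a suitable selection rule (e.g. Bland's minimal index rule, as the abstract notes) is what drives the constructive proofs.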
New Variants Of Finite Criss-Cross Pivot Algorithms For Linear Programming
, 1997
Abstract

Cited by 2 (0 self)
In this paper we generalize the so-called first-in-last-out pivot rule and the most-often-selected-variable pivot rule for the simplex method, as proposed in Zhang [13], to the criss-cross pivot setting, where neither primal nor dual feasibility is preserved. The finiteness of the new criss-cross pivot variants is proven.
Edmonds-Fukuda Rule And A General Recursion For Quadratic Programming
Abstract
A general framework of finite algorithms is presented here for quadratic programming. This algorithm is a direct generalization of Van der Heyden's algorithm for the linear complementarity problem and Jensen's 'relaxed recursive algorithm', which was proposed for the solution of oriented matroid programming problems. The validity of this algorithm is proved in the same way as the finiteness of the criss-cross method. The second part of this paper contains a generalization of the Edmonds-Fukuda pivoting rule for quadratic programming. This generalization can be considered as a finite version of the Van de Panne-Whinston algorithm, and so it is a simplex method for quadratic programming. These algorithms use general combinatorial ideas, so the same methods can be applied to oriented matroids as well. The generalization of these methods to oriented matroids is the subject of another paper.
Criss-Cross Pivoting Rules
Abstract
Assuming that the reader is familiar with both the primal and dual simplex methods, Zionts' criss-cross method can easily be explained.
- It can be initialized by any basis, possibly one that is both primal and dual infeasible. If the basis is optimal, we are done. If the basis is not optimal, then there are some primal or dual infeasible variables. One might choose any of these; it is advised to choose alternately a primal and a dual infeasible variable, if possible.
- If the selected variable is dual infeasible, then it enters the basis, and the leaving variable is chosen among the primal feasible variables in such a way that primal feasibility of the currently primal feasible variables is preserved. If no such basis exchange is possible, another infeasible variable is selected.
- If the selected variable is primal infeasible, then it leaves the basis and the entering variable is chosen among th...
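The selection advice in the first step above (alternate between a primal and a dual infeasible variable when possible) can be sketched as follows; the function and its interface are illustrative assumptions, not Zionts' original formulation:

```python
def select_variable(primal_infeasible, dual_infeasible, last_choice):
    """Pick the next variable to work on, alternating between the primal
    and dual infeasible kinds when both are available; the lowest index
    is taken within a kind. Returns None when the basis is optimal."""
    if not primal_infeasible and not dual_infeasible:
        return None  # no infeasibility left: optimal basis
    if primal_infeasible and (last_choice == 'dual' or not dual_infeasible):
        return ('primal', min(primal_infeasible))
    return ('dual', min(dual_infeasible))

assert select_variable({3, 5}, {2}, 'dual') == ('primal', 3)   # alternate
assert select_variable(set(), {2, 4}, 'primal') == ('dual', 2)
assert select_variable(set(), set(), 'primal') is None         # optimal
```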
Recollections on the discovery of the reverse search technique
Abstract
Komei Fukuda and I discovered the idea for reverse search during conversations in Tokyo in October 1990. We were working on the vertex enumeration problem for convex polyhedra. At the time I was visiting Masao Iri at the University of Tokyo and Masakazu Kojima at Tokyo Institute of Technology, supported by a JSPS/NSERC bilateral exchange. Komei was then working at the University of Tsukuba, Otsuka, which was a couple of subway stations from my office, so we met quite often. One day Komei visited me at my office in Todai and explained to me the criss-cross method for solving linear programs, independently developed by S. Zionts [11], T. Terlaky [8, 9] and Z. Wang [10]. In this method one pivots in the hyperplane arrangement generated by the constraints of the linear program, without regard for feasibility, thus differentiating it from the simplex method. Komei and Tomomi Matsui, a Ph.D. student at Tokyo Institute of Technology, had developed an elegant new proof of the convergence of the criss-cross method [6], which Komei was explaining to me. Komei had drawn a line arrangement on the blackboard, along with the path the criss-cross method would take from any given vertex to the optimum vertex of the LP. On the board all of these edges were shown in yellow with directions that eventually...