Results 1–10 of 10
A Computational View of Interior-Point Methods for Linear Programming
 IN: ADVANCES IN LINEAR AND INTEGER PROGRAMMING
, 1994
Abstract

Cited by 15 (10 self)
Many issues that are crucial for an efficient implementation of an interior point algorithm are addressed in this paper. To start with, a prototype primal-dual algorithm is presented. Next, many tricks that make it so efficient in practice are discussed in detail. These include: preprocessing techniques, initialization approaches, methods of computing search directions (and the linear algebra techniques behind them), centering strategies, and methods of step-size selection. Several causes of numerical difficulties, e.g. primal degeneracy of optimal solutions or the lack of feasible solutions, are explained in a comprehensive way. A motivation for obtaining an optimal basis is given and a practicable algorithm to perform this task is presented. Advantages of different methods to perform postoptimal analysis (applicable to interior point optimal solutions) are discussed. Important questions that still remain open in the implementations of i...
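The prototype primal-dual step the abstract refers to is not reproduced here, but its general shape can be sketched as one damped Newton step on the perturbed KKT conditions, with a fraction-to-boundary step size. A minimal sketch, assuming standard notation: the tiny LP, the function name, and the 0.9 damping factor are illustrative choices, not the paper's.

```python
import numpy as np

def pd_newton_step(A, b, c, x, y, s, mu):
    """One damped Newton step on the perturbed KKT system of
    min c^T x  s.t.  A x = b, x >= 0,  with  x_i * s_i = mu."""
    m, n = A.shape
    r_p = b - A @ x          # primal residual
    r_d = c - A.T @ y - s    # dual residual
    r_c = mu - x * s         # relaxed complementarity residual
    # Full Newton system in (dx, dy, ds); a real code would reduce it.
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(K, np.concatenate([r_p, r_d, r_c]))
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Step-size selection: stay strictly inside the positive orthant.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        if (dv < 0).any():
            alpha = min(alpha, 0.9 * np.min(-v[dv < 0] / dv[dv < 0]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
A = np.array([[1.0, 1.0]])
b, c = np.array([1.0]), np.array([1.0, 2.0])
x, y, s = np.array([0.5, 0.5]), np.zeros(1), np.ones(2)
mu = 1.0
for _ in range(30):
    x, y, s = pd_newton_step(A, b, c, x, y, s, mu)
    mu *= 0.5   # crude centering: shrink the barrier parameter
```

On this toy problem the iterates track the central path toward x = (1, 0); a practical implementation would instead eliminate dx and ds and solve the much smaller normal-equations system, which is one of the linear algebra issues the paper discusses.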
Some Generalizations Of The Criss-Cross Method For Quadratic Programming
 MATH. OPER. UND STAT. SER. OPTIMIZATION
, 1992
Abstract

Cited by 13 (8 self)
Three generalizations of the criss-cross method for quadratic programming are presented here. Tucker's, Cottle's and Dantzig's principal pivoting methods are specialized as diagonal and exchange pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss-cross method, based on least-index resolution, is constructed for solving the LCP. In proving finiteness, orthogonality properties of pivot tableaus and positive semidefiniteness of quadratic matrices are used. In the last section some special cases and two further variants of the quadratic criss-cross method are discussed. If the matrix of the LCP has full rank, then a surprisingly simple algorithm follows, which coincides with Murty's 'Bard-type schema' in the P-matrix case.
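For orientation, the reduction from a convex QP to an LCP that the abstract relies on can be sketched as follows (a textbook formulation, not quoted from the paper). The KKT conditions of

\[
\min\ \tfrac{1}{2}x^{\mathsf T}Qx + c^{\mathsf T}x
\quad\text{s.t.}\quad Ax \ge b,\ x \ge 0,
\]

with \(Q\) positive semidefinite, form the linear complementarity problem

\[
w = Mz + q,\qquad w \ge 0,\ z \ge 0,\ w^{\mathsf T}z = 0,
\qquad
M = \begin{pmatrix} Q & -A^{\mathsf T}\\ A & 0 \end{pmatrix},\quad
q = \begin{pmatrix} c\\ -b \end{pmatrix},\quad
z = \begin{pmatrix} x\\ y \end{pmatrix},
\]

where \(y\) collects the multipliers of \(Ax \ge b\). Positive semidefiniteness of \(Q\) makes \(M\) positive semidefinite, which is exactly the structure the finiteness proof exploits.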
The Theory of Linear Programming: Skew-Symmetric Self-Dual Problems and the Central Path
, 1994
Abstract

Cited by 9 (6 self)
The literature in the field of interior point methods for linear programming has been almost exclusively algorithm oriented. Recently Guler, Roos, Terlaky and Vial presented a complete duality theory for linear programming based on the interior point approach. In this paper we present a simpler approach, based on an embedding of the primal problem and its dual into a skew-symmetric self-dual problem. This embedding is essentially due to Ye, Todd and Mizuno. First we consider a skew-symmetric self-dual linear program. We show that the strong duality theorem trivially holds in this case. Then, using the logarithmic barrier problem and the central path, the existence of a strictly complementary optimal solution is proved. Using the embedding just described, we easily obtain the strong duality theorem and the existence of strictly complementary optimal solutions for general linear programming problems. Key words. Linear programming, interior points, skew-symmetric matrix, self...
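One common way to write such an embedding (sketched here from the standard homogeneous formulation, not copied from the paper): for the inequality-form pair \(\min\{c^{\mathsf T}x : Ax \ge b,\ x \ge 0\}\) and \(\max\{b^{\mathsf T}y : A^{\mathsf T}y \le c,\ y \ge 0\}\), set

\[
M = \begin{pmatrix}
0 & A & -b\\
-A^{\mathsf T} & 0 & c\\
b^{\mathsf T} & -c^{\mathsf T} & 0
\end{pmatrix} = -M^{\mathsf T},
\qquad z = \begin{pmatrix} y\\ x\\ \tau \end{pmatrix},
\]

and solve the self-dual system \(Mz \ge 0\), \(z \ge 0\). A strictly complementary solution with \(\tau > 0\) yields the optimal pair \((x/\tau,\, y/\tau)\); a solution with \(\tau = 0\) certifies that the primal or the dual problem is infeasible.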
Degeneracy in Interior Point Methods for Linear Programming
, 1991
Abstract

Cited by 9 (1 self)
... In this paper, we survey the various theoretical and practical issues related to degeneracy in IPMs for linear programming. We survey results which, for the most part, have already appeared in the literature. Roughly speaking, we deal with four topics: the effect of degeneracy on the convergence of IPMs, on the trajectories followed by the algorithms, on numerical performance, and on finding basic solutions.
The Role of Pivoting in Proving Some Fundamental Theorems of Linear Algebra
 Linear Algebra and Its Applications 151
, 1991
Abstract

Cited by 5 (1 self)
This paper contains a new approach to some classical theorems of linear algebra (Steinitz, matrix rank, Rouché-Kronecker-Capelli, Farkas, Weyl, Minkowski). The constructive proofs are based on pivoting. Defining pivoting in a more general way, using generating tableaux, made it possible to give a new proof of the Steinitz theorem as well. Our pivot selection strategies are based essentially on Bland's [2] minimal index rule. The famous theorems of Farkas, Weyl and Minkowski are proved by using pivot tableaux. Theorem 4.1 is essentially a new, very simple form of the alternative theorem of linear inequalities, and its proof is a pretty application of the minimal index rule. One can apply this theorem and its proof to combinatorial structures (for example, to oriented matroids) as well (Klafszky-Terlaky [9]). The presented algorithms are mostly not efficient computationally (see e.g. Roos [13] for an exponential example), but they are surprisingly simple. We will use the symbols 0, +, −, ⊕, ⊖, introduced by Balinski-Tucker [1], which denote zero, positive, negative, nonnegative and nonpositive numbers, respectively. On the other hand, Gale's [7] notation will be used, so matrices and vectors are denoted by capital and small Latin letters and their components by the corresponding Greek letters. Index sets are denoted by I and J (with proper subscripts), and the cardinality of an index set J is denoted by ‖J‖.
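Theorem 4.1 itself is not reproduced in this excerpt; for orientation, the classical Farkas alternative it refines states that exactly one of the two systems

\[
Ax = b,\ x \ge 0
\qquad\text{and}\qquad
A^{\mathsf T}y \ge 0,\ b^{\mathsf T}y < 0
\]

has a solution. Pivoting proofs of such statements typically show that the minimal-index pivot rule must terminate in a tableau exhibiting one of the two alternatives.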
Basis and Tripartition Identification for Quadratic Programming and Linear Complementarity Problems — From an interior solution to an optimal basis and vice versa
, 1996
Abstract

Cited by 3 (2 self)
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximal complementary solutions. Maximal complementary solutions can be characterized by optimal (tri)partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A tripartition identification algorithm is an algorithm which generates a maximal complementary solution (and its corresponding tripartition), starting from any complementary solution. In linear programming such algorithms were proposed by Megiddo in 1991 and by Balinski and Tucker in 1969, respectively. In this paper we present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal...
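The (tri)partition mentioned above is commonly defined as follows (standard notation, assumed here rather than quoted from the paper): for a maximal complementary solution \((z, w)\) of the LCP,

\[
B = \{\, i : z_i > 0 \,\},\qquad
N = \{\, i : w_i > 0 \,\},\qquad
T = \{\, i : z_i = w_i = 0 \,\}.
\]

Complementarity keeps \(B\) and \(N\) disjoint, so \((B, N, T)\) partitions the index set, and maximality makes the partition independent of which maximal complementary solution is chosen.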
Visualizing and constructing cycles in the simplex method, GERAD
 Journal of Operations Research
Abstract

Cited by 2 (1 self)
The texts published in the HEC research report series are the sole responsibility of their authors. The publication of these research reports is supported by a grant from the Fonds québécois de la recherche sur la nature et les technologies.
Degeneracy in Interior Point Methods for Linear Programming
, 1991
Abstract

Cited by 1 (0 self)
In this paper, we survey the various theoretical and practical issues related to degeneracy in IPMs for linear programming. We survey results which, for the most part, have already appeared in the literature. Roughly speaking, we deal with four topics: the effect of degeneracy on the convergence of IPMs, on the trajectories followed by the algorithms, on numerical performance, and on finding basic solutions. Key words: Linear programming, interior point methods, degeneracy, polynomial algorithms, global and local convergence, basis recovery, numerical performance, sensitivity analysis.
Can pure cutting plane algorithms work?
Abstract

Cited by 1 (0 self)
We discuss an implementation of the lexicographic version of Gomory’s fractional cutting plane method and of two heuristics mimicking the latter. In computational testing on a battery of MIPLIB problems we compare the performance of these variants with that of the standard Gomory algorithm, both in the single-cut and in the multi-cut (rounds of cuts) version, and show that they provide a radical improvement over the standard procedure. In particular, we report the exact solution of ILP instances from MIPLIB such as stein15, stein27, and bm23, for which the standard Gomory cutting plane algorithm is not able to close more than a tiny fraction of the integrality gap. We also offer an explanation for this surprising phenomenon.
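The fractional cut at the heart of the method is easy to state: from a simplex row \(x_B + \sum_j \bar a_j x_j = \bar b\) with fractional \(\bar b\), the inequality \(\sum_j \operatorname{frac}(\bar a_j)\, x_j \ge \operatorname{frac}(\bar b)\) is valid for all integer solutions and cuts off the current LP vertex. A minimal sketch (the function name and the toy row are illustrative, not from the paper):

```python
from math import floor

def gomory_fractional_cut(row, rhs):
    """Given a simplex-tableau row (coefficients of the nonbasic
    variables) with a fractional right-hand side, return the Gomory
    fractional cut  sum_j f[j] * x_j >= f0  as (f, f0)."""
    f = [a - floor(a) for a in row]   # fractional parts of coefficients
    f0 = rhs - floor(rhs)             # fractional part of the rhs
    return f, f0

# Toy row: x_B + 0.5*x_1 - 0.25*x_2 + 2*x_3 = 3.75
f, f0 = gomory_fractional_cut([0.5, -0.25, 2.0], 3.75)
# Cut: 0.5*x_1 + 0.75*x_2 + 0.0*x_3 >= 0.75
```

The lexicographic variant studied in the paper changes which rows the cuts are read from and how the LP is reoptimized, not the cut formula itself.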