Results 1–10 of 55
Solving Large-Scale Linear Programs by Interior-Point Methods Under the MATLAB Environment
Optimization Methods and Software, 1996
Cited by 60 (2 self)
In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment. The resulting software is called LIPSOL (Linear-programming Interior-Point SOLvers). LIPSOL is designed to take advantage of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C. More importantly, our extensive computational results demonstrate that LIPSOL also attains an impressive performance comparable with that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in detail a technique for overcoming numerical instability in Cholesky factorization at the end stage of iterations in interior-point algorithms. Keywords: Linear programming, primal-dual infeasible-interior-p...
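The per-iteration kernel the abstract alludes to can be sketched as follows. This is a minimal illustration, not LIPSOL's code: scipy's general sparse solver stands in for the Fortran sparse Cholesky codes LIPSOL links against, and the tiny problem data are made up.

```python
# Sketch (not LIPSOL itself): one normal-equations solve of the kind a
# primal-dual interior-point LP solver performs at each iteration.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def normal_equations_step(A, x, s, r):
    """Solve (A D A^T) dy = r with D = diag(x/s), the scaling used in
    primal-dual IPMs; spsolve stands in for a sparse Cholesky code."""
    D = sp.diags(x / s)                 # scaling matrix X S^{-1}
    M = (A @ D @ A.T).tocsc()           # normal-equations matrix
    return spla.spsolve(M, r)

A = sp.csr_matrix([[1.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0]])    # made-up constraint matrix
x = np.array([1.0, 2.0, 1.0])           # strictly positive primal iterate
s = np.array([1.0, 1.0, 2.0])           # strictly positive dual slacks
dy = normal_equations_step(A, x, s, np.array([1.0, 1.0]))
```

Near convergence D becomes extremely ill-scaled, which is precisely the end-stage instability the paper addresses.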
Preconditioning indefinite systems in interior point methods for optimization
Computational Optimization and Applications, 2004
Cited by 44 (13 self)
Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public-domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners
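The general idea, conjugate gradients accelerated by an incomplete factorization, can be sketched as follows. This is not the paper's preconditioner: scipy's incomplete LU (`spilu`) stands in for an incomplete Cholesky, and a 1-D Laplacian stands in for the optimization system.

```python
# Sketch: preconditioned CG with an incomplete factorization.
# spilu (incomplete LU) stands in for the incomplete Cholesky of the text.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# SPD model problem: 1-D Laplacian (tridiagonal)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(K, drop_tol=1e-3)               # sparse incomplete factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)  # preconditioner action

x, info = spla.cg(K, b, M=M)                     # info == 0 on convergence
```

The preconditioner is applied only through its action on a vector, so its factors can be far sparser than those of a direct solve.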
Parallel Interior-Point Solver for Structured Quadratic Programs: Application to Financial Planning Problems
2003
Cited by 41 (20 self)
Many practical large-scale optimization problems are not only sparse, but also display some form of block structure, such as primal or dual block-angular structure. Often these structures are nested: each block of the coarse top-level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming but also in other areas such as telecommunication network modelling. We present a linear algebra library tailored for problems with such structure that is used inside an interior point solver for convex quadratic programming problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need for highly specialised linear algebra modules to be written for every type of problem separately. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach is illustrated on several problems arising in financial planning, namely in asset and liability management. The problems are modelled as ...
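The block-angular elimination such a library exploits can be sketched as follows. The two-block setup and all names are illustrative (not the library's API): diagonal blocks B_i are factored independently (the parallelisable part), then a dense Schur complement couples the linking variables.

```python
# Sketch of block-angular elimination:
#   [ B1  0   L1^T ] [x1]   [b1]
#   [ 0   B2  L2^T ] [x2] = [b2]
#   [ L1  L2   C   ] [y ]   [d ]
import numpy as np

def block_angular_solve(Bs, Ls, C, bs, d):
    """Eliminate each diagonal block independently, then solve the
    Schur complement S = C - sum_i L_i B_i^{-1} L_i^T for y."""
    Binv_b = [np.linalg.solve(B, b) for B, b in zip(Bs, bs)]
    Binv_Lt = [np.linalg.solve(B, L.T) for B, L in zip(Bs, Ls)]
    S = C - sum(L @ BLt for L, BLt in zip(Ls, Binv_Lt))
    y = np.linalg.solve(S, d - sum(L @ v for L, v in zip(Ls, Binv_b)))
    xs = [v - BLt @ y for v, BLt in zip(Binv_b, Binv_Lt)]
    return xs, y

rng = np.random.default_rng(0)
B1, B2 = np.eye(3) * 4.0, np.eye(2) * 3.0       # made-up diagonal blocks
L1, L2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
C = np.eye(2) * 10.0                            # linking block
xs, y = block_angular_solve([B1, B2], [L1, L2], C,
                            [np.ones(3), np.ones(2)], np.ones(2))
```

Nested structure corresponds to each B_i itself being handled by the same routine recursively.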
An interior algorithm for nonlinear optimization that combines line search and trust region steps
Mathematical Programming 107, 2006
Cited by 31 (11 self)
An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the Knitro [6, 28] software package and is extensively tested on a wide selection of test problems.
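The switching strategy can be sketched in heavily simplified form on an unconstrained quadratic. Everything here is illustrative, not Knitro's logic: a damped gradient step stands in for the trust-region CG iteration, and "ineffective" is reduced to "fails to decrease the objective".

```python
# Control-flow sketch: try the direct (factorization) step first;
# fall back to a safeguarded step when it is deemed ineffective.
import numpy as np

def minimize_switching(H, g, x, iters=50):
    """Minimize f(z) = 0.5 z^T H z + g^T z with a Newton step when it
    helps, else a damped gradient step (trust-region stand-in)."""
    f = lambda z: 0.5 * z @ H @ z + g @ z
    for _ in range(iters):
        try:
            p = np.linalg.solve(H, -(H @ x + g))   # direct (Newton) step
        except np.linalg.LinAlgError:
            p = None                                # factorization failed
        if p is None or f(x + p) > f(x) - 1e-12:    # step ineffective
            grad = H @ x + g
            p = -grad / max(1.0, np.linalg.norm(grad))  # fallback step
        x = x + p
    return x

x_opt = minimize_switching(np.diag([1.0, 10.0]),
                           np.array([-1.0, -1.0]), np.zeros(2))
```

On a convex quadratic the direct step succeeds immediately; the fallback branch only matters when the factorization fails or the model is a poor fit.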
A specialized interior-point algorithm for multicommodity network flows
SIAM J. on Optimization, 1996
Cited by 30 (6 self)
Despite the efficiency shown by interior-point methods in large-scale linear programming, they usually perform poorly when applied to multicommodity flow problems. The new specialized interior-point algorithm presented here overcomes this drawback. This specialization uses both a preconditioned conjugate gradient solver and a sparse Cholesky factorization to solve a linear system of equations at each iteration of the algorithm. The ad hoc preconditioner developed by exploiting the structure of the problem is instrumental in ensuring the efficiency of the method. An implementation of the algorithm is compared to state-of-the-art packages for multicommodity flows. The computational experiments were carried out using an extensive set of test problems, with sizes of up to 700,000 variables and 150,000 constraints. The results show the effectiveness of the algorithm.
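A generic structure-exploiting preconditioner of this flavour (not the paper's ad hoc one, whose construction is specific to multicommodity flows) can be sketched as follows: when the system matrix has a dominant block-diagonal part, applying the inverse of that part alone often makes CG effective. The "commodity blocks" below are made-up diagonal data.

```python
# Sketch: CG preconditioned by the dominant block-diagonal part of K.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 50, 4                                    # k per-"commodity" blocks
rng = np.random.default_rng(1)
blocks = [sp.diags(rng.uniform(1.0, 2.0, n)) for _ in range(k)]
B = sp.block_diag(blocks, format="csc")         # dominant block part
E = sp.random(k * n, k * n, density=0.001, random_state=1)
K = (B + 0.01 * (E + E.T)).tocsc()              # weakly coupled SPD system

d = B.diagonal()                                # here B is diagonal, so its
Pinv = spla.LinearOperator(K.shape,             # inverse action is trivial;
                           matvec=lambda v: v / d)  # general blocks would
x, info = spla.cg(K, np.ones(k * n), M=Pinv)        # each be factored once
```

The closer the neglected coupling is to zero, the closer the preconditioned matrix is to the identity and the fewer CG iterations are needed.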
Warm Start of the Primal-Dual Method Applied in the Cutting-Plane Scheme
Mathematical Programming, 1997
Cited by 23 (2 self)
A practical warm-start procedure is described for the infeasible primal-dual interior-point method employed to solve the restricted master problem within the cutting-plane method. In contrast to the theoretical developments in this field, the approach presented in this paper does not make the unrealistic assumption that the new cuts are shallow. Moreover, it treats systematically the case when a large number of cuts are added at one time. The technique proposed in this paper has been implemented in the context of HOPDM, a state-of-the-art yet public-domain interior-point code. Numerical results confirm a high degree of efficiency of this approach: regardless of the number of cuts added at one time (which can run into the thousands in the largest examples) and regardless of the depth of the new cuts, reoptimizations are usually completed within a few additional iterations. Key words: warm start, primal-dual algorithm, cutting-plane methods.
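The basic difficulty, an old iterate losing strict positivity once new cuts arrive, can be illustrated with a minimal sketch. This is not HOPDM's procedure; the threshold and names are assumptions made for illustration.

```python
# Sketch: restore a usable interior starting point from a stale iterate.
import numpy as np

def warm_start_point(x_old, s_old, mu_target=1e-2):
    """Push the complementary pair (x, s) back into the strict interior:
    lift tiny/zero components so every product x_i * s_i >= mu_target."""
    x = np.maximum(x_old, np.sqrt(mu_target))
    s = np.maximum(s_old, mu_target / x)
    return x, s

# A stale iterate with components driven to the boundary by new cuts:
x0, s0 = warm_start_point(np.array([0.0, 2.0]), np.array([1.0, 0.0]))
```

Restarting the interior-point method from such a point, rather than from scratch, is what keeps reoptimization down to a few iterations.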
Smoothed Analysis of Termination of Linear Programming Algorithms
Cited by 23 (4 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng ...
Smoothed analysis of Renegar’s condition number for linear programming
2003
Cited by 22 (6 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A, b, c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination by Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
Sparse Numerical Linear Algebra: Direct Methods and Preconditioning
1996
Cited by 17 (2 self)
Most of the current techniques for the direct solution of linear equations are based on supernodal or multifrontal approaches. An important feature of these methods is that arithmetic is performed on dense submatrices, so Level 2 and Level 3 BLAS (matrix-vector and matrix-matrix kernels) can be used. Both sparse LU and QR factorizations can be implemented within this framework. Partitioning and ordering techniques have seen major activity in recent years. We discuss bisection and multisection techniques, extensions of orderings to block triangular form, and recent improvements and modifications to standard orderings such as minimum degree. We also study advances in the solution of indefinite systems and sparse least-squares problems. The desire to exploit parallelism has been responsible for many of the developments in direct methods for sparse matrices over the last ten years. We examine this aspect in some detail, illustrating how current techniques have been developed or ...
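A readily available instance of the supernodal approach is SuperLU, which scipy wraps: it gathers columns with similar sparsity into dense supernodes so that Level 2/3 BLAS kernels do the bulk of the arithmetic. The model matrix below is illustrative.

```python
# Concrete instance: sparse LU via SuperLU (supernodal, BLAS-based).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Diagonally dominant tridiagonal model problem
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)            # factorization with fill-reducing ordering
x = lu.solve(np.ones(n))     # triangular solves reuse the factors
```

One factorization serves any number of right-hand sides, which is exactly the pattern interior-point codes rely on.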
Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming
Numerical Algorithms, 1999