Results 1–10 of 13
Preconditioning indefinite systems in interior point methods for optimization
Computational Optimization and Applications, 2004
Cited by 44 (13 self)
Abstract. Every Newton step in an interior-point method for optimization requires the solution of a symmetric indefinite system of linear equations. Most of today's codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners
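The central claim of this abstract, that iterative methods stall on barrier-induced ill-conditioning unless preconditioned, can be illustrated with a toy experiment. This is not the paper's incomplete Cholesky preconditioner; it is a minimal sketch using a plain Jacobi preconditioner on a synthetic SPD matrix whose diagonal spans eight orders of magnitude, mimicking barrier scaling:

```python
import numpy as np

def pcg(K, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; M_inv applies the preconditioner inverse.
    Returns the approximate solution and the iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) <= tol * nb:
            return x, k
        z = M_inv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, maxit

rng = np.random.default_rng(0)
n = 100
d = np.logspace(0, 8, n)               # barrier-style weights spanning 8 orders of magnitude
B = rng.standard_normal((n, 5))
K = np.diag(d) + 0.1 * (B @ B.T)       # SPD but severely ill-conditioned
b = rng.standard_normal(n)

x_plain, it_plain = pcg(K, b, lambda r: r)                  # unpreconditioned CG
x_prec, it_prec = pcg(K, b, lambda r: r / np.diag(K))       # Jacobi-preconditioned CG
```

With the diagonal preconditioner the spectrum is tightly clustered and CG converges in a handful of iterations, while the unpreconditioned run needs far more (or stalls at the iteration cap).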
A Class of Preconditioners for Weighted Least Squares Problems
, 1999
Cited by 16 (11 self)
We consider solving a sequence of weighted linear least squares problems where the changes from one problem to the next are the weights and the right hand side (or data). This is the case for primal-dual interior-point methods. We derive a class of preconditioners based on a low rank correction to a Cholesky factorization of a weighted normal equation coefficient matrix with the previous weight. Key Words. Weighted linear least squares, Preconditioners, Preconditioned conjugate gradient for least squares, Linear programming, Primal-dual infeasible-interior-point algorithms. 1 Introduction In this paper, we present a class of preconditioners based on low rank corrections to the Cholesky factorization of a weighted normal equation coefficient matrix. This class of preconditioners leads to good performance for interior-point methods for linear programming. In particular, we have implemented a primal-dual Newton method to test this class of preconditioners. The numerical results on large scale...
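The low-rank-correction idea can be sketched in a few lines of numpy (a hypothetical setup, not the authors' implementation): factor the weighted normal matrix once, then absorb a change in a few weights through the Sherman-Morrison-Woodbury formula instead of refactoring.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 80, 4
A = rng.standard_normal((m, n))        # constraint matrix of a hypothetical LP
d_old = rng.uniform(0.5, 2.0, n)       # weights at the previous interior-point iterate
d_new = d_old.copy()
S = rng.choice(n, size=k, replace=False)
d_new[S] *= 10.0                       # only k weights change between iterates

B_old = (A * d_old) @ A.T              # weighted normal matrix A diag(d_old) A^T
L = np.linalg.cholesky(B_old)          # factored once, then reused

def solve_old(r):
    """Apply B_old^{-1} via the cached Cholesky factor."""
    return np.linalg.solve(L.T, np.linalg.solve(L, r))

U = A[:, S]                            # columns whose weights changed
dd = d_new[S] - d_old[S]
Z = solve_old(U)                       # B_old^{-1} U, computed once per update
core = np.diag(1.0 / dd) + U.T @ Z     # k-by-k Woodbury core matrix

def apply_M_inv(r):
    """Sherman-Morrison-Woodbury: apply (B_old + U diag(dd) U^T)^{-1} to r."""
    w = solve_old(r)
    return w - Z @ np.linalg.solve(core, U.T @ w)

# Because S captures every changed weight, M^{-1} here equals B_new^{-1} exactly;
# dropping some indices from S would turn it into a genuine (inexact) preconditioner.
B_new = (A * d_new) @ A.T
b = rng.standard_normal(m)
x = apply_M_inv(b)
```

The point of the construction is cost: applying the preconditioner needs only triangular solves with the stale factor plus a k-by-k dense solve, rather than an O(m^3) refactorization.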
Computational Issues for a New Class of Preconditioners
, 1999
Cited by 11 (9 self)
In this paper we consider solving a sequence of weighted linear least squares problems where the only changes from one problem to the next are the weights and the right hand side (or data). We alternate between iterative and direct methods to solve the normal equations for the least squares problems. The direct method is the Cholesky factorization. For the iterative method we discuss a class of preconditioners based on a low rank correction of a Cholesky factorization of a weighted normal equation coefficient matrix. Different ways to compute the preconditioner are given. Further, a sparse algorithm for modifying the Cholesky factors by a low rank matrix is derived.
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
, 2007
Cited by 2 (1 self)
We discuss the use of the preconditioned conjugate gradient method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradient method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hypersparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium- and large-scale problems from public domain test collections. Computational experience is encouraging.
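A null-space representation of the kind described can be sketched directly. This is a toy stand-in: the "basis" below is simply the first m columns of a random full-row-rank matrix, whereas the paper identifies it from an estimate of the optimal partition using simplex-style factorisation techniques.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 20, 50
A = rng.standard_normal((m, n))        # constraint matrix, assumed full row rank

# Hypothetical basis choice: take the first m columns as the nonsingular basis B
# and the remaining n - m columns as the nonbasic part N.
B, N = A[:, :m], A[:, m:]

# Null-space representation: the columns of Z span ker(A),
# since A Z = B (-B^{-1} N) + N = 0.
Z = np.vstack([-np.linalg.solve(B, N), np.eye(n - m)])
```

Any x satisfying A x = b can then be written as a particular solution plus Z y, which reduces the equality-constrained problem to an unconstrained one in the n - m null-space coordinates y.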
Symbiosis between Linear Algebra and Optimization
, 1999
Cited by 2 (0 self)
The efficiency and effectiveness of most optimization algorithms hinge on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship. 1 Introduction The work in any continuous optimization algorithm neatly partitions into two pieces: the work in acquiring information through evaluation of the function and perhaps its derivatives, and the overhead involved in generating points approximating an optimal point. More often than not, this second part of the work is dominated by linear algebra, usually in the form of the solution of a linear system or least squares problem and the updating of matrix information. Thus, members of the optim...
Adaptive Constraint Reduction for Convex Quadratic Programming and Training Support Vector Machines
, 2008
Cited by 2 (2 self)
Convex quadratic programming (CQP) is an optimization problem of minimizing a convex quadratic objective function subject to linear constraints. We propose an adaptive constraint reduction primal-dual interior-point algorithm for convex quadratic programming with many more constraints than variables. We reduce the computational effort by assembling the normal equation matrix with a subset of the constraints. Instead of the exact matrix, we compute an approximate matrix for a well chosen index set which includes indices of constraints that seem to be most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs for the global convergence and the quadratic local convergence rate of an affine scaling variant. A similar approach can be applied to Mehrotra's predictor-corrector type algorithms. An example of CQP arises in training a linear support vector machine (SVM), which is a popular tool for pattern recognition. The difficulty in training a support vector machine (SVM) lies in the typically vast number of patterns used for the training process. In this work, we propose an adaptive constraint reduction primal-dual interior-point method for training the linear SVM with l1 hinge loss. We reduce the computational effort by assembling the normal equation matrix with a subset of well-chosen patterns. Starting with a large portion of the patterns, our proposed scheme excludes more and more unnecessary patterns as the iteration proceeds. We extend our approach to training nonlinear SVMs through Gram matrix approximation methods. Promising numerical results are reported.
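The assembly step that constraint reduction accelerates can be sketched as follows (a synthetic example with made-up weights, not the paper's adaptive index-selection rule): when most scaling weights are near zero, assembling the normal matrix from only the constraints with the largest weights reproduces the exact matrix to within a small relative error at a fraction of the cost.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 500, 10                          # many constraints, few variables
A = rng.standard_normal((m, n))
d = rng.uniform(0.0, 1.0, m) ** 20      # stand-in IPM scaling: most weights near zero

full = (A * d[:, None]).T @ A           # exact normal matrix A^T diag(d) A, cost O(m n^2)
q = np.argsort(d)[-100:]                # crude "critical set": the 100 largest weights
reduced = (A[q] * d[q, None]).T @ A[q]  # assembled from a fifth of the constraints

rel_err = np.linalg.norm(full - reduced) / np.linalg.norm(full)
```

Since assembling A^T diag(d) A is the dominant per-iteration cost when m >> n, shrinking the index set from m to |q| rows translates directly into the speedups the abstract describes.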
Properties and Computational Issues of a Preconditioner for Interior Point Methods
, 1999
Cited by 1 (1 self)
This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between a direct and an iterative method. The preconditioner is based on low-rank modifications of the coefficient matrix where a direct solution technique has been used. We compare two different techniques of forming the low-rank modification matrix; namely one by Wang and O'Leary [11] and the other by Baryamureeba, Steihaug and Zhang [3]. The theory and numerical testing strongly support the latter. We derive a sparse algorithm for modifying the Cholesky factors by a low-rank matrix, discuss the computational issues of this preconditioner, and finally give numerical results that show the approach of alternating between a direct and an iterative method to be promising. Key Words. Linear Programmi...
A POLYNOMIAL-TIME INTERIOR-POINT METHOD FOR CONIC OPTIMIZATION, WITH INEXACT BARRIER EVALUATIONS
Cited by 1 (0 self)
Abstract. We consider a primal-dual short-step interior-point method for conic convex optimization problems for which exact evaluation of the gradient and Hessian of the primal and dual barrier functions is either impossible or prohibitively expensive. As our main contribution, we show that if approximate gradients and Hessians of the primal barrier function can be computed, and the relative errors in such quantities are not too large, then the method has polynomial worst-case iteration complexity. (In particular, polynomial iteration complexity ensues when the gradient and Hessian are evaluated exactly.) In addition, the algorithm requires no evaluation, or even approximate evaluation, of quantities related to the barrier function for the dual cone, even for problems in which the underlying cone is not self-dual.
Properties of a Class of Preconditioners for Weighted Least Squares Problems
, 1999
A sequence of weighted linear least squares problems arises from interior-point methods for linear programming where the changes from one problem to the next are the weights and the right hand side. One approach for solving such a weighted linear least squares problem is to apply a preconditioned conjugate gradient method to the normal equations where the preconditioner is based on a low-rank correction to the Cholesky factorization of a previous coefficient matrix. In this paper, we establish theoretical results for such preconditioners that provide guidelines for the construction of preconditioners of this kind. We also present preliminary numerical experiments to validate our theoretical results and to demonstrate the effectiveness of this approach. Key Words. Weighted linear least squares, Preconditioner, Preconditioned conjugate gradient method, Linear programming, interior-point algorithms. This is Technical Report No. 170, Department of Informatics, University of Bergen, N50...
Application of a New Class of Preconditioners to LargeScale Linear Programming Problems
In every primal-dual interior point method, a sequence of weighted linear least squares problems is solved, where the only changes from one problem to the next are the weights and the right hand side (or data). We propose a mixed primal-dual method where we solve the weighted least squares problems with a direct method for every even interior point iteration and an iterative method for every odd iteration. A class of preconditioners based on a low rank correction of a Cholesky factorization of a weighted normal equation coefficient matrix is introduced. Key Words. Weighted linear least squares, Preconditioners, Preconditioned conjugate gradient for least squares, Linear programming, Primal-dual infeasible interior point algorithms. 1 Introduction An interior point algorithm solves the linear programming problem by generating a sequence of interior points from an initial interior point. At every interior point iteration weighted linear least squares problems are solved. The new class...
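The even/odd alternation can be mimicked in a short numpy sketch (an illustrative setup, not the authors' code): factor the weighted normal matrix on even iterations, and on odd iterations check that the stale factor still clusters the spectrum of the new matrix, which is what makes it an effective preconditioner when the weights drift slowly.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 25, 70
A = rng.standard_normal((m, n))        # hypothetical LP constraint matrix
d = rng.uniform(0.5, 2.0, n)           # initial interior-point weights

spreads = []
for it in range(4):
    B = (A * d) @ A.T                  # weighted normal matrix A diag(d) A^T at this iterate
    if it % 2 == 0:
        L = np.linalg.cholesky(B)      # even iteration: direct solve, cache the factor
    else:
        # Odd iteration: the stale factor L L^T acts as the preconditioner.
        # If each weight moved by at most +/-10% since the factorization, the
        # eigenvalues of (L L^T)^{-1} B lie in [0.9, 1.1], so a preconditioned
        # CG iteration would converge in very few steps.
        ev = np.linalg.eigvals(np.linalg.solve(L @ L.T, B)).real
        spreads.append(ev.max() / ev.min())
    d *= rng.uniform(0.9, 1.1, n)      # weights drift modestly between iterations
```

Each entry of `spreads` stays close to 1, confirming that the previous iteration's factor remains a good preconditioner one step later; this is the property the even/odd scheme exploits.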