Results 1–10 of 36
Preconditioning indefinite systems in interior point methods for optimization
 Computational Optimization and Applications
, 2004
Cited by 44 (13 self)
Abstract. Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior-point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public-domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners.
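The ill-conditioning this abstract refers to can be reproduced on a toy augmented system. The sketch below is a minimal NumPy illustration with invented data (the dimensions, barrier parameter values, and the active/inactive split are all assumptions, not taken from the paper): as the barrier parameter mu shrinks, the diagonal barrier term develops entries of size roughly 1/mu and mu, and the condition number of the KKT matrix blows up.

```python
import numpy as np

# Toy illustration (not the paper's setup): build the symmetric indefinite
# augmented matrix K = [[Q + S X^{-1}, A^T], [A, 0]] from a small convex QP
# and watch its condition number grow as the barrier parameter mu -> 0.
rng = np.random.default_rng(0)
n, m = 8, 3
Q = np.diag(rng.uniform(1.0, 2.0, n))       # SPD Hessian of a convex QP
A = rng.standard_normal((m, n))             # equality constraint matrix
conds = []
for mu in (1e-1, 1e-4, 1e-8):
    active = np.arange(n) < n // 2
    x = np.where(active, mu, 1.0)           # "active" variables approach 0
    s = np.where(active, 1.0, mu)           # complementary dual slacks
    K = np.block([[Q + np.diag(s / x), A.T],
                  [A, np.zeros((m, m))]])   # symmetric indefinite system
    conds.append(np.linalg.cond(K))         # grows roughly like 1/mu
```

This is exactly the situation in which unpreconditioned iterative methods stall, motivating the incomplete-Cholesky-based preconditioners the paper proposes.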
On the solution of equality constrained quadratic programming problems arising . . .
, 1998
KNITRO: An integrated package for nonlinear optimization
 Large Scale Nonlinear Optimization, 35–59
, 2006
Cited by 38 (3 self)
This paper describes Knitro 5.0, a C-package for nonlinear optimization that combines complementary algorithmic approaches to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.
A preconditioner for generalized saddle point problems
 SIAM J. Matrix Anal. Appl
, 2004
Cited by 34 (26 self)
Abstract. In this paper we consider the solution of linear systems of saddle point type by preconditioned Krylov subspace methods. A preconditioning strategy based on the symmetric/skew-symmetric splitting of the coefficient matrix is proposed, and some useful properties of the preconditioned matrix are established. The potential of this approach is illustrated by numerical experiments.
Weighted matchings for preconditioning symmetric indefinite linear systems
 SIAM J. Sci. Comput
, 2006
Cited by 15 (3 self)
Abstract. Maximum weight matchings have become an important tool for solving highly indefinite unsymmetric linear systems, especially in direct solvers. In this study we investigate the benefit of reorderings and scalings based on symmetrized maximum weight matchings as a preprocessing step for incomplete LDL^T factorizations. The reorderings are constructed such that the matched entries form 1 × 1 or 2 × 2 diagonal blocks in order to increase the diagonal dominance of the system. During the incomplete factorization only tridiagonal pivoting is used. We report results for this approach and comparisons with other solution methods for a diverse set of symmetric indefinite matrices, ranging from nonlinear elasticity to interior point optimization.
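As a dense stand-in for the maximum weight matchings discussed above, SciPy's `linear_sum_assignment` solves the assignment problem and can be used to permute large entries onto the diagonal. The 3 × 3 matrix below is an invented example; the paper works with sparse symmetrized matchings and 1 × 1/2 × 2 blocks, which this sketch does not reproduce.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented 3x3 example: find the row permutation that maximizes the sum of
# the magnitudes of the diagonal entries, the dense analogue of the maximum
# weight matchings used to preprocess unsymmetric systems.
M = np.array([[0.0, 3.0, 0.1],
              [4.0, 0.0, 0.2],
              [0.3, 0.1, 5.0]])
row, col = linear_sum_assignment(np.abs(M), maximize=True)
Mp = M[np.argsort(col)]          # row i moves to position col[i]
diag = np.diag(Mp)               # matched (large) entries now on the diagonal
```

With the large entries on the diagonal, an incomplete factorization needs far less pivoting, which is the point of the matching-based preprocessing.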
Preconditioning KKT Systems
, 2002
Cited by 14 (0 self)
This research presents new preconditioners for linear systems. We proceed from the most general case to the very specific problem area of sparse optimal control. In the first most general approach, we assume only that the coefficient matrix is nonsingular. We target highly indefinite, nonsymmetric problems that cause difficulties for preconditioned iterative solvers, and where standard preconditioners, like incomplete factorizations, often fail. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. Our numerical experiments indicate that the reliability and performance of preconditioned iterative solvers are greatly enhanced by such preprocessing. Secondly, we present two new preconditioners for KKT systems. KKT systems arise in areas such as quadratic programming, sparse optimal control, and mixed finite element formulations. Our preconditioners approximate a constraint preconditioner with incomplete factorizations for the normal equations. Numerical experiments compare these two preconditioners with exact constraint preconditioning and the approach described above of permuting large entries to the diagonal. Finally, we turn to a specific problem area: sparse optimal control. Many optimal control problems are broken into several phases, and within a phase, most variables and constraints depend only on nearby variables and constraints. However, free initial and final times and time-independent parameters impact variables and constraints throughout a phase, resulting in dense factored blocks in the KKT matrix. We drop fill due to these variables to reduce density within each phase. The resulting preconditioner is tightly banded and nearly block tridiagonal. Numerical experiments demonstrate that the preconditioners are effective, with very little fill in the factorization.
Approximate factorization constraint preconditioners for saddle-point matrices
 SIAM J. Sci. Comput
Cited by 13 (2 self)
Abstract. We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
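The eigenvalue results this abstract alludes to can be observed numerically. The sketch below uses random dense data with invented sizes and the textbook constraint preconditioner (keeping the constraint blocks exact and approximating the (1,1) block by its diagonal), not the paper's approximate factorization; a standard result is that the preconditioned matrix then has the eigenvalue 1 with algebraic multiplicity 2m.

```python
import numpy as np

# Sketch with invented data: K is a symmetric indefinite saddle-point
# matrix; the constraint preconditioner P reproduces the constraint
# blocks A, A^T exactly and approximates H by its diagonal G.
rng = np.random.default_rng(1)
n, m = 40, 10
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)                 # SPD (1,1) block
A = rng.standard_normal((m, n))             # full-row-rank constraints
Z = np.zeros((m, m))
K = np.block([[H, A.T], [A, Z]])
G = np.diag(np.diag(H))                     # cheap approximation of H
P = np.block([[G, A.T], [A, Z]])            # constraint preconditioner
lam = np.linalg.eigvals(np.linalg.solve(P, K))
n_unit = int(np.sum(np.abs(lam - 1.0) < 1e-4))   # eigenvalues clustered at 1
```

The remaining n − m eigenvalues are those of the (1,1) block projected onto the constraint nullspace, preconditioned by G, which is why the quality of the approximation G matters.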
Domain decomposition preconditioners for linear–quadratic elliptic optimal control problems
, 2004
Cited by 12 (4 self)
ABSTRACT. We develop and analyze a class of overlapping domain decomposition (DD) preconditioners for linear-quadratic elliptic optimal control problems. Our preconditioners utilize the structure of the optimal control problems. Their execution requires the parallel solution of subdomain linear-quadratic elliptic optimal control problems, which are essentially smaller subdomain copies of the original problem. This work extends to optimal control problems the application and analysis of overlapping DD preconditioners, which have been used successfully for the solution of single PDEs. We prove that for a class of problems the performance of the two-level versions of our preconditioners is independent of the mesh size and of the subdomain size.
Block-Diagonal and Constraint Preconditioners for Nonsymmetric Indefinite Linear Systems. Part I: Theory
, 2004
Cited by 11 (5 self)
We study block diagonal preconditioners and an efficient variant of constraint preconditioners for general two-by-two block linear systems with zero (2,2) block. We derive block diagonal preconditioners from a splitting of the (1,1) block of the matrix. From the resulting preconditioned system we derive a smaller, so-called related system that yields the solution of the original problem. Solving the related system corresponds to an efficient implementation of constraint preconditioning. We analyze the properties of both classes of preconditioned matrices, in particular their spectrum. Using analytical results we show that the related system matrix has the more favorable spectrum, which in many applications translates into faster convergence for Krylov subspace methods. We show that fast convergence depends mainly on the quality of the splitting, a topic for which a substantial body of theory exists. Our analysis also provides a number of new relations between block-diagonal preconditioners and constraint preconditioners. For constrained problems, solving the related system produces iterates that satisfy the constraints exactly, just as for systems with a constraint preconditioner. Finally, for the Lagrange multiplier formulation of a constrained optimization problem we show how scaling nonlinear constraints can dramatically improve the convergence for linear systems in a Newton iteration. Our theoretical results are confirmed by numerical experiments on a constrained optimization problem. We consider the general...
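A classical special case shows how favorable the spectrum of a block diagonal preconditioned system can become. The sketch below uses invented random data and the idealized preconditioner with the exact (1,1) block and exact Schur complement on the diagonal (the Murphy–Golub–Wathen reference case, not the splitting-based preconditioners of the paper): the preconditioned matrix then has just three distinct eigenvalues, 1 and (1 ± √5)/2.

```python
import numpy as np

# Invented data; idealized block diagonal preconditioner
# P = diag(H, A H^{-1} A^T) for the saddle-point matrix K = [[H, A^T], [A, 0]].
rng = np.random.default_rng(2)
n, m = 30, 8
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)                  # SPD (1,1) block
A = rng.standard_normal((m, n))              # full-row-rank (2,1) block
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
S = A @ np.linalg.solve(H, A.T)              # Schur complement A H^{-1} A^T
P = np.block([[H, np.zeros((n, m))],
              [np.zeros((m, n)), S]])
lam = np.linalg.eigvals(np.linalg.solve(P, K))
targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])
dist = np.abs(lam[:, None] - targets[None, :]).min(axis=1)
```

Splitting-based preconditioners trade this exact three-eigenvalue clustering for much cheaper application, which is the trade-off the paper analyzes.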
Iterative solution of augmented systems arising in interior methods
 SIAM Journal on Optimization
Cited by 9 (1 self)
Abstract. Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton’s method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the “doubly augmented system,” that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular “active-set” constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.