Results 1–10 of 13
Constraint Preconditioning for Indefinite Linear Systems
SIAM J. Matrix Anal. Appl., 2000
Abstract
Cited by 81 (12 self)
The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2×2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions. Key words: preconditioning, indefinite matrices, Krylov subspace methods. AMS subject classifications: 65F10, 65F15, 65F50. 1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system

    [ A  B^T ] [ x_1 ]
    [ B   0  ] [ x_2 ] = …

with the 2×2 block coefficient matrix denoted 𝒜 …
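The spectral claim in this abstract can be checked numerically on a toy problem. The sketch below is my own illustration (all names and sizes are made up; G stands in for an approximation to the (1,1) block): it builds a saddle-point matrix, forms a constraint preconditioner sharing its off-diagonal blocks, and counts how many preconditioned eigenvalues equal 1.

```python
import numpy as np

# Illustrative sketch of the constraint-preconditioning idea; A, B, G, n, m
# are made-up demo quantities, not the paper's test problems.
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((n, n)); A = A + A.T          # symmetric (1,1) block
B = rng.standard_normal((m, n))                        # full-rank (2,1) block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])        # saddle-point matrix

# Constraint preconditioner: approximate A by G (here simply the identity)
# while keeping the (1,2) and (2,1) blocks of K exactly.
G = np.eye(n)
P = np.block([[G, B.T], [B, np.zeros((m, m))]])

eig = np.linalg.eigvals(np.linalg.solve(P, K))
count = int(np.sum(np.isclose(eig, 1.0, atol=1e-5)))
print(count)   # the theory predicts at least 2*m unit eigenvalues
```

The clustered unit eigenvalues are what make Krylov methods converge quickly with this preconditioner, which is the abstract's central point.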
KNITRO: An integrated package for nonlinear optimization
Large Scale Nonlinear Optimization, 35–59, 2006
Abstract
Cited by 52 (3 self)
This paper describes Knitro 5.0, a C package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.
On the solution of equality constrained quadratic programming problems arising . . .
, 1998
A Preconditioned Conjugate Gradient Approach to Linear Equality Constrained Minimization
, 2000
Abstract
Cited by 16 (0 self)
We propose a new framework for the application of preconditioned conjugate gradients in the solution of large-scale linear equality constrained minimization problems. This framework allows for the exploitation of structure and sparsity in the context of solving the reduced Newton system (despite the fact that the reduced system may be dense). Numerical experiments performed on a variety of test problems from the Netlib LP collection indicate computational promise.
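The "reduced Newton system" this abstract refers to can be illustrated with a minimal null-space sketch: eliminate the equality constraints through a null-space basis and run plain CG on the reduced system. All names (H, B, c, d) and sizes below are my own illustration, not the paper's notation.

```python
import numpy as np

# Minimal null-space sketch for  min 0.5 x^T H x - c^T x  s.t.  B x = d.
# H, B, c, d are made-up demo data.
rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n)); H = M @ M.T + n * np.eye(n)   # SPD Hessian
B = rng.standard_normal((m, n))
c = rng.standard_normal(n)
d = rng.standard_normal(m)

x0, *_ = np.linalg.lstsq(B, d, rcond=None)   # particular solution of Bx = d
_, _, Vt = np.linalg.svd(B)
Z = Vt[m:].T                                 # columns span null(B)

# Reduced Newton system  Z^T H Z y = Z^T (c - H x0), solved by plain CG.
Ar = Z.T @ H @ Z
b = Z.T @ (c - H @ x0)
y = np.zeros(n - m); r = b - Ar @ y; p = r.copy()
for _ in range(n - m):
    Ap = Ar @ p
    alpha = (r @ r) / (p @ Ap)
    y += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

x = x0 + Z @ y                               # feasible by construction
```

Forming Z densely, as here, is exactly what becomes impractical at scale; the framework in the paper is concerned with applying the reduced operator and a preconditioner without ever forming the (possibly dense) reduced system explicitly.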
Structured Automatic Differentiation
, 1998
Abstract
Cited by 10 (0 self)
…[sch]eme which combines the forward and reverse modes of AD. Problem structure can be viewed in many different ways; one way is to look at the granularity of the operations involved. For example, differentiation carried out at the matrix-vector operations can lead to great savings in the time as well as space requirements. Figuring out the kind of computation is another way to view structure, e.g., partially separable or composite functions whose structure can be exploited to get performance gains. In this thesis we develop a general structure framework which can be viewed hierarchically and allows for structure exploitation at various levels. For example, for time integration schemes employing stencils it is possible to exploit structure at both the stencil level and the time-step level. We also present some advanced structure exploitation ideas, e.g., parallelism in structured computations and using structure in implicit computations. The use of AD as a derivative computing e[ngine]…
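Forward mode, one of the two AD modes this snippet mentions, can be sketched with dual numbers in a few lines. This is a toy illustration, not the thesis's implementation; reverse mode would instead propagate adjoints backwards through the computation.

```python
import math

# Toy forward-mode AD via dual numbers: each value carries a directional
# derivative (`dot`) that the arithmetic rules propagate.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):  # overloaded primitive carrying its derivative rule
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x) + x] at x = 2.0; seed dot = 1.0 selects d/dx.
x = Dual(2.0, 1.0)
y = x * sin(x) + x
print(y.dot)   # equals sin(2) + 2*cos(2) + 1
```

Granularity, in the sense discussed above, is about where such rules live: propagating derivatives through whole matrix-vector products at once rather than scalar by scalar.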
Iterative Linear Algebra for Constrained Optimization
, 2005
Abstract
Cited by 6 (2 self)
Each step of an interior point method for nonlinear optimization requires the solution of a symmetric indefinite linear system known as a KKT system, or more generally, a saddle point problem. As the problem size increases, direct methods become prohibitively expensive to use for solving these problems; this leads to iterative solvers being the only viable alternative. In this thesis we consider iterative methods for solving saddle point systems and show that a projected preconditioned conjugate gradient method can be applied to these indefinite systems. Such a method requires the use of a specific class of preconditioners, (extended) constraint preconditioners, which exactly replicate some parts of the saddle point system that we wish to solve. The standard method for using constraint preconditioners, at least in the optimization community, has been to choose the constraint
Numerical Methods for Large-Scale Non-Convex Quadratic Programming
, 2001
Abstract
Cited by 4 (0 self)
We consider numerical methods for finding (weak) second-order critical points for large-scale non-convex quadratic programming problems. We describe two new methods. The first is of the active-set variety. Although convergent from any starting point, it is intended primarily for the case where a good estimate of the optimal active set can be predicted. The second is of interior-point trust-region type, and has proved capable of solving problems involving up to half a million unknowns and constraints. The solution of a key equality constrained subproblem, common to both methods, is described. The results of comparative tests on a large set of convex and non-convex quadratic programming examples are given.
unknown title
, 2004
Abstract
Combining direct and iterative methods for the solution of large systems in different application areas
unknown title
Abstract
Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints, which limits their usability. In this paper, we propose a new formulation of constrained clustering that is able to incorporate not only existing types of constraints but also more complex logical combinations beyond conjunctions. We first show how any statement in conjunctive normal form (CNF) can be represented as a linear inequality. Since existing clustering formulations such as spectral clustering cannot easily incorporate these linear inequalities, we propose a quadratic programming (QP) clustering formulation to accommodate them. This new formulation allows us to have much more complex guidance in clustering. We demonstrate the effectiveness of our approach in two applications on text and personal information management. We also compare our algorithm against an existing constrained spectral clustering algorithm to show its computational efficiency.
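The CNF-to-linear-inequality representation mentioned in this abstract can be sketched for binary indicator variables: a clause holds exactly when its positive literals plus the complements (1 − z) of its negated literals sum to at least 1. The clause encoding below (signed 1-based indices for literals) is my own illustration, not the paper's formulation.

```python
from itertools import product

# A clause is a list of signed 1-based indices, e.g. [1, -3] means (z_1 OR NOT z_3),
# over binary variables z_i in {0, 1}.

def clause_to_inequality(clause, n):
    """Return (coeffs, rhs) such that sum(coeffs[i]*z[i]) >= rhs iff the clause holds."""
    coeffs = [0] * n
    rhs = 1
    for lit in clause:
        i = abs(lit) - 1
        if lit > 0:
            coeffs[i] += 1          # positive literal contributes z_i
        else:
            coeffs[i] -= 1          # negated literal contributes (1 - z_i)
            rhs -= 1
    return coeffs, rhs

def satisfies(clause, z):
    return any((z[abs(l) - 1] == 1) if l > 0 else (z[abs(l) - 1] == 0) for l in clause)

# Exhaustive check on 3 variables: the inequality holds exactly when the clause does.
clause = [1, -3]                    # z_1 OR NOT z_3  ->  z_1 - z_3 >= 0
coeffs, rhs = clause_to_inequality(clause, 3)
ok = all((sum(c * zi for c, zi in zip(coeffs, z)) >= rhs) == satisfies(clause, z)
         for z in product([0, 1], repeat=3))
print(ok)   # True
```

Such inequalities slot directly into a QP's constraint set, which is why the paper moves from spectral clustering to a QP formulation.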