Constraint Preconditioning for Indefinite Linear Systems
SIAM J. Matrix Anal. Appl., 2000
Cited by 73 (10 self)
Abstract. The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2 × 2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions. Key words. preconditioning, indefinite matrices, Krylov subspace methods. AMS subject classifications. 65F10, 65F15, 65F50. 1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system with coefficient matrix $\mathcal{A} = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}$ ...
On the solution of equality constrained quadratic programming problems arising . . .
, 1998
KNITRO: An integrated package for nonlinear optimization
Large Scale Nonlinear Optimization, 35–59, 2006
Cited by 38 (3 self)
This paper describes Knitro 5.0, a C package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.
A new conjugate gradient method with guaranteed descent and an efficient line search
SIAM J. Optim., 2005
Cited by 27 (6 self)
Abstract. A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes–Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition $g_k^T d_k \le -\frac{7}{8}\|g_k\|^2$. Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A new line search scheme is developed that is efficient and highly accurate. Efficiency is achieved by exploiting properties of linear interpolants in a neighborhood of a local minimizer. High accuracy is achieved by using a convergence criterion, which we call the "approximate Wolfe" conditions, obtained by replacing the sufficient decrease criterion in the Wolfe conditions with an approximation that can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion. Numerical comparisons are given with both L-BFGS and conjugate gradient methods using the unconstrained optimization problems in the CUTE library.
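The direction update behind the descent condition quoted above can be sketched briefly. This is a simplified illustration of the Hager–Zhang update on a hypothetical strongly convex quadratic with an exact line search (the paper's contribution also includes its inexact "approximate Wolfe" line search, which is omitted here):

```python
import numpy as np

def hz_direction(g_new, g_old, d_old):
    """Hager-Zhang update: d = -g_new + beta * d_old, with
    beta = (y - 2 d_old ||y||^2 / (d_old . y)) . g_new / (d_old . y)."""
    y = g_new - g_old
    dy = d_old @ y
    beta = (y - 2.0 * d_old * (y @ y) / dy) @ g_new / dy
    return -g_new + beta * d_old

# hypothetical test problem: minimize 0.5 x^T Q x - b^T x
rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
Q = Q @ Q.T + np.eye(5)                  # symmetric positive definite
b = rng.standard_normal(5)
grad = lambda x: Q @ x - b

x = np.zeros(5)
g = grad(x)
d = -g
descent_ok = True
for _ in range(30):
    alpha = -(g @ d) / (d @ Q @ d)       # exact line search on the quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    d = hz_direction(g_new, g, d)
    # check the guaranteed descent condition g^T d <= -(7/8) ||g||^2
    descent_ok = descent_ok and bool(g_new @ d <= -0.875 * (g_new @ g_new) + 1e-12)
    g = g_new
```

On a quadratic with exact line search the scheme reduces to linear conjugate gradients, so the iteration terminates in at most five steps here while the descent condition holds at every iterate.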
A new active set algorithm for box constrained optimization
SIAM Journal on Optimization, 2006
Cited by 26 (6 self)
Abstract. An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established. For a nondegenerate stationary point, the algorithm eventually reduces to unconstrained optimization without restarts. Similarly, for a degenerate stationary point, where the strong second-order sufficient optimality condition holds, the algorithm eventually reduces to unconstrained optimization without restarts. A specific implementation of the ASA is given which exploits the recently developed cyclic Barzilai–Borwein (CBB) algorithm for the gradient projection step and the recently developed conjugate gradient algorithm CG_DESCENT for unconstrained optimization. Numerical experiments are presented using box constrained problems in the CUTEr and MINPACK-2 test problem libraries. Key words. nonmonotone gradient projection, box constrained optimization, active set algorithm.
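The gradient projection step at the core of this algorithm is just a gradient step followed by componentwise clipping onto the box. A minimal sketch of the plain (monotone, fixed-step) variant on a hypothetical projection problem; the ASA itself uses a nonmonotone line search with cyclic Barzilai–Borwein steps, which is omitted here:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x, lo, hi, step=0.1, iters=500):
    """Fixed-step gradient projection: x <- P_box(x - step * grad(x))."""
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x

# hypothetical problem: min 0.5 ||x - c||^2 subject to 0 <= x <= 1,
# whose solution is simply the projection of c onto the box
c = np.array([1.5, -0.3, 0.4])
x_star = gradient_projection(lambda x: x - c, np.zeros(3), 0.0, 1.0)
```

For this strongly convex objective the fixed-step iteration contracts, and the iterates converge to the clipped point `[1.0, 0.0, 0.4]`; the active components (those at a bound) are exactly what the ASA's branching rules try to identify.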
A survey of nonlinear conjugate gradient methods
Pacific Journal of Optimization, 2006
Cited by 26 (3 self)
Abstract. This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
Parallel Gradient Distribution in Unconstrained Optimization
SIAM Journal on Control and Optimization, 1994
Cited by 24 (7 self)
A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 mul...
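The scheme described above can be sketched serially. In this simplified illustration (a hypothetical convex test function, with the "processors" simulated by a loop), each worker performs several gradient steps using only its assigned portion of the gradient, and the synchronization step takes the best of the k candidate points, the rule the abstract gives for the nonconvex as well as convex case:

```python
import numpy as np

def pgd_cycle(f, grad, x, parts, step=0.05, inner=5):
    """One parallel-gradient-distribution cycle: each 'processor' updates
    only its own block of coordinates (its portion of the gradient), then
    the synchronization step keeps the best of the k candidate points."""
    candidates = []
    for idx in parts:
        y = x.copy()
        for _ in range(inner):               # serial steps on one block
            g = np.zeros_like(x)
            g[idx] = grad(y)[idx]            # this processor's gradient portion
            y = y - step * g
        candidates.append(y)
    return min(candidates, key=f)            # synchronization: best point wins

# hypothetical convex test problem: f(x) = ||x - 1||^2
f = lambda x: float(np.sum((x - 1.0) ** 2))
grad = lambda x: 2.0 * (x - 1.0)

parts = [np.array([0, 1]), np.array([2, 3])]  # k = 2 processors
x = np.zeros(4)
for _ in range(200):
    x = pgd_cycle(f, grad, x, parts)
```

Since the best-point rule never increases f and each block is improved infinitely often, the iterates here converge to the global minimizer at 1, consistent with the convex-case result stated in the abstract.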
Algorithm 851: CG DESCENT, a conjugate gradient method with guaranteed descent
ACM Trans. Math. Softw., 2006
Cited by 16 (3 self)
Recently, a new nonlinear conjugate gradient scheme was developed which satisfies the descent condition $g_k^T d_k \le -\frac{7}{8}\|g_k\|^2$ and which is globally convergent whenever the line search fulfills the Wolfe conditions. This article studies the convergence behavior of the algorithm; extensive numerical tests and comparisons with other methods for large-scale unconstrained optimization are given.
Numerical methods for electronic structure calculations of materials
, 2006
Cited by 15 (1 self)
The goal of this article is to give an overview of numerical problems encountered when determining the electronic structure of materials and the rich variety of techniques used to solve these problems. The paper is intended for a diverse scientific computing audience. For this reason, we assume the reader does not have an extensive background in the related physics. Our overview focuses on the nature of the numerical problems to be solved, their origin, and on the methods used to solve the resulting linear algebra or nonlinear optimization problems. It is common knowledge that the behavior of matter at the nanoscale is, in principle, entirely determined by the Schrödinger equation. In practice, this equation in its original form is not tractable. Successful, but approximate, versions of this equation, which allow one to study nontrivial systems, took about five or six decades to develop. In particular, the last two decades saw a flurry of activity in developing effective software. One of the main practical variants of the Schrödinger equation is based on what is referred to as Density Functional Theory (DFT). The combination of DFT with pseudopotentials allows one to obtain in an efficient way the ground state configuration for many materials. This article will emphasize pseudopotential-density ...
Active Set Strategies and the LP Dual Active Set Algorithm
, 1996
Cited by 9 (4 self)
After a general treatment of primal and dual active set strategies, we present the Dual Active Set Algorithm for linear programming and prove its convergence. An efficient implementation is developed using proximal point approximations. This implementation involves a primal/dual proximal iteration similar to one introduced by Rockafellar, and a new iteration based on optimization of a proximal vector parameter. This proximal parameter optimization problem is well conditioned, leading to rapid convergence of the conjugate gradient method, while the original proximal function is terribly conditioned, leading to almost undetectable convergence of the conjugate gradient method. Limits as a proximal scalar parameter tends to zero are evaluated. Intriguing numerical results are presented for Netlib test problems. Key Words. Linear programming, quadratic programming, active sets, dual method, least squares, proximal point, extrapolation, conjugate gradients, successive overrelaxation ...