Results 1 - 6 of 6
Stability of the diagonal pivoting method with partial pivoting
SIAM J. Matrix Anal. Appl., 1995
Abstract

Cited by 22 (9 self)
Abstract. LAPACK and LINPACK both solve symmetric indefinite linear systems using the diagonal pivoting method with the partial pivoting strategy of Bunch and Kaufman [Math. Comp., 31 (1977), pp. 163–179]. No proof of the stability of this method has appeared in the literature. It is tempting to argue that the diagonal pivoting method is stable for a given pivoting strategy if the growth factor is small. We show that this argument is false in general and give a sufficient condition for stability. This condition is not satisfied by the partial pivoting strategy because the multipliers are unbounded. Nevertheless, using a more specific approach we are able to prove the stability of partial pivoting, thereby filling a gap in the body of theory supporting LAPACK and LINPACK.
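The factorization analyzed in this abstract is the one behind LAPACK's symmetric indefinite solver; SciPy exposes it as scipy.linalg.ldl, which wraps the same LAPACK ?sytrf routine (diagonal pivoting with the Bunch-Kaufman partial pivoting strategy, using 1x1 and 2x2 pivot blocks). A minimal sketch, with an assumed 3-by-3 indefinite example matrix:

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric indefinite matrix (illustrative data, zero diagonal forces
# the pivoting strategy to choose 2x2 pivot blocks).
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])

# scipy.linalg.ldl wraps LAPACK's ?sytrf: the diagonal pivoting
# (Bunch-Kaufman) factorization, with D block diagonal.
L, D, perm = ldl(A, lower=True)

# The returned factors reconstruct A; L[perm, :] is lower triangular.
residual = np.linalg.norm(L @ D @ L.T - A)
```

The small residual illustrates the backward stability that the paper establishes rigorously for this pivoting strategy.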
Computing a Search Direction for Large-Scale Linearly-Constrained Nonlinear Optimization Calculations
1993
Abstract

Cited by 12 (8 self)
We consider the computation of Newton-like search directions that are appropriate when solving large-scale linearly-constrained nonlinear optimization problems. We investigate the use of both direct and iterative methods and consider efficient ways of modifying the Newton equations in order to ensure global convergence of the underlying optimization methods. 1 Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France 2 IAN-CNR, c/o Dipartimento di Matematica, 209, via Abbiategrasso, 27100 Pavia, Italy 3 Department of Mathematics, University of California, 405 Hilgard Avenue, Los Angeles, CA 90024-1555, USA 4 Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England 5 Current reports available by anonymous ftp from the directory "pub/reports" on camelot.cc.rl.ac.uk (internet 130.246.8.61) Keywords: Large-scale problems, unconstrained optimization, linearly constrained optimization, direct methods, iterative...
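The abstract does not specify which modification of the Newton equations is used. One common generic device for making the Newton direction a descent direction, offered here only as an illustrative sketch and not as the authors' method, is to add a multiple of the identity until a Cholesky factorization succeeds:

```python
import numpy as np

def modified_newton_direction(g, H, beta=1e-3, grow=10.0):
    """Solve (H + tau*I) p = -g, increasing tau from 0 until the
    shifted Hessian admits a Cholesky factorization, so that p is a
    descent direction even when H is indefinite. (A generic sketch;
    beta and grow are illustrative tuning parameters.)"""
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            break
        except np.linalg.LinAlgError:
            tau = max(grow * tau, beta)
    # Two triangular solves with the Cholesky factor.
    y = np.linalg.solve(L, -g)
    return np.linalg.solve(L.T, y)
```

Since the shifted matrix is positive definite, the computed direction p always satisfies the descent condition g.T @ p < 0 for nonzero g.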
An iterative working-set method for large-scale non-convex quadratic programming
2001
Abstract

Cited by 6 (1 self)
We consider a working-set method for solving large-scale quadratic programming problems for which there is no requirement that the objective function be convex. The methods are iterative at two levels, one level relating to the selection of the current working set, and the second due to the method used to solve the equality-constrained problem for this working set. A preconditioned conjugate gradient method is used for this inner iteration, with the preconditioner chosen especially to ensure feasibility of the iterates. The preconditioner is updated at the conclusion of each outer iteration to ensure that this feasibility requirement persists. The well-known equivalence between the conjugate-gradient and Lanczos methods is exploited when finding directions of negative curvature. Details of an implementation, the Fortran 90 package QPA in the forthcoming GALAHAD library, are given.
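The feasibility-preserving idea can be illustrated in its simplest dense form: conjugate gradients applied with the residual projected onto the null space of the working-set constraints, so every iterate satisfies A x = 0. A sketch under the assumption that the Hessian is positive definite on that null space (the negative-curvature handling via the Lanczos equivalence is omitted, and the dense projector is for illustration only):

```python
import numpy as np

def projected_cg(H, c, A, tol=1e-12, maxiter=100):
    """Minimize 0.5 x^T H x + c^T x subject to A x = 0 by conjugate
    gradients with the residual projected onto null(A). Assumes H is
    positive definite on null(A) and A has full row rank."""
    # Orthogonal projector onto null(A): dense, for illustration only.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    x = np.zeros_like(c)
    r = H @ x + c          # gradient of the quadratic at x
    g = P @ r              # projected residual: keeps iterates feasible
    p = -g
    rg = r @ g
    for _ in range(maxiter):
        if rg < tol:
            break
        Hp = H @ p
        alpha = rg / (p @ Hp)
        x = x + alpha * p
        r = r + alpha * Hp
        g = P @ r
        rg_new = r @ g
        p = -g + (rg_new / rg) * p
        rg = rg_new
    return x
```

Because every search direction lies in null(A), feasibility of the iterates is automatic, which is the property the paper's preconditioner is designed to preserve.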
Iterative Methods for Ill-Conditioned Linear Systems From Optimization
1998
Abstract

Cited by 5 (1 self)
Preconditioned conjugate-gradient methods are proposed for solving the ill-conditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stabilized using a restricted form of iterative refinement. Numerical results illustrate the approaches considered. 1 Email: n.gould@rl.ac.uk 2 Current reports available from "http://www.rl.ac.uk/departments/ccd/numerical/reports/reports.html". Department for Computation and Information, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire OX11 0QX. August 26, 1998. 1 Introduction Let A and H be, respectively, full-rank m by n (m ≤ n) and symmetric n by n real matrices. Suppose furthermore that any nonzero coefficients in this data are modest, that is, the data is O(1). (1) We consider the iterative solution of the linear system (H + A^T D^{-1} A) x = b (1.1) where b is modest an...
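The role of iterative refinement in stabilizing an inexact solve can be sketched on a small instance of a system of the form (1.1). Here a single-precision Cholesky factorization stands in for the preconditioned iterative solve, D = d I with an illustrative (assumed) penalty parameter d, and the data are randomly generated rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
H = np.eye(n)                     # modest symmetric part (assumed data)
A = rng.standard_normal((m, n))
d = 1e-3                          # small penalty parameter: the source
K = H + (A.T @ A) / d             #   of ill conditioning in K
b = rng.standard_normal(n)

# Stand-in for an inexact/preconditioned solve: factor K in single
# precision, so each solve is only accurate to roughly kappa(K)*eps32.
L = np.linalg.cholesky(K.astype(np.float32))

def approx_solve(r):
    y = np.linalg.solve(L, r.astype(np.float32))
    return np.linalg.solve(L.T, y).astype(np.float64)

x = approx_solve(b)
# Iterative refinement: corrections with residuals computed in double
# precision recover near-full accuracy despite the ill conditioning.
for _ in range(5):
    r = b - K @ x
    x = x + approx_solve(r)

rel_res = np.linalg.norm(b - K @ x) / np.linalg.norm(b)
```

Each refinement step contracts the error by roughly the accuracy of the inner solve, which is why a restricted form of refinement suffices to stabilize the method.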
Numerical Methods for Large-Scale Non-Convex Quadratic Programming
2001
Abstract

Cited by 2 (0 self)
We consider numerical methods for finding (weak) second-order critical points for large-scale non-convex quadratic programming problems. We describe two new methods. The first is of the active-set variety. Although convergent from any starting point, it is intended primarily for the case where a good estimate of the optimal active set can be predicted. The second is of interior-point trust-region type, and has proved capable of solving problems involving up to half a million unknowns and constraints. The solution of a key equality-constrained subproblem, common to both methods, is described. The results of comparative tests on a large set of convex and non-convex quadratic programming examples are given.
Departments of Mathematics
Abstract
The numerical analyst should be motivated by the need to compute (numerical) solutions to realistic models of real-life processes. On many occasions, phenomena are modelled by ordinary differential equations when equations that incorporate an after-effect or delay can provide more realistic models. One is led to consider such problems as y'(t) = F(t, y(t), y(α(t, y(t)))) (t ≥ t₀), y(t) = ψ(t) (t ≤ t₀), wherein α(t, y(t)) ≤ t, or y'(t) = G(t, ∫_{−∞}^{t} K(t, s, y(t), y(s)) ds) (t ≥ t₀), y(t) = ψ(t) (t ≤ t₀) (and systems of similar equations). The above retarded functional differential equations are examples of causal or Volterra equations, and we shall refer to some concrete cases that relate to real phenomena. A classical study of the analytical solution of equations such as those above concentrates on existence and uniqueness of solutions (and the theory depends on the type of memory properties in the equation). But when these issues are resolved, the iss...
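For the constant-delay special case α(t, y(t)) = t − τ, the simplest numerical scheme combines forward Euler with interpolation of the stored solution history. A minimal illustrative sketch, not taken from the abstract's sources:

```python
import numpy as np

def dde_euler(F, psi, tau, t0, t1, h):
    """Forward Euler for the constant-delay problem
    y'(t) = F(t, y(t), y(t - tau)) for t >= t0, y(t) = psi(t) for t <= t0.
    The delayed value is read from the history function psi or, once past
    t0, linearly interpolated from the computed grid (assumes tau > 0
    and h <= tau)."""
    ts, ys = [t0], [psi(t0)]
    def y_at(t):
        if t <= t0:
            return psi(t)          # prescribed history
        return np.interp(t, ts, ys)  # interpolate computed solution
    t, y = t0, psi(t0)
    while t < t1 - 1e-12:
        y = y + h * F(t, y, y_at(t - tau))
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Example: y'(t) = -y(t - 1), y(t) = 1 for t <= 0. By the method of
# steps the exact solution is piecewise polynomial, with y(1) = 0 and
# y(2) = -1/2.
ts, ys = dde_euler(lambda t, y, yd: -yd, lambda t: 1.0,
                   tau=1.0, t0=0.0, t1=2.0, h=0.01)
```

This "method of steps" viewpoint, integrating interval by interval using the previously computed history, is the natural starting point for the numerical treatment of the retarded equations above.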