Results 1–9 of 9
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
1978
"... An algorithm for solving largescale nonlinear ' programs with linear constraints is presented. The method combines efficient sparsematrix techniques as in the revised simplex method with stable quasiNewton methods for handling the nonlinearities. A generalpurpose production code (MINOS) is descr ..."
Abstract

Cited by 75 (11 self)
 Add to MetaCart
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Improved error bounds for underdetermined system solvers
SIAM J. Matrix Anal. Appl.
1993
"... The minimal 2norm solution to an underdetermined system Ax = b of full rank can be computed using a QR factorization of A T in two di erent ways. One requires storage and reuse of the orthogonal matrix Q while the method of seminormal equations does not. Existing error analyses show that both me ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
The minimal 2-norm solution to an underdetermined system Ax = b of full rank can be computed using a QR factorization of Aᵀ in two different ways. One requires storage and reuse of the orthogonal matrix Q while the method of seminormal equations does not. Existing error analyses show that both methods produce computed solutions whose normwise relative error is bounded to first order by cκ₂(A)u, where c is a constant depending on the dimensions of A, κ₂(A) = ‖A⁺‖₂‖A‖₂ is the 2-norm condition number, and u is the unit roundoff. We show that these error bounds can be strengthened by replacing κ₂(A) by the potentially much smaller quantity cond₂(A) = ‖|A⁺||A|‖₂, which is invariant under row scaling of A. We also show that cond₂(A) reflects the sensitivity of the minimum norm solution x to rowwise relative perturbations in the data A and b. For square linear systems Ax = b, row equilibration is shown to endow ...
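The two QR-of-Aᵀ approaches this abstract contrasts can be sketched in a few lines of NumPy. This is an illustration on random data, not the paper's error analysis; the dimensions and the names `x_q` / `x_sne` are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 7                      # underdetermined: fewer equations than unknowns
A = rng.standard_normal((m, n))  # full row rank with probability 1
b = rng.standard_normal(m)

# Factorize A^T = Q R, with Q (n x m) having orthonormal columns and R (m x m)
# upper triangular; then A A^T = R^T R.
Q, R = np.linalg.qr(A.T)

# Method 1 (stores and reuses Q): x = Q R^{-T} b.
x_q = Q @ np.linalg.solve(R.T, b)

# Method 2 (seminormal equations, Q discarded): solve R^T R z = b, x = A^T z.
z = np.linalg.solve(R, np.linalg.solve(R.T, b))
x_sne = A.T @ z

# Both compute the minimal 2-norm solution x = A^+ b.
x_pinv = np.linalg.pinv(A) @ b
```

Both variants recover A⁺b; the seminormal-equations route never stores Q, which is the storage trade-off the error analysis is about.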
Perturbation analysis for block downdating of a Cholesky decomposition
1994
"... this paper, we assume that the data matrix at any stage has full rank, rank(X) = rank( ~ X) = n: ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
In this paper, we assume that the data matrix at any stage has full rank: rank(X) = rank(X̃) = n.
Updating the QR Factorization and the Least Squares Problem
"... In this paper we treat the problem of updating the QR factorization, with applications to the least squares problem. Algorithms are presented that compute the factorization Ã = ˜ Q ˜ R where Ã is the matrix A = QR after it has had a number of rows or columns added or deleted. This is achieved by up ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
In this paper we treat the problem of updating the QR factorization, with applications to the least squares problem. Algorithms are presented that compute the factorization Ã = Q̃R̃, where Ã is the matrix A = QR after it has had a number of rows or columns added or deleted. This is achieved by updating the factors Q and R, and we show this can be much faster than computing the factorization of Ã from scratch. We consider algorithms that exploit the Level 3 BLAS where possible and place no restriction on the dimensions of A or the number of rows and columns added or deleted. For some of our algorithms we present Fortran 77 LAPACK-style code and show the backward error of our updated factors is comparable to the error bounds of the QR factorization of Ã.
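The row-append case can be sketched with Givens rotations in NumPy. This is a minimal unblocked version, not the paper's blocked Level 3 BLAS algorithms; `qr_add_row` is an illustrative name, and reduced factors with m ≥ n are assumed:

```python
import numpy as np

def qr_add_row(Q, R, v):
    """Update a reduced QR factorization A = Q R (A is m x n, m >= n) after
    appending the row v to A, using Givens rotations."""
    m, n = Q.shape[0], R.shape[1]
    # Embed: [A; v] = blkdiag(Q, 1) @ [R; v].
    R1 = np.vstack([R, v.astype(float)])
    Q1 = np.zeros((m + 1, n + 1))
    Q1[:m, :n] = Q
    Q1[m, n] = 1.0
    # Annihilate the appended row of R1 column by column with rotations acting
    # on rows (k, n); apply each rotation's transpose to the corresponding
    # columns of Q1 so the product Q1 @ R1 is unchanged.
    for k in range(n):
        a, bk = R1[k, k], R1[n, k]
        r = np.hypot(a, bk)
        if r == 0.0:
            continue
        c, s = a / r, bk / r
        G = np.array([[c, s], [-s, c]])
        R1[[k, n], k:] = G @ R1[[k, n], k:]
        Q1[:, [k, n]] = Q1[:, [k, n]] @ G.T
    return Q1[:, :n], R1[:n, :]
```

Each rotation touches O(n − k) entries, so the triangular update is O(n²) plus the cost of carrying Q along, versus O(mn²) for refactorizing Ã from scratch.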
Perturbation Analyses for the Cholesky Downdating Problem
1996
"... New perturbation analyses are presented for the block Cholesky downdating problem U T U = R T R \Gamma X T X. These show how changes in R and X alter the Cholesky factor U . There are two main cases for the perturbation matrix \DeltaR in R: (1) \DeltaR is a general matrix; (2)\DeltaR is an up ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
New perturbation analyses are presented for the block Cholesky downdating problem UᵀU = RᵀR − XᵀX. These show how changes in R and X alter the Cholesky factor U. There are two main cases for the perturbation matrix ΔR in R: (1) ΔR is a general matrix; (2) ΔR is an upper triangular matrix. For both cases, first-order perturbation bounds for the downdated Cholesky factor U are given using two approaches: a detailed "matrix-vector equation" analysis, which provides tight bounds and resulting true condition numbers that are unfortunately costly to compute, and a simpler "matrix equation" analysis, which provides results that are weaker but easier to compute or estimate. The analyses more accurately reflect the sensitivity of the problem than previous results. As X → 0, the asymptotic values of the new condition numbers for case (1) have bounds that are independent of κ₂(R) if R was found using the standard pivoting strategy in the Cholesky factorization, and the asymptotic values of the new condition numbers for case (2) are unity. Simple reasoning shows this last result must be true for the sensitivity of the problem, but previous condition numbers did not exhibit this.
Key words: perturbation analysis, sensitivity, condition, asymptotic condition, Cholesky factorization, downdating
AMS subject classifications: 15A23, 65F35
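For a single row x (block downdating applies the same step row by row), the downdate UᵀU = RᵀR − xxᵀ whose sensitivity is analyzed above can be computed with hyperbolic rotations. A LINPACK-style sketch, assuming RᵀR − xxᵀ stays safely positive definite; `chol_downdate` is an illustrative name:

```python
import numpy as np

def chol_downdate(R, x):
    """Return upper-triangular U with U^T U = R^T R - x x^T, given
    upper-triangular R; requires R^T R - x x^T to be positive definite."""
    U = R.astype(float).copy()
    x = x.astype(float).copy()
    n = U.shape[0]
    for k in range(n):
        # Hyperbolic rotation zeroing x[k] against the pivot U[k, k].
        r = np.sqrt(U[k, k] ** 2 - x[k] ** 2)  # real iff still positive definite
        c, s = r / U[k, k], x[k] / U[k, k]
        U[k, k] = r
        U[k, k + 1:] = (U[k, k + 1:] - s * x[k + 1:]) / c
        x[k + 1:] = c * x[k + 1:] - s * U[k, k + 1:]
    return U
```

The square root on the pivot is exactly where ill-conditioning bites: as RᵀR − xxᵀ approaches semidefiniteness, U[k, k]² − x[k]² approaches zero and the rotation blows up.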
A weakly stable algorithm for general Toeplitz systems
 Numerical Algorithms
1995
"... We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R T R is close to A T A. Thus, when the algorithm is used to solve the seminormal equations R T Rx = A T b, we obtain a weakly stable method for the solution of a nonsingular T ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that RᵀR is close to AᵀA. Thus, when the algorithm is used to solve the seminormal equations RᵀRx = Aᵀb, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem min ‖Ax − b‖₂.
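The seminormal-equations step the abstract refers to is easy to sketch. Here plain Householder QR stands in for the paper's fast Toeplitz algorithm for computing R, and the matrix construction and sizes are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
t = rng.standard_normal(m + n - 1)
# Toeplitz matrix: A[i, j] = t[i - j + n - 1], constant along diagonals.
A = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(m)])
b = rng.standard_normal(m)

# R from ordinary QR here; the paper obtains R by a fast Toeplitz algorithm.
R = np.linalg.qr(A, mode='r')

# Seminormal equations R^T R x = A^T b: two triangular solves, Q never formed.
x = np.linalg.solve(R, np.linalg.solve(R.T, A.T @ b))
```

Since RᵀR = AᵀA, x solves the normal equations and hence the least squares problem min ‖Ax − b‖₂.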
IMPROVED ERROR BOUNDS FOR UNDERDETERMINED SYSTEM SOLVERS
"... Abstract. The minimal 2norm solution to an underdetermined system Ax b of full rank can be computed using a QR factorization ofAT in two different ways. One method requires storage and reuse of the orthogonal matrix Q, while the method of seminormal equations does not. Existing error analyses show ..."
Abstract
 Add to MetaCart
Abstract. The minimal 2-norm solution to an underdetermined system Ax = b of full rank can be computed using a QR factorization of Aᵀ in two different ways. One method requires storage and reuse of the orthogonal matrix Q, while the method of seminormal equations does not. Existing error analyses show that both methods produce computed solutions whose normwise relative error is bounded to first order by cκ₂(A)u, where c is a constant depending on the dimensions of A, κ₂(A) = ‖A⁺‖₂‖A‖₂ is the 2-norm condition number, and u is the unit roundoff. It is shown that these error bounds can be strengthened by replacing κ₂(A) by the potentially much smaller quantity cond₂(A) = ‖|A⁺||A|‖₂, which is invariant under row scaling of A. It is also shown that cond₂(A) reflects the sensitivity of the minimum norm solution x to rowwise relative perturbations in the data A and b. For square linear systems Ax = b, row equilibration is shown to endow solution methods based on LU or QR factorization of A with relative error bounds proportional to cond∞(A), just as when a QR factorization of Aᵀ is used. The advantages of using fixed precision iterative refinement in this context instead of row equilibration are explained.
Key words: underdetermined system, seminormal equations, QR factorization, rounding error analysis, backward error, componentwise error bounds, iterative refinement, row scaling
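The row-equilibration idea in the final sentences can be sketched directly: scale each row of A (and the matching entry of b) by the reciprocal of the row's largest absolute entry before factorizing. A NumPy illustration on an artificially badly row-scaled system; the scaling magnitudes are an assumption for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# A deliberately badly row-scaled square system.
A = rng.standard_normal((n, n)) * np.logspace(0, 6, n)[:, None]
b = rng.standard_normal(n)

# Row equilibration: divide each row of [A b] by the row's largest |entry|
# in A, then solve the scaled system by LU with partial pivoting.
d = 1.0 / np.abs(A).max(axis=1)
x = np.linalg.solve(d[:, None] * A, d * b)

# One step of fixed precision iterative refinement, the alternative the
# abstract recommends: correct x using the working-precision residual.
x = x + np.linalg.solve(d[:, None] * A, d * (b - A @ x))
```

The equilibrated matrix has rows of comparable size, which is what makes error bounds in terms of cond∞(A) rather than κ(A) attainable.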
MATRIX FACTORIZATIONS IN OPTIMIZATION OF NONLINEAR FUNCTIONS SUBJECT TO LINEAR CONSTRAINTS
North-Holland Publishing Company
1974
"... Several ways of implementing methods for solving nonlinear optimization problems involving linear inequality and equality constraints using numerically stable matrix factorizations are described. The methods considered all follow an active constraint set approach and include quadratic programming, v ..."
Abstract
 Add to MetaCart
Several ways of implementing methods for solving nonlinear optimization problems involving linear inequality and equality constraints using numerically stable matrix factorizations are described. The methods considered all follow an active constraint set approach and include quadratic programming, variable metric, and modified Newton methods.
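Active-set methods of this kind repeatedly solve an equality-constrained subproblem on the current working set. One standard stable-factorization realization, sketched here for a quadratic subproblem via the null-space method with a QR factorization of the constraint matrix (a generic sketch, not any of the paper's specific implementations; `eq_qp` is an illustrative name):

```python
import numpy as np

def eq_qp(H, g, C, d):
    """Minimize 0.5 x^T H x + g^T x subject to C x = d (C full row rank,
    H positive definite on the null space of C), by the null-space method."""
    m, n = C.shape
    Q, R = np.linalg.qr(C.T, mode='complete')  # C^T = Q [R; 0]
    Q1, Q2 = Q[:, :m], Q[:, m:]                # columns of Q2 span null(C)
    x0 = Q1 @ np.linalg.solve(R[:m].T, d)      # particular solution: C x0 = d
    # Substitute x = x0 + Q2 y and solve the reduced, unconstrained problem
    # (Q2^T H Q2) y = -Q2^T (g + H x0).
    y = np.linalg.solve(Q2.T @ H @ Q2, -Q2.T @ (g + H @ x0))
    return x0 + Q2 @ y
```

Working with the orthonormal null-space basis Q2 keeps the reduced Hessian well-scaled, which is the numerical-stability point the factorization approach is after.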