Results 1–8 of 8
Numerical solution of saddle point problems
Acta Numerica, 2005
Cited by 180 (30 self)
Abstract:
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
Computing the Generalized Singular Value Decomposition
SIAM J. Sci. Comput., 1991
Cited by 19 (1 self)
Abstract:
We present a variation of Paige's algorithm for computing the generalized singular value decomposition (GSVD) of two matrices A and B. There are two innovations. The first is a new preprocessing step which reduces A and B to upper triangular forms satisfying certain rank conditions. The second is a new 2×2 triangular GSVD algorithm, which constitutes the inner loop of Paige's algorithm. We present proofs of stability and high accuracy of the 2×2 GSVD algorithm, and demonstrate it using examples on which all previous algorithms fail.
1 Introduction. The purpose of this paper is to describe a variation of Paige's algorithm [28] for computing the following generalized singular value decomposition (GSVD) introduced by Van Loan [33], and Paige and Saunders [25]. This is also called the quotient singular value decomposition (QSVD) in [8]. Theorem 1.1. Let A ∈ ℝ^(m×n) and B ∈ ℝ^(p×n) have rank([Aᵀ, Bᵀ]) = n. Then there are orthogonal matrices U, V and Q su...
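A small numerical illustration of the quantity being computed here (this is not Paige's algorithm; it only uses the textbook characterization, valid when B is square and nonsingular, that the generalized singular values of (A, B) are the singular values of AB⁻¹ — the test matrices are made up):

```python
import numpy as np
from scipy.linalg import svdvals, eigh

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)   # safely nonsingular

# Generalized singular values of (A, B): singular values of A @ inv(B)
gsv = svdvals(A @ np.linalg.inv(B))

# Cross-check: sigma^2 are the eigenvalues of A^T A x = lambda B^T B x
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
assert np.allclose(np.sort(gsv**2), np.sort(lam))
```

Forming AB⁻¹ explicitly is exactly what stable GSVD algorithms like the one in this paper avoid; the sketch is only a definition check.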
The CSD, GSVD, their Applications and Computations
University of Minnesota, 1992
Cited by 10 (0 self)
Abstract:
Since the CS decomposition (CSD) and the generalized singular value decomposition (GSVD) emerged as generalizations of the singular value decomposition about fifteen years ago, they have proved to be very useful tools in numerical linear algebra. In this paper, we review the theoretical and numerical development of the decompositions, discuss some of their applications and present some new results and observations. We also point out some open problems. A Fortran 77 code has been written that computes the CSD and the GSVD.
Keywords: singular value decomposition, CS decomposition, generalized singular value decomposition. Subject Classifications: AMS(MOS): 65F30; CR: G1.3.
1 Introduction. The singular value decomposition (SVD) of a matrix is one of the most important tools in numerical linear algebra. It has been widely used in scientific computing. Recently, Stewart [52] gave an excellent survey of the early history of the SVD, going back to the contributions of E. Beltrami and C. Jord...
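For readers who want to experiment with the CSD today, SciPy (version 1.5 and later) exposes a CS decomposition routine; a minimal sketch with a made-up orthogonal test matrix, assuming a recent SciPy:

```python
import numpy as np
from scipy.linalg import cossin, qr

rng = np.random.default_rng(2)
# Random orthogonal matrix via QR, then its CS decomposition
Q, _ = qr(rng.standard_normal((7, 7)))
p, q = 3, 4                       # row/column sizes of the (1,1) block
U, CS, Vh = cossin(Q, p=p, q=q)   # Q = U @ CS @ Vh
assert np.allclose(U @ CS @ Vh, Q)
```

The CS factor carries the cosines and sines of the principal angles between the subspaces defined by the partition, which is the link to the GSVD discussed in the paper.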
Accuracy and Stability of the Null Space Method for Solving the Equality Constrained Least Squares Problem
BIT, 1999
Cited by 10 (4 self)
Abstract:
The null space method is a standard method for solving the linear least squares problem subject to equality constraints (the LSE problem). We show that three variants of the method, including one used in LAPACK that is based on the generalized QR factorization, are numerically stable. We derive two perturbation bounds for the LSE problem: one of standard form that is not attainable, and a bound that yields the condition number of the LSE problem to within a small constant factor. By combining the backward error analysis and perturbation bounds we derive an approximate forward error bound suitable for practical computation. Numerical experiments are given to illustrate the sharpness of this bound. Key words: Constrained least squares problem, null space method, rounding error analysis, condition number, generalized QR factorization, LAPACK
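A minimal dense sketch of the null space method analyzed here (this is not the LAPACK code; the helper name and test data are made up): factor Bᵀ = QR, use a triangular solve to satisfy the constraint, then minimize over the null space component.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular, lstsq

def lse_nullspace(A, b, B, d):
    """Null space method sketch for min ||b - A x||_2  s.t.  B x = d.
    Assumes B (p x n, p < n) has full row rank."""
    p, n = B.shape
    Q, R = qr(B.T)                      # full QR: B.T = Q @ R
    Q1, Q2 = Q[:, :p], Q[:, p:]
    R1 = R[:p, :]
    # Constraint: B x = R1.T @ (Q1.T x) = d  -> lower triangular solve
    y1 = solve_triangular(R1.T, d, lower=True)
    # General solution x = Q1 y1 + Q2 y2; minimize over the free part y2
    y2 = lstsq(A @ Q2, b - A @ (Q1 @ y1))[0]
    return Q1 @ y1 + Q2 @ y2

rng = np.random.default_rng(3)
m, n, p = 12, 8, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
B, d = rng.standard_normal((p, n)), rng.standard_normal(p)
x = lse_nullspace(A, b, B, d)
assert np.allclose(B @ x, d)            # constraint satisfied
```

The LAPACK variant the paper studies works from the generalized QR factorization of (B, A) rather than this explicit two-stage construction.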
Multifrontal Computation with the Orthogonal Factors of Sparse Matrices
SIAM Journal on Matrix Analysis and Applications, 1994
Cited by 9 (0 self)
Abstract:
This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m ≥ n by QR factorization of A and transformation of the right-hand side vector b to Qᵀb. A multifrontal-based method for computing Qᵀb using Householder factorization is presented. A theoretical operation count for the K by K unbordered grid model problem and for problems defined on graphs with √n-separators shows that the proposed method requires O(N_R) storage and multiplications to compute Qᵀb, where N_R = O(n log n) is the number of nonzeros of the upper triangular factor R of A. In order to introduce BLAS-2 operations, Schreiber and Van Loan's Storage-Efficient WY Representation [SIAM J. Sci. Stat. Computing, 10 (1989), pp. 55–57] is applied to the orthogonal factor Q_i of each frontal matrix F_i. If this technique is used, the bound on storage increases to O(n(log n)²). Some numerical results for the grid model problems as well as Harwell–Boeing problems...
Backward Error Bounds for Constrained Least Squares Problems
1999
Cited by 6 (1 self)
Abstract:
We derive an upper bound on the normwise backward error of an approximate solution to the equality constrained least squares problem min_{Bx=d} ‖b − Ax‖₂. Instead of minimizing over the four perturbations to A, b, B and d, we fix those to B and d and minimize over the remaining two; we obtain an explicit solution of this simplified minimization problem. Our experiments show that backward error bounds of practical use are obtained when B and d are chosen as the optimal normwise relative backward perturbations to the constraint system, and we find that when the bounds are weak they can be improved by direct search optimization. We also derive upper and lower backward error bounds for the problem of least squares minimization over a sphere: min_{‖x‖₂ ≤ α} ‖b − Ax‖₂. Key words: Equality constrained least squares problem, least squares minimization over a sphere, null space method, elimination method, method of weighting, backward error, backward stability. AMS subject classific...
On the Weighting Method for Least Squares Problems with Linear Equality Constraints
1997
Cited by 2 (0 self)
Abstract:
The weighting method for solving a least squares problem with linear equality constraints multiplies the constraints by a large number and appends them to the top of the least squares problem, which is then solved by standard techniques. In this paper we give a new analysis of the method, based on the QR decomposition, that exhibits many features of the algorithm. In particular it suggests a natural criterion for choosing the weighting factor. (G. W. Stewart, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. This work was supported in part by the National Science Foundation under grant CCR 95503126. The report is available by anonymous ftp from thales.cs.umd.edu in the directory pub/reports or through the web at http://www.cs.umd.edu/ stewart/.)
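A minimal sketch of the weighting method as described above (the data are made up, and the particular weight τ = 1e8 is illustrative only, not the criterion the paper derives): stack τB on top of A and τd on top of b, then solve the unconstrained problem.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, p = 12, 8, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
B, d = rng.standard_normal((p, n)), rng.standard_normal(p)   # B x = d

tau = 1e8                                 # large, illustrative weight
Aw = np.vstack([tau * B, A])              # constraints appended on top
bw = np.concatenate([tau * d, b])
x = np.linalg.lstsq(Aw, bw, rcond=None)[0]

# The constraint is nearly enforced: ||B x - d|| is tiny for large tau
constraint_resid = np.linalg.norm(B @ x - d)
```

Taking τ too large inflates the condition number of the stacked matrix, which is why a principled choice of the weighting factor, as analyzed in this report, matters.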
Row-Wise Backward Stable Elimination Methods for the Equality Constrained Least Squares Problem
Manchester Centre for Computational Mathematics, 1999
Cited by 1 (1 self)
Abstract:
It is well known that the solution of the equality constrained least squares (LSE) problem min_{Bx=d} ‖b − Ax‖₂ is the limit of the solution of the unconstrained weighted least squares problem min_x ‖[μd; b] − [μB; A]x‖₂ as the weight μ tends to infinity, assuming that [Bᵀ Aᵀ]ᵀ has full rank. We derive a method for the LSE problem by applying Householder QR factorization with column pivoting to this weighted problem and taking the limit analytically, with an appropriate rescaling of rows. The method obtained is a type of direct elimination method. We adapt existing error analysis for the unconstrained problem to obtain a row-wise backward error bound for the method. The bound shows that, provided row pivoting or row sorting is used, the method is well-suited to problems in which the rows of A and B vary widely in norm. As a by-product of our analysis, we derive a row-wise backward error bound of precisely the same form for the standard elimination m...