Results 1–8 of 8
Perturbation Analyses for the QR Factorization
SIAM J. Matrix Anal. Appl.
, 1997
Abstract

Cited by 20 (11 self)
This paper gives perturbation analyses for Q1 and R in the QR factorization A = Q1 R, Q1^T Q1 = I, for a given real m × n matrix A of rank n. The analyses reflect the sensitivity of the problem more accurately than previous normwise results. The condition numbers here are altered by any column pivoting used in AP = Q1 R, and the condition numbers for R are bounded for a fixed n when the standard column pivoting strategy is used. This strategy also tends to improve the condition of Q1, so the computed Q1 and R will probably both have greatest accuracy when the standard column pivoting strategy is used. First order normwise perturbation analyses are given for both Q1 and R. The analysis for R may be approached in two ways: a detailed "matrix-vector equation" analysis, which provides tight bounds and resulting true condition numbers that are unfortunately costly to compute and not very intuitive, and a simpler "matrix equation" analysis, which provides results that are usually weaker but easier to interpret, and which allow efficient computation of a satisfactory estimate for the true condition number. Key Words: QR factorization, perturbation analysis, condition estimation, matrix equations, pivoting. AMS Subject Classifications: 15A23, 65F35.
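The claim that column pivoting tends to improve the conditioning of the R factor can be probed numerically. Below is a minimal NumPy sketch, not from the paper itself: the test matrix, the perturbation size, and the greedy `colpivot_order` routine (which reproduces the standard max-residual-norm pivot choice) are all illustrative assumptions. It measures the relative change in R under a tiny perturbation of A, with and without a fixed pivoting permutation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5

# Illustrative rank-n matrix with badly scaled columns, so kappa_2(A) is large.
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -4, n))
kappa = np.linalg.cond(A, 2)

# Small perturbation scaled so that ||dA||_F = eps * ||A||_2.
eps = 1e-8
E = rng.standard_normal((m, n))
dA = eps * np.linalg.norm(A, 2) * E / np.linalg.norm(E, "fro")

def r_factor(B):
    """Thin QR, with signs fixed so R has a positive diagonal."""
    Q1, R = np.linalg.qr(B)
    s = np.sign(np.diag(R))
    return Q1 * s, s[:, None] * R

def rel_change_in_r(B, dB):
    """||R(B + dB) - R(B)||_F / ||R(B)||_2."""
    _, R = r_factor(B)
    _, Rt = r_factor(B + dB)
    return np.linalg.norm(Rt - R, "fro") / np.linalg.norm(R, 2)

def colpivot_order(B):
    """Standard column-pivoting order: at each step take the column with
    the largest residual norm, then project it out of the rest."""
    W = B.astype(float).copy()
    remaining = list(range(W.shape[1]))
    order = []
    for _ in range(W.shape[1]):
        j = max(remaining, key=lambda c: np.linalg.norm(W[:, c]))
        order.append(j)
        remaining.remove(j)
        q = W[:, j] / np.linalg.norm(W[:, j])
        for c in remaining:
            W[:, c] -= q * (q @ W[:, c])
    return order

plain = rel_change_in_r(A, dA)                # sensitivity of R in A = Q1 R
p = colpivot_order(A)
pivoted = rel_change_in_r(A[:, p], dA[:, p])  # sensitivity of R in AP = Q1 R

print(f"kappa_2(A)           = {kappa:.2e}")
print(f"rel. change, plain   = {plain:.2e}")
print(f"rel. change, pivoted = {pivoted:.2e}")
```

With badly column-scaled matrices such as this one, the pivoted relative change is typically much smaller, in line with the bounded condition numbers for R described in the abstract.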
New Perturbation Analyses For The Cholesky Factorization
, 1995
Abstract

Cited by 9 (7 self)
The purpose of this paper is to establish new first order bounds on the norm of the perturbation in the Cholesky factor, sharper than those of Sun (1991) and Stewart (1993). We also obtain a new first order bound for the components of the perturbation, and give strict bounds on the norm and components of the perturbation. In the remainder of this section we review some useful tools and results by showing one way of obtaining the first order normwise perturbation bound given by Sun (1991) and Stewart (1993) for the Cholesky factor. Theorem 1. Let A ∈ R^{n×n} be symmetric positive definite, with the Cholesky factorization A = R^T R. Let ΔA ∈ R^{n×n} be symmetric. If ε := ||ΔA||_F / ||A||_2 satisfies κ_2(A) ε < 1, (1) where κ_2(A) := ||A||_2 ||A^{-1}||_2, then A + ΔA has the Cholesky factorization A + ΔA = (R + ΔR)^T (R + ΔR), where ||ΔR||_F / ||R||_2 …
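The setup of Theorem 1 is easy to exercise numerically. The NumPy sketch below is illustrative only: the test matrix and perturbation direction are assumptions, and it simply reports the observed ratio ||ΔR||_F / ||R||_2 against κ_2(A) ε under the hypothesis κ_2(A) ε < 1, rather than implementing the paper's sharper bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Illustrative symmetric positive definite A with Cholesky factor R, A = R^T R.
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)
R = np.linalg.cholesky(A).T          # NumPy returns lower-triangular L; R = L^T

# Symmetric dA with eps = ||dA||_F / ||A||_2 chosen so kappa_2(A) * eps < 1,
# which is hypothesis (1) of the theorem.
kappa = np.linalg.cond(A, 2)
eps = 0.01 / kappa
S = rng.standard_normal((n, n))
S = (S + S.T) / 2
dA = eps * np.linalg.norm(A, 2) * S / np.linalg.norm(S, "fro")

# Since ||dA||_2 <= 0.01 / ||A^{-1}||_2, A + dA stays positive definite,
# so the perturbed Cholesky factorization exists.
Rt = np.linalg.cholesky(A + dA).T
rel = np.linalg.norm(Rt - R, "fro") / np.linalg.norm(R, 2)

print(f"kappa_2(A) * eps   = {kappa * eps:.3f}")
print(f"||dR||_F / ||R||_2 = {rel:.2e}")
```

The observed relative change stays on the order of κ_2(A) ε, which is the scaling of the classical first order bound this abstract sets out to sharpen.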
Componentwise Perturbation Analyses for the QR Factorization
Abstract

Cited by 8 (4 self)
This paper gives componentwise perturbation analyses for Q and R in the QR factorization A = QR, Q^T Q = I, R upper triangular, for a given real m × n matrix A of rank n. Such specific analyses are important, for example, when the columns of A are badly scaled. First order perturbation bounds are given for both Q and R.
Sensitivity Analyses for Factorizations of Sparse or Structured Matrices
, 1998
Abstract

Cited by 5 (2 self)
For a unique factorization of a matrix B, the effect of sparsity or other structure on measuring the sensitivity of the factors of B to some change G in B is considered. In particular, norm-based analyses of the QR and Cholesky factorizations are examined. If B is structured but G is not, it is shown that the expressions for the condition numbers are identical to those when B is not structured, but because of the structure the condition numbers may be easier to estimate. If G is structured, whether B is or not, then the expressions for the condition numbers can change, and it is shown how to derive the new expressions. Cases where B and G have the same sparsity structure occur often: here, for the QR factorization, an example shows that the value of the new expression can be arbitrarily smaller, but for the Cholesky factorization of a tridiagonal matrix and perturbation the value of the new expression cannot be significantly different from the value of the old one. Thus taking account of sparsity can show the condition is much better than would be suggested by ignoring it, but only for some classes of problems, and perhaps only for some types of factorization. The generalization of these ideas to other factorizations is discussed.
Perturbation Analyses for the Cholesky Downdating Problem
, 1996
Abstract

Cited by 4 (2 self)
New perturbation analyses are presented for the block Cholesky downdating problem U^T U = R^T R − X^T X. These show how changes in R and X alter the Cholesky factor U. There are two main cases for the perturbation matrix ΔR in R: (1) ΔR is a general matrix; (2) ΔR is an upper triangular matrix. For both cases, first order perturbation bounds for the downdated Cholesky factor U are given using two approaches: a detailed "matrix-vector equation" analysis, which provides tight bounds and resulting true condition numbers that are unfortunately costly to compute, and a simpler "matrix equation" analysis, which provides results that are weaker but easier to compute or estimate. The analyses reflect the sensitivity of the problem more accurately than previous results. As X → 0, the asymptotic values of the new condition numbers for case (1) have bounds that are independent of κ_2(R) if R was found using the standard pivoting strategy in the Cholesky factorization, and the asymptotic values of the new condition numbers for case (2) are unity. Simple reasoning shows this last result must be true for the sensitivity of the problem, but previous condition numbers did not exhibit it. Key Words: perturbation analysis, sensitivity, condition, asymptotic condition, Cholesky factorization, downdating. AMS Subject Classifications: 15A23, 65F35.
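The downdating relation and the X → 0 behavior described here can be illustrated with a short NumPy sketch. The matrices are illustrative assumptions, and the downdate is computed here by explicitly forming R^T R − X^T X and refactoring, purely for demonstration; practical downdating algorithms update R directly without forming the cross products.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2

# Illustrative upper-triangular R with positive diagonal, and a small block X,
# chosen so that R^T R - X^T X stays positive definite and the downdate exists.
R = np.triu(0.5 * rng.standard_normal((n, n)), 1) + np.diag(2 + rng.random(n))
X = 0.1 * rng.standard_normal((k, n))

# Reference downdate: U^T U = R^T R - X^T X, computed by refactoring.
B = R.T @ R - X.T @ X
U = np.linalg.cholesky(B).T

# As X -> 0 the downdated factor tends to R itself, consistent with the
# case-(2) asymptotic condition numbers being unity.
diffs = []
for t in (1.0, 1e-3, 1e-6):
    Ut = np.linalg.cholesky(R.T @ R - (t * X).T @ (t * X)).T
    diffs.append(np.linalg.norm(Ut - R, "fro"))
    print(f"t = {t:g}:  ||U_t - R||_F = {diffs[-1]:.2e}")
```

Shrinking X drives the downdated factor back toward R quadratically in the scale factor t, since the change in R^T R is t^2 X^T X.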
Componentwise perturbation analyses
, 1999
Abstract
Summary. This paper gives componentwise perturbation analyses for Q and R in the QR factorization A = QR, Q^T Q = I, R upper triangular, for a given real m × n matrix A of rank n. Such specific analyses are important, for example, when the columns of A are badly scaled. First order perturbation bounds are given for both Q and R. The analyses reflect the sensitivity of the problem more accurately than previous such results. The condition number for R is bounded for a fixed n when the standard column pivoting strategy is used. This strategy also tends to improve the condition of Q, so usually the computed Q and R will both have higher accuracy when we use the standard column pivoting strategy. Practical condition estimators are derived. The assumptions on the form of the perturbation ΔA are explained and extended. Weaker rigorous bounds are also given. Mathematics Subject Classification (1991): 15A23, 65F35.
Algorithms, Certification, and Cryptography: Table of contents
Abstract
6.2.1. Mixed-precision fused multiply-and-add
6.2.2. Multiplication by rational constants versus division by a constant
6.2.3. Floating-point exponentiation on FPGA
6.2.4. Arithmetic around the bit heap
6.2.5. Improving computing architectures
New Perturbation Analyses for the Cholesky Factorization
, 1995
Abstract
We present new perturbation analyses for the Cholesky factorization A = R^T R of a symmetric positive definite matrix A. The analyses reflect the sensitivity of the problem more accurately than previous normwise results. The condition numbers here are altered by any symmetric pivoting used in P A P^T = R^T R, and both numerical results and an analysis show that the standard method of pivoting is optimal in that it usually leads to a condition number very close to its lower limit for any given A. It follows that the computed R will probably have greatest accuracy when we use the standard symmetric pivoting strategy. Initially we give a thorough analysis to obtain both first-order and strict normwise perturbation bounds which are as tight as possible, leading to a definition of an optimal condition number for the problem. Then we use this approach to obtain reasonably clear first-order and strict componentwise perturbation bounds. We complete the work by giving a much simpler normwise analysis which provides a somewhat weaker bound, but which allows us to estimate the condition of the problem quite well with an efficient computation. This simpler analysis also shows why the factorization is often less sensitive than previously thought, and adds further insight into why pivoting usually gives such good results. We derive a useful upper bound on the condition of the problem when we use pivoting.