Results 1 - 5 of 5
Analysis of the Cholesky decomposition of a semidefinite matrix
in Reliable Numerical Computation, 1990
Abstract

Cited by 65 (4 self)
Perturbation theory is developed for the Cholesky decomposition of an n × n symmetric positive semidefinite matrix A of rank r. The matrix W = A₁₁⁻¹A₁₂ is found to play a key role in the perturbation bounds, where A₁₁ and A₁₂ are the r × r and r × (n − r) submatrices of A, respectively. A backward error analysis is given; it shows that the computed Cholesky factors are the exact ones of a matrix whose distance from A is bounded by 4r(r + 1)(‖W‖₂ + 1)² u‖A‖₂ + O(u²), where u is the unit roundoff. For the complete pivoting strategy it is shown that ‖W‖₂² ≤ (1/3)(n − r)(4^r − 1), and empirical evidence that ‖W‖₂ is usually small is presented. The overall conclusion is that the Cholesky algorithm with complete pivoting is stable for semidefinite matrices. Similar perturbation results are derived for the QR decomposition with column pivoting and for the LU decomposition with complete pivoting. The results give new insight into the reliability of these decompositions in rank estimation. Key words. Cholesky decomposition, positive semidefinite matrix, perturbation theory, backward error analysis, QR decomposition, rank estimation, LINPACK.
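The pivoted algorithm this abstract analyses can be sketched numerically. The helper below is an illustrative NumPy version of Cholesky with complete (diagonal) pivoting, not the LINPACK routine; it factors a rank-3 test matrix and checks the ‖W‖₂ bound, using the identity A₁₁⁻¹A₁₂ = R₁₁⁻¹R₁₂ in the pivoted ordering.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Cholesky with complete (diagonal) pivoting for a symmetric
    positive semidefinite matrix.  Returns a permutation p, the
    detected rank r, and an r-by-n upper trapezoidal R such that
    A[p][:, p] ~= R.T @ R.  (Illustrative sketch, not LINPACK.)"""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    p = np.arange(n)
    r = n
    for k in range(n):
        # pivot: bring the largest remaining diagonal entry to step k
        j = k + int(np.argmax(np.diagonal(A)[k:]))
        if A[j, j] <= tol:                # remaining block is numerically zero
            r = k
            break
        A[[k, j], :] = A[[j, k], :]       # symmetric row/column swap
        A[:, [k, j]] = A[:, [j, k]]
        p[[k, j]] = p[[j, k]]
        A[k, k] = np.sqrt(A[k, k])
        A[k, k + 1:] /= A[k, k]
        # Schur complement update of the trailing block
        A[k + 1:, k + 1:] -= np.outer(A[k, k + 1:], A[k, k + 1:])
    return p, r, np.triu(A)[:r]

rng = np.random.default_rng(0)
n, rank = 6, 3
B = rng.standard_normal((n, rank))
A = B @ B.T                               # positive semidefinite, rank 3
p, r, R = pivoted_cholesky(A)
# W = A11^{-1} A12 in the pivoted ordering equals R11^{-1} R12
W = np.linalg.solve(R[:, :r], R[:, r:])
```

With complete pivoting ‖W‖₂ typically comes out small, well inside the (1/3)(n − r)(4^r − 1) worst-case bound quoted above.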
New Perturbation Analyses For The Cholesky Factorization
1995
Abstract

Cited by 9 (7 self)
this paper is to establish new first-order bounds on the norm of the perturbation in the Cholesky factor, sharper than those of Sun (1991) and Stewart (1993). Also, we obtain a new first-order bound for the components of the perturbation, and give strict bounds on the norm and components of the perturbation. In the remainder of this section we review some useful tools and results by showing one way of obtaining the first-order normwise perturbation bound given by Sun (1991) and Stewart (1993) for the Cholesky factor. Theorem 1. Let A ∈ R^{n×n} be symmetric positive definite, with the Cholesky factorization A = RᵀR. Let ΔA ∈ R^{n×n} be symmetric. If ε ≡ ‖ΔA‖_F / ‖A‖₂ satisfies κ₂(A)ε < 1, (1) where κ₂(A) ≡ ‖A‖₂ ‖A⁻¹‖₂, then A + ΔA has the Cholesky factorization A + ΔA = (R + ΔR)ᵀ(R + ΔR), where ‖ΔR‖_F / ‖R‖₂ ...
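The quantities in Theorem 1 are easy to probe numerically. The sketch below (assuming NumPy; all variable names are illustrative) perturbs a random symmetric positive definite matrix by a tiny symmetric ΔA and compares the observed ‖ΔR‖_F / ‖R‖₂ with κ₂(A)ε:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)           # symmetric positive definite
R = np.linalg.cholesky(A).T           # upper triangular, A = R.T @ R

E = rng.standard_normal((n, n))
E = (E + E.T) / 2                     # symmetric perturbation Delta A
E *= 1e-8 / np.linalg.norm(E, 'fro')  # tiny, so kappa_2(A) * eps << 1
dR = np.linalg.cholesky(A + E).T - R  # Delta R

eps = np.linalg.norm(E, 'fro') / np.linalg.norm(A, 2)
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
ratio = np.linalg.norm(dR, 'fro') / np.linalg.norm(R, 2)
```

For a random perturbation direction the observed ratio usually sits well below the first-order bound, which is what makes the sharper analyses of this paper interesting.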
QR VERSUS CHOLESKY: A PROBABILISTIC ANALYSIS
Abstract
Abstract. Least squares solutions of linear equations Ax = b are very important for parameter estimation in engineering, applied mathematics, and statistics. There are several methods for their solution, including QR decomposition, Cholesky decomposition, singular value decomposition (SVD), and Krylov subspace methods. The latter methods were developed for sparse A matrices that appear in the solution of partial differential equations. The QR (and its variant the RRQR) and the SVD methods are commonly used for dense A matrices that appear in engineering and statistics. Although the Cholesky decomposition is backward stable and known to have the least operational count, several authors recommend the use of QR in applications. In this article, we take a fresh look at least squares problems for dense A matrices with full column rank using numerical experiments guided by recent results from the theory of random matrices. Contrary to currently accepted belief, comparisons of the sensitivity of the Cholesky and QR solutions to random parameter perturbations for various low to moderate condition numbers show no significant difference to within machine precision. Experiments for matrices with artificially high condition numbers reveal that the relative difference in the two solutions is on average only of the order of 10−6. Finally, Cholesky is found to be markedly computationally faster than QR – the mean computational time for QR is between two and four times greater than Cholesky, and the standard deviation in computation times using Cholesky is about a third of that of QR. Our conclusion in this article is that for systems Ax = b where A has full column rank, if the condition numbers are low or moderate, then the normal equation method with Cholesky decomposition is preferable to QR. Key words. Least squares problems, QR decomposition, Cholesky decomposition, random matrix, statistics.
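A toy version of the comparison described in this abstract, assuming NumPy: solve one full-column-rank least squares problem by QR and by the normal equations with Cholesky, and measure the relative difference between the two solutions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 10
A = rng.standard_normal((m, n))      # dense, full column rank, modest condition
b = rng.standard_normal(m)

# QR route: A = QR (reduced), then solve R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal equations route: A^T A = L L^T, then two triangular solves
L = np.linalg.cholesky(A.T @ A)
x_chol = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))

rel_diff = np.linalg.norm(x_qr - x_chol) / np.linalg.norm(x_qr)
```

For a well-conditioned random A the two solutions agree essentially to machine precision, consistent with the article's findings for low to moderate condition numbers.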
AND
1995
"... We present new perturbation analyses, for the Cholesky factorization A = RJR of a symmetric positive definite matrix A. The analyses more accurately reflect the sensitivity of the problem than previous normwise results. The condition numbers here are altered by any symmetric pivoting used in PAP1 = ..."
Abstract
 Add to MetaCart
We present new perturbation analyses for the Cholesky factorization A = RᵀR of a symmetric positive definite matrix A. The analyses more accurately reflect the sensitivity of the problem than previous normwise results. The condition numbers here are altered by any symmetric pivoting used in PAPᵀ = RᵀR, and both numerical results and an analysis show that the standard method of pivoting is optimal in that it usually leads to a condition number very close to its lower limit for any given A. It follows that the computed R will probably have greatest accuracy when we use the standard symmetric pivoting strategy. Initially we give a thorough analysis to obtain both first-order and strict normwise perturbation bounds which are as tight as possible, leading to a definition of an optimal condition number for the problem. Then we use this approach to obtain reasonably clear first-order and strict componentwise perturbation bounds. We complete the work by giving a much simpler normwise analysis which provides a somewhat weaker bound, but which allows us to estimate the condition of the problem quite well with an efficient computation. This simpler analysis also shows why the factorization is often less sensitive than we previously thought, and adds further insight into why pivoting usually gives such good results. We derive a useful upper bound on the condition of the problem when we use pivoting.
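The sensitivity that these condition numbers describe can be probed crudely with finite differences. The sketch below, assuming NumPy, uses a one-shot decreasing-diagonal reordering as a hypothetical stand-in for the standard symmetric pivoting strategy (true pivoting re-examines the diagonal at every step), and only checks the observed sensitivities against the generic κ₂(A) scale; it is illustrative, not this paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# graded SPD matrix whose diagonal grows from 1e-3 to 1, so the natural
# ordering is the reverse of the decreasing-diagonal ordering
D = np.diag(10.0 ** -np.arange(n)[::-1])
M = rng.standard_normal((n, n))
A = D @ (M @ M.T + n * np.eye(n)) @ D
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

def chol_sensitivity(A, trials=20, delta=1e-9):
    """Finite-difference estimate of max ||Delta R||_F / (||R||_2 * eps)
    over random symmetric perturbations, eps = ||Delta A||_F / ||A||_2."""
    R = np.linalg.cholesky(A).T
    worst = 0.0
    for _ in range(trials):
        E = rng.standard_normal(A.shape)
        E = (E + E.T) / 2
        E *= delta * np.linalg.norm(A, 2) / np.linalg.norm(E, 'fro')
        dR = np.linalg.cholesky(A + E).T - R
        worst = max(worst, np.linalg.norm(dR, 'fro')
                    / (np.linalg.norm(R, 2) * delta))
    return worst

p = np.argsort(-np.diag(A))          # one-shot decreasing-diagonal order
sens_natural = chol_sensitivity(A)
sens_sorted = chol_sensitivity(A[np.ix_(p, p)])
```

Comparing `sens_natural` with `sens_sorted` on graded matrices like this one gives a rough empirical feel for the paper's claim that the standard pivoting usually drives the condition of the factorization toward its lower limit.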