Results 1–10 of 43
Analysis of the Cholesky decomposition of a semidefinite matrix
in Reliable Numerical Computation, 1990
Abstract
Cited by 65 (4 self)
Perturbation theory is developed for the Cholesky decomposition of an n × n symmetric positive semidefinite matrix A of rank r. The matrix W = A_11^{-1} A_12 is found to play a key role in the perturbation bounds, where A_11 and A_12 are the r × r and r × (n − r) submatrices of A, respectively. A backward error analysis is given; it shows that the computed Cholesky factors are the exact ones of a matrix whose distance from A is bounded by 4r(r + 1)(||W||_2 + 1)^2 u ||A||_2 + O(u^2), where u is the unit roundoff. For the complete pivoting strategy it is shown that ||W||_2^2 <= (1/3)(n − r)(4r − 1), and empirical evidence that ||W||_2 is usually small is presented. The overall conclusion is that the Cholesky algorithm with complete pivoting is stable for semidefinite matrices. Similar perturbation results are derived for the QR decomposition with column pivoting and for the LU decomposition with complete pivoting. The results give new insight into the reliability of these decompositions in rank estimation. Key words. Cholesky decomposition, positive semidefinite matrix, perturbation theory, backward error analysis, QR decomposition, rank estimation, LINPACK.
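The algorithm analysed above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the paper; the function name and the rank-detection tolerance are assumptions. At each step the largest remaining diagonal entry is brought to the pivot position (complete pivoting), and the factorization stops when the trailing Schur complement is numerically zero, yielding P^T A P = L L^T with L of width equal to the detected rank.

```python
import numpy as np

def cholesky_complete_pivoting(A, tol=1e-12):
    """Cholesky with complete (diagonal) pivoting for a symmetric PSD matrix.

    Illustrative sketch: returns (L, p, r) with A[np.ix_(p, p)] ~= L @ L.T,
    where r is the detected numerical rank and `tol` is an assumed cutoff.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    L = np.zeros((n, n))
    r = 0
    for k in range(n):
        # Pivot: move the largest remaining diagonal entry to position k.
        j = k + int(np.argmax(np.diag(A)[k:]))
        A[[k, j], :] = A[[j, k], :]
        A[:, [k, j]] = A[:, [j, k]]
        L[[k, j], :k] = L[[j, k], :k]
        p[[k, j]] = p[[j, k]]
        if A[k, k] <= tol:
            break  # trailing block is numerically zero: rank found
        r += 1
        L[k, k] = np.sqrt(A[k, k])
        L[k+1:, k] = A[k+1:, k] / L[k, k]
        # Schur complement update of the trailing block.
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L[:, :r], p, r
```

For an exactly rank-deficient PSD input the loop terminates early, so the semidefinite case is handled without ever taking the square root of a (numerically) zero or negative pivot.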
Matrix nearness problems and applications
Applications of Matrix Theory, 1989
Abstract
Cited by 56 (7 self)
A matrix nearness problem consists of finding, for an arbitrary matrix A, a nearest member of some given class of matrices, where distance is measured in a matrix norm. A survey of nearness problems is given, with particular emphasis on the fundamental properties of symmetry, positive definiteness, orthogonality, normality, rank-deficiency and instability. Theoretical results and computational methods are described. Applications of nearness problems in areas including control theory, numerical analysis and statistics are outlined.
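Two of the nearness problems mentioned above have simple closed-form solutions in the Frobenius norm: the nearest symmetric matrix is the symmetric part (A + A^T)/2, and the nearest symmetric positive semidefinite matrix is obtained by zeroing the negative eigenvalues of that symmetric part. A minimal sketch of both (illustrative, not code from the survey):

```python
import numpy as np

def nearest_symmetric(A):
    """Nearest symmetric matrix to A in the Frobenius norm: (A + A^T) / 2."""
    return (A + A.T) / 2

def nearest_psd(A):
    """Nearest symmetric positive semidefinite matrix in the Frobenius norm:
    symmetrize, then clip the negative eigenvalues at zero."""
    B = nearest_symmetric(A)
    w, V = np.linalg.eigh(B)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T
```

The eigenvalue-clipping step changes only the negative part of the spectrum, so the distance moved is exactly the norm of the discarded negative eigenvalues.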
Computing An Eigenvector With Inverse Iteration
SIAM Review, 1997
Abstract
Cited by 54 (1 self)
The purpose of this paper is twofold: to analyse the behaviour of inverse iteration for computing a single eigenvector of a complex, square matrix; and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic. In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough. In the case of nonnormal matrices, we show that the iterates converge asymptotically to an invariant subspace. However the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the no...
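In outline, inverse iteration repeatedly solves a shifted linear system and normalizes, and the residual norm ||Ax − σx|| measures the quality of the current iterate as an eigenvector. A minimal sketch (the shift, starting vector, and fixed iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def inverse_iteration(A, shift, x0, iters=5):
    """Inverse iteration sketch: solve (A - shift*I) y = x, normalize, repeat.

    Returns the final iterate and the residual norms ||A x - shift x||
    recorded after each step.
    """
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    residuals = []
    for _ in range(iters):
        y = np.linalg.solve(M, x)      # one shifted solve per iteration
        x = y / np.linalg.norm(y)      # renormalize the iterate
        residuals.append(np.linalg.norm(A @ x - shift * x))
    return x, residuals
```

For a symmetric (hence normal) matrix the recorded residual norms illustrate the monotone decrease discussed in the abstract; for nonnormal matrices no such monotonicity is guaranteed.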
Approximate greatest common divisors of several polynomials with linearly constrained coefficients and singular polynomials
Manuscript, 2006
Abstract
Cited by 33 (13 self)
We consider the problem of computing minimal real or complex deformations to the coefficients in a list of relatively prime real or complex multivariate polynomials such that the deformed polynomials have a greatest common divisor (GCD) of at least a given degree k. In addition, we restrict the deformed coefficients by a given set of linear constraints, thus introducing the linearly constrained approximate GCD problem. We present an algorithm based on a version of the structured total least norm (STLN) method and demonstrate, on a diverse set of benchmark polynomials, that the algorithm in practice computes globally minimal approximations. As an application of the linearly constrained approximate GCD problem, we present an STLN-based method that computes for a real or complex polynomial the nearest real or complex polynomial that has a root of multiplicity at least k. We demonstrate that the algorithm in practice computes, on the benchmark polynomials given in the literature, the known globally optimal nearest singular polynomials. Our algorithms can handle, via randomized preconditioning, the difficult case when the nearest solution to a list of real input polynomials actually has nonreal complex coefficients.
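For two univariate polynomials the degree of the GCD is visible in the rank of their Sylvester matrix: deg gcd(f, g) = deg f + deg g − rank S(f, g). The STLN machinery in the paper perturbs such structured matrices toward rank deficiency; the underlying rank test can be sketched as follows (the numerical rank tolerance is an assumption):

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of polynomials f (degree m) and g (degree n),
    given as coefficient lists with the highest-degree coefficient first."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

def gcd_degree(f, g, tol=1e-8):
    """deg gcd(f, g) = deg f + deg g - rank(Sylvester(f, g)),
    with rank computed numerically from singular values above `tol`."""
    S = sylvester(f, g)
    rank = int(np.sum(np.linalg.svd(S, compute_uv=False) > tol))
    return S.shape[0] - rank
```

An approximate-GCD method asks how small a coefficient perturbation makes this matrix drop rank by at least k, which is exactly the structured low-rank approximation the STLN approach solves.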
Perturbation Analyses for the QR Factorization
SIAM J. Matrix Anal. Appl., 1997
Abstract
Cited by 20 (11 self)
This paper gives perturbation analyses for Q_1 and R in the QR factorization A = Q_1 R, Q_1^T Q_1 = I, for a given real m × n matrix A of rank n. The analyses more accurately reflect the sensitivity of the problem than previous normwise results. The condition numbers here are altered by any column pivoting used in AP = Q_1 R, and the condition numbers for R are bounded for a fixed n when the standard column pivoting strategy is used. This strategy tends to improve the condition of Q_1, so the computed Q_1 and R will probably both have greatest accuracy when we use the standard column pivoting strategy. First-order normwise perturbation analyses are given for both Q_1 and R. It is seen that the analysis for R may be approached in two ways: a detailed "matrix-vector equation" analysis which provides tight bounds and resulting true condition numbers, which unfortunately are costly to compute and not very intuitive, and a perhaps simpler "matrix equation" analysis which provides results that are usually weaker but easier to interpret, and which allows efficient computation of a satisfactory estimate for the true condition number. Key words. QR factorization, perturbation analysis, condition estimation, matrix equations, pivoting. AMS subject classifications. 15A23, 65F35.
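The standard column pivoting strategy referred to above selects, at each step, the remaining column of largest residual norm. A minimal modified Gram–Schmidt sketch of the pivoted factorization AP = Q_1 R (illustrative only; the paper analyses the factorization, not any particular algorithm):

```python
import numpy as np

def qr_column_pivoting(A):
    """Modified Gram-Schmidt QR with standard column pivoting.

    Illustrative sketch for full-column-rank A: returns Q (m x n), R (n x n),
    and the permutation `perm` such that A[:, perm] ~= Q @ R.
    """
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    perm = np.arange(n)
    for k in range(n):
        # Pivot: bring the remaining column of largest 2-norm to position k.
        j = k + int(np.argmax(np.linalg.norm(A[:, k:], axis=0)))
        A[:, [k, j]] = A[:, [j, k]]
        R[:, [k, j]] = R[:, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # Orthogonalize the remaining columns against q_k.
        R[k, k+1:] = Q[:, k] @ A[:, k+1:]
        A[:, k+1:] -= np.outer(Q[:, k], R[k, k+1:])
    return Q, R, perm
```

With this pivoting rule the diagonal of R is nonincreasing in magnitude, which is why pivoted R is useful for rank estimation.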
On Approximate Irreducibility of Polynomials in Several Variables
Abstract
Cited by 19 (7 self)
We study the problem of bounding a polynomial away from polynomials which are absolutely irreducible. Such separation bounds are useful for testing whether a numerical polynomial is absolutely irreducible, given a certain tolerance on its coefficients. Using an absolute irreducibility criterion due to Ruppert, we are able to find useful separation bounds, in several norms, for bivariate polynomials. We also use Ruppert's criterion to derive new, more effective Noether forms for polynomials of arbitrarily many variables. These forms lead to small separation bounds for polynomials of arbitrarily many variables.
A SUBSPACE ERROR ESTIMATE FOR LINEAR SYSTEMS
2003
Abstract
Cited by 11 (2 self)
This paper proposes a new method for estimating the error in the solution of linear systems. A condition number is defined for a linear function of the solution components. This definition of the condition number is quite versatile. It reduces to the component condition number proposed by Chandrasekaran and Ipsen [SIAM J. Matrix Anal. Appl., 16 (1995), pp. 93–112] and to Skeel's definition of condition number [J. ACM, 26 (1979), pp. 494–526] in some special cases, and it can be used to estimate the error in a subspace. The estimate is based on the adjoint equation in combination with small-sample statistical theory. It can be implemented simply and is inexpensive to compute. Numerical examples are presented which illustrate the power and effectiveness of this error estimate.
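The adjoint idea underlying the estimate can be sketched simply: for a functional phi(x) = c^T x of the solution of A x = b, solving the adjoint system A^T y = c gives c^T (x − x_hat) = y^T (b − A x_hat) for any approximate solution x_hat. An illustrative sketch of that identity only (the paper's full estimator additionally uses small-sample statistical theory):

```python
import numpy as np

def adjoint_error_estimate(A, x_hat, b, c):
    """Estimate the error c^T (x - x_hat), where x solves A x = b,
    from the adjoint equation A^T y = c and the residual of x_hat."""
    r = b - A @ x_hat               # residual of the approximate solution
    y = np.linalg.solve(A.T, c)     # adjoint solve: A^T y = c
    return y @ r                    # equals c^T (x - x_hat) in exact arithmetic
```

One adjoint solve thus prices the error in any single linear functional of the solution, at roughly the cost of one extra triangular solve if a factorization of A is already available.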