Results 1–10 of 12
Stochastic Perturbation Theory
, 1988
"... . In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a firstorder perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
Abstract

Cited by 617 (31 self)
In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares and the eigenvalue problem.
Key words: perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value. AMS(MOS) subject classifications: 15A06, 15A12, 15A18, 15A52, 15A60.
1. Introduction. Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, pr...
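The first-order technique summarized above lends itself to a short numerical sketch: expand (A + E)^{-1} b ≈ x - A^{-1} E x, draw random perturbations E, and compare the resulting componentwise statistics with a norm-based bound. The matrix, right-hand side, perturbation scale, and sample count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])       # illustrative system, not from the paper
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)
Ainv = np.linalg.inv(A)
sigma = 1e-3                     # assumed scale of the random perturbation E

# First-order expansion: (A + E)^{-1} b ~ x - A^{-1} E x for small E.
deltas = np.array([-Ainv @ (sigma * rng.standard_normal(A.shape)) @ x
                   for _ in range(2000)])
comp_std = deltas.std(axis=0)    # statistics estimating per-component variation

# Norm-based bound ||dx|| <= ||A^{-1}|| ||E|| ||x||, using the Gaussian
# estimate ||E||_2 ~ sigma * (sqrt(m) + sqrt(n)) as a representative size.
norm_bound = np.linalg.norm(Ainv, 2) * sigma * (2 * np.sqrt(2)) * np.linalg.norm(x)
```

In this run the sampled componentwise standard deviations come out noticeably smaller than the norm bound, which is the abstract's point about norm bounds being pessimistic.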
Collinearity and Least Squares Regression
 Statistical Science
, 1987
"... this paper we introduce certain numbers, called collinearity indices, which are useful in detecting near collinearities in regression problems. The coefficients enter adversely into formulas concerning significance testing and the effects of errors in the regression variables. Thus they provide simp ..."
Abstract

Cited by 17 (2 self)
In this paper we introduce certain numbers, called collinearity indices, which are useful in detecting near collinearities in regression problems. The coefficients enter adversely into formulas concerning significance testing and the effects of errors in the regression variables. Thus they provide simple regression diagnostics, suitable for incorporation in regression packages.
Keywords and phrases: collinearity, ill-conditioning, linear regression, errors in the variables, regression diagnostics.
1 Introduction
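One plausible reading of such collinearity indices is kappa_j = ||a_j|| * ||row j of A^+||, which is always at least 1 and becomes large when column j is nearly a linear combination of the others. The design matrix below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.standard_normal(n)
x2 = x1 + 1e-3 * rng.standard_normal(n)   # nearly collinear with x1
x3 = rng.standard_normal(n)               # unrelated regressor
A = np.column_stack([x1, x2, x3])

# kappa_j = ||a_j|| * ||row j of pinv(A)||; >= 1 since (A^+ A)_jj = 1.
Apinv = np.linalg.pinv(A)
kappa = np.linalg.norm(A, axis=0) * np.linalg.norm(Apinv, axis=1)
```

Here the first two indices come out large (roughly the reciprocal of the 1e-3 noise level) while the third stays near 1, flagging exactly the near collinearity.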
A Survey of Componentwise Perturbation Theory in Numerical Linear Algebra
 in Mathematics of Computation 1943–1993: A Half Century of Computational Mathematics
, 1994
"... . Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller an ..."
Abstract

Cited by 12 (0 self)
Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller and more revealing bounds can be obtained. A survey is given of componentwise perturbation theory in numerical linear algebra, covering linear systems, the matrix inverse, matrix factorizations, the least squares problem, and the eigenvalue and singular value problems. Most of the results described have been published in the last five years.
"Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E." (G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, 1990)
1. Introduction. Matrix analysis would not have developed into the vast subject it is today without the concept of representing a matrix by ...
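The contrast the survey draws can be seen on a deliberately badly scaled (here diagonal) system: the normwise condition number is huge, while the Skeel-style componentwise condition number || |A^{-1}||A||x| ||_inf / ||x||_inf equals 1. The example matrix is ours, not the survey's.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-8]])      # badly scaled, but componentwise benign
b = np.array([1.0, 1e-8])
x = np.linalg.solve(A, b)
Ainv = np.linalg.inv(A)

# Normwise condition number: blows up purely because of scaling.
kappa_norm = np.linalg.norm(A, np.inf) * np.linalg.norm(Ainv, np.inf)

# Componentwise (Skeel-style) condition number: scaling-invariant here.
cond_comp = (np.linalg.norm(np.abs(Ainv) @ np.abs(A) @ np.abs(x), np.inf)
             / np.linalg.norm(x, np.inf))
```

kappa_norm is about 1e8 while cond_comp is 1, so the componentwise measure is smaller by eight orders of magnitude on this example.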
Optimal Sensitivity Analysis of Linear Least Squares
, 2003
"... Results from the many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least s ..."
Abstract

Cited by 8 (1 self)
Results from the many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least squares problems. It is shown that such formulas can be used to evaluate condition numbers. For full-rank problems, Frobenius norm condition numbers are determined exactly, and spectral norm condition numbers are determined within a factor of the square root of two. As a result, the necessary and sufficient criteria for well conditioning are established. A source of ill conditioning is found that helps explain the failure of simple iterative refinement. Some textbook discussions of ill conditioning are found to be fallacious, and some error bounds in the literature are found to unnecessarily overestimate the error. Finally, several open questions are described.
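For orientation, the classical first-order sensitivity estimate for least squares (the standard textbook bound, not the sharper optimal expressions derived in the paper) carries a kappa(A)^2 term weighted by the residual; the sketch below evaluates it for a made-up problem.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.standard_normal(20)   # nonzero residual

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
kappa = np.linalg.cond(A)

# Classical estimate: relative error grows like
#   kappa + kappa^2 * ||r|| / (||A|| ||x||)
# times the relative data perturbation.
sens = kappa + kappa**2 * np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x))
```

When the residual is large or kappa is big, the kappa^2 term dominates, one reason the conditioning of least squares is subtler than that of square linear systems.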
Iterative Methods for Ill-Conditioned Linear Systems From Optimization
, 1998
"... Preconditioned conjugategradient methods are proposed for solving the illconditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stablized using a restr ..."
Abstract

Cited by 5 (1 self)
Preconditioned conjugate-gradient methods are proposed for solving the ill-conditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stabilized using a restricted form of iterative refinement. Numerical results illustrate the approaches considered.
Email: n.gould@rl.ac.uk. Current reports available from http://www.rl.ac.uk/departments/ccd/numerical/reports/reports.html. Department for Computation and Information, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire OX11 0QX. August 26, 1998.
1 Introduction. Let A and H be, respectively, full-rank m by n (m ≤ n) and symmetric n by n real matrices. Suppose furthermore that any nonzero coefficients in this data are modest, that is, the data is O(1). (1) We consider the iterative solution of the linear system (H + A^T D^{-1} A) x = b (1.1), where b is modest an...
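The systems in question have the form (H + A^T D^{-1} A) x = b with a small penalty parameter driving the ill conditioning. The sketch below applies a generic Jacobi-preconditioned conjugate-gradient iteration as a stand-in for the structured preconditioners proposed in the paper; the dimensions and penalty value are assumptions.

```python
import numpy as np

def pcg(K, b, Minv_diag, tol=1e-8, maxit=500):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner for SPD K."""
    x = np.zeros_like(b)
    r = b - K @ x
    z = Minv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        x = x + alpha * p
        r = r - alpha * Kp
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(3)
n, m = 8, 5
H = np.eye(n)                       # assumed O(1) Hessian block
A = rng.standard_normal((m, n))     # full-rank m-by-n Jacobian (placeholder)
mu = 1e-6                           # penalty parameter: D = mu * I
K = H + A.T @ (A / mu)              # system matrix H + A^T D^{-1} A
b = rng.standard_normal(n)
x = pcg(K, b, 1.0 / np.diag(K))
```

Even this generic preconditioner converges here; the paper's point is that as mu shrinks, preconditioners that isolate the dominant A^T D^{-1} A term behave far better than generic ones.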
Stability of Fast Algorithms for Structured Linear Systems
, 1997
"... . We survey the numerical stability of some fast algorithms for solving systems of linear equations and linear least squares problems with a low displacementrank structure. For example, the matrices involved may be Toeplitz or Hankel. We consider algorithms which incorporate pivoting without destro ..."
Abstract

Cited by 4 (2 self)
We survey the numerical stability of some fast algorithms for solving systems of linear equations and linear least squares problems with a low displacement-rank structure. For example, the matrices involved may be Toeplitz or Hankel. We consider algorithms which incorporate pivoting without destroying the structure, and describe some recent results on the stability of these algorithms. We also compare these results with the corresponding stability results for the well-known algorithms of Schur/Bareiss and Levinson, and for algorithms based on the seminormal equations.
Key words: Bareiss algorithm, Levinson algorithm, Schur algorithm, Toeplitz matrices, displacement rank, generalized Schur algorithm, numerical stability. AMS subject classifications: 65F05, 65G05, 47B35, 65F30.
1. Motivation. The standard direct method for solving dense n × n systems of linear equations is Gaussian elimination with partial pivoting. The usual implementation requires of order n^3 arithmetic op...
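The phrase "low displacement-rank structure" can be made concrete: for a Toeplitz matrix T and the down-shift matrix Z, the displacement T - Z T Z^T has rank at most 2. A small hypothetical example:

```python
import numpy as np

n = 6
c = np.arange(1.0, n + 1)                               # first column of T
r = np.concatenate([[c[0]], 0.5 * np.arange(1.0, n)])   # first row of T
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # Toeplitz: entry depends only on i - j.
        T[i, j] = c[i - j] if i >= j else r[j - i]

Z = np.eye(n, k=-1)          # down-shift matrix (ones on the subdiagonal)
D = T - Z @ T @ Z.T          # displacement: nonzero only in first row/column
rank = np.linalg.matrix_rank(D)
```

Fast algorithms of the Schur/Bareiss and generalized Schur type work on such rank-2 generators rather than on the full n × n matrix, which is where the O(n^2) operation counts come from.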
A weakly stable algorithm for general Toeplitz systems
 Numerical Algorithms
, 1995
"... We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R T R is close to A T A. Thus, when the algorithm is used to solve the seminormal equations R T Rx = A T b, we obtain a weakly stable method for the solution of a nonsingular T ..."
Abstract

Cited by 1 (0 self)
We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A. Thus, when the algorithm is used to solve the seminormal equations R^T R x = A^T b, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem min ‖Ax − b‖₂.
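The seminormal-equations solve referred to above can be sketched with a dense Householder QR standing in for the fast Toeplitz factorization: only R is kept, and R^T R x = A^T b is solved by two triangular solves. The matrix and right-hand side are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 10, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Seminormal equations: compute R from A = QR (Q is never stored),
# then solve R^T R x = A^T b by forward and back substitution.
R = np.linalg.qr(A, mode='r')
y = np.linalg.solve(R.T, A.T @ b)   # forward solve with lower-triangular R^T
x = np.linalg.solve(R, y)           # back solve with upper-triangular R

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because Q is never formed, this is precisely the setting where only weak stability (R^T R close to A^T A) is available, rather than the backward stability of a full QR solve.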
Computational Linear Algebra
, 1999
"... CONTENTS. 1. Introduction : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 1 2. Errors and Computer Arithmetic : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : ..."
Abstract
CONTENTS
1. Introduction ... 1
2. Errors and Computer Arithmetic ... 2
2.1 Accuracy ... 2
2.2 Precision ... 2
2.3 Arithmetic unit errors ... 3
...