Stochastic Perturbation Theory
, 1988
Cited by 886 (35 self)

Abstract
In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares and the eigenvalue problem.

Key words: perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value

AMS(MOS) subject classifications: 15A06, 15A12, 15A18, 15A52, 15A60

1. Introduction. Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, pr...
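As a concrete illustration of the approach this abstract describes, the sketch below applies the first-order expansion x(A + E) ~ x - A^{-1} E x with a random E to a small 2 x 2 system and estimates the root-mean-square variation of the solution by Monte Carlo sampling. The matrix, right-hand side, perturbation scale, and sample count are all illustrative choices, not data from the paper.

```python
import random

# A fixed 2x2 system A x = b (illustrative data).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def solve2(M, r):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(r[0]*M[1][1] - r[1]*M[0][1]) / det,
            (M[0][0]*r[1] - M[1][0]*r[0]) / det]

x = solve2(A, b)

# First-order expansion: if A becomes A + E, then x(A + E) ~ x - A^{-1} E x.
# With E random, sample that statistic and estimate its RMS size.
random.seed(0)
eps = 1e-4            # scale (std. dev.) of each random perturbation entry
n_samples = 2000
sums = [0.0, 0.0]
for _ in range(n_samples):
    E = [[random.gauss(0.0, eps) for _ in range(2)] for _ in range(2)]
    Ex = [E[0][0]*x[0] + E[0][1]*x[1],
          E[1][0]*x[0] + E[1][1]*x[1]]
    dx = solve2(A, [-Ex[0], -Ex[1]])      # dx = -A^{-1} E x
    sums[0] += dx[0]**2
    sums[1] += dx[1]**2

std_dx = [(s / n_samples) ** 0.5 for s in sums]   # RMS variation per component
```

The per-component RMS values in `std_dx` play the role of the statistics the abstract mentions: unlike a single norm bound, they reveal how much each component of x actually moves under a random perturbation of the stated size.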
Collinearity and Least Squares Regression
 Statistical Science
, 1987
Cited by 27 (2 self)

Abstract
In this paper we introduce certain numbers, called collinearity indices, which are useful in detecting near collinearities in regression problems. The coefficients enter adversely into formulas concerning significance testing and the effects of errors in the regression variables. Thus they provide simple regression diagnostics, suitable for incorporation in regression packages.

Keywords and phrases: collinearity, ill-conditioning, linear regression, errors in the variables, regression diagnostics.

1 Introduction
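A standard definition of such a collinearity index (following Stewart) is kappa_j = ||a_j|| * ||row j of the pseudoinverse A+||, which for full-rank A equals ||a_j|| * sqrt(((A^T A)^{-1})_{jj}). The sketch below computes these indices for a tiny invented design matrix with nearly parallel columns; values much larger than 1 flag near collinearity.

```python
# Illustrative 3x2 design matrix with strongly correlated columns
# (example data, not from the paper).
A = [[1.0, 0.9],
     [1.0, 1.1],
     [1.0, 1.0]]
m, n = 3, 2

# Gram matrix G = A^T A (2x2 here).
G = [[sum(A[k][i]*A[k][j] for k in range(m)) for j in range(n)]
     for i in range(n)]

# Invert the 2x2 Gram matrix directly.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
Ginv = [[ G[1][1]/det, -G[0][1]/det],
        [-G[1][0]/det,  G[0][0]/det]]

# Collinearity index of column j: kappa_j = ||a_j|| * sqrt((G^{-1})_{jj}),
# i.e. the column norm times the norm of row j of the pseudoinverse.
kappa = []
for j in range(n):
    col_norm = sum(A[k][j]**2 for k in range(m)) ** 0.5
    kappa.append(col_norm * Ginv[j][j] ** 0.5)
```

For this near-collinear data both indices come out around 12, well above the lower bound of 1 attained by mutually orthogonal columns.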
A Survey of Componentwise Perturbation Theory in Numerical Linear Algebra
 in Mathematics of Computation 1943-1993: A Half Century of Computational Mathematics
, 1994
Cited by 23 (0 self)

Abstract
Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller and more revealing bounds can be obtained. A survey is given of componentwise perturbation theory in numerical linear algebra, covering linear systems, the matrix inverse, matrix factorizations, the least squares problem, and the eigenvalue and singular value problems. Most of the results described have been published in the last five years.

"Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E." (G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, 1990)

1. Introduction. Matrix analysis would not have developed into the vast subject it is today without the concept of representing a matrix by ...
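To make the norm-versus-componentwise contrast concrete, the sketch below compares a Skeel-style componentwise condition number, || |A^{-1}| |A| |x| ||_inf / ||x||_inf, with the normwise kappa_inf(A) = ||A||_inf ||A^{-1}||_inf on a badly scaled but diagonal system. The data is invented for illustration: componentwise the problem is perfectly conditioned, while the norm bound suggests it is terrible.

```python
# A badly scaled but diagonal (hence componentwise harmless) 2x2 system.
A = [[1.0, 0.0], [0.0, 1e-6]]
b = [1.0, 1e-6]

# Exact inverse of the diagonal A, and the solution x = A^{-1} b.
Ainv = [[1.0, 0.0], [0.0, 1e6]]
x = [Ainv[0][0]*b[0] + Ainv[0][1]*b[1],
     Ainv[1][0]*b[0] + Ainv[1][1]*b[1]]      # x = (1, 1)

def absmat_vec(M, v):
    """Componentwise product |M| |v|."""
    return [sum(abs(M[i][j]) * abs(v[j]) for j in range(2)) for i in range(2)]

# Skeel-style measure: || |A^{-1}| |A| |x| ||_inf / ||x||_inf.
t = absmat_vec(A, x)          # |A| |x|
s = absmat_vec(Ainv, t)       # |A^{-1}| |A| |x|
cond_skeel = max(s) / max(abs(xi) for xi in x)

# Normwise measure: kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf.
norm_inf = lambda M: max(sum(abs(e) for e in row) for row in M)
kappa_norm = norm_inf(A) * norm_inf(Ainv)
```

Here `cond_skeel` is 1 (for diagonal A, |A^{-1}||A| is the identity) while `kappa_norm` is 10^6: exactly the "unduly weak" norm bound the survey warns about.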
Arnoldi-Tikhonov regularization methods
, 2008
Cited by 11 (8 self)

Abstract
Tikhonov regularization for large-scale linear ill-posed problems is commonly implemented by determining a partial Lanczos bidiagonalization of the matrix of the given system of equations. This paper explores the possibility of instead computing a partial Arnoldi decomposition of the given matrix. Computed examples illustrate that this approach may require fewer matrix-vector product evaluations and, therefore, less arithmetic work. Moreover, the proposed range-restricted Arnoldi-Tikhonov regularization method does not require the adjoint matrix and, hence, is convenient to use for problems for which the adjoint is difficult to evaluate.
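A minimal sketch of the idea, under simplifying assumptions (a tiny dense matrix, a fixed regularization parameter, two Arnoldi steps, and no range restriction): build the Arnoldi relation A V_k = V_{k+1} H using matrix-vector products with A only, then solve the small projected Tikhonov problem min ||H y - beta e_1||^2 + lambda^2 ||y||^2 and set x = V_k y. All data and parameters are invented for illustration.

```python
# Tiny illustrative problem (not from the paper).
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.5]]
b = [1.0, 2.0, 3.0]
lam = 0.1     # Tikhonov parameter (fixed arbitrarily here)
k = 2         # Krylov subspace dimension

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a*c for a, c in zip(u, v))

beta = dot(b, b) ** 0.5
V = [[bi / beta for bi in b]]            # V[j] is the j-th Arnoldi vector
H = [[0.0]*k for _ in range(k + 1)]      # (k+1) x k upper Hessenberg matrix

# Arnoldi with modified Gram-Schmidt; only products with A, never A^T.
for j in range(k):
    w = matvec(A, V[j])
    for i in range(j + 1):
        H[i][j] = dot(w, V[i])
        w = [wi - H[i][j]*vi for wi, vi in zip(w, V[i])]
    H[j+1][j] = dot(w, w) ** 0.5
    V.append([wi / H[j+1][j] for wi in w])

# Projected Tikhonov problem via its k x k normal equations:
# (H^T H + lam^2 I) y = beta H^T e_1.
HtH = [[sum(H[p][i]*H[p][j] for p in range(k + 1)) + (lam**2 if i == j else 0.0)
        for j in range(k)] for i in range(k)]
rhs = [beta * H[0][i] for i in range(k)]
det = HtH[0][0]*HtH[1][1] - HtH[0][1]*HtH[1][0]
y = [(rhs[0]*HtH[1][1] - rhs[1]*HtH[0][1]) / det,
     (HtH[0][0]*rhs[1] - HtH[1][0]*rhs[0]) / det]

# Regularized approximate solution x = V_k y.
x = [sum(V[j][i]*y[j] for j in range(k)) for i in range(3)]
```

Note that the adjoint A^T never appears, which is precisely the practical advantage the abstract claims over Lanczos bidiagonalization.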
Optimal Sensitivity Analysis of Linear Least Squares
, 2003
Cited by 9 (1 self)

Abstract
Results from the many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least squares problems. It is shown that such formulas can be used to evaluate condition numbers. For full-rank problems, Frobenius norm condition numbers are determined exactly, and spectral norm condition numbers are determined within a factor of the square root of two. As a result, the necessary and sufficient criteria for well conditioning are established. A source of ill conditioning is found that helps explain the failure of simple iterative refinement. Some textbook discussions of ill conditioning are found to be fallacious, and some error bounds in the literature are found to unnecessarily overestimate the error. Finally, several open questions are described.
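The flavor of such sensitivity results can be seen in the classical first-order bound (due to Wedin): a relative perturbation eps in A changes the least squares solution x by roughly eps * (kappa + kappa^2 ||r|| / (||A||_2 ||x||_2)), where kappa = kappa_2(A) and r is the residual; the paper above sharpens this picture via optimal backward errors. The sketch computes both terms for an invented problem with nearly collinear columns, where the kappa^2 term dominates.

```python
# Illustrative full-rank 3x2 least squares problem with nearly
# collinear columns (example data, not from the paper).
A = [[1.0, 1.000],
     [1.0, 1.001],
     [1.0, 0.999]]
b = [1.0, 2.0, 3.0]
m, n = 3, 2

# Solve the normal equations (fine at this tiny size) for x = argmin ||Ax - b||.
G = [[sum(A[p][i]*A[p][j] for p in range(m)) for j in range(n)] for i in range(n)]
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
Atb = [sum(A[i][j]*b[i] for i in range(m)) for j in range(n)]
x = [(Atb[0]*G[1][1] - Atb[1]*G[0][1]) / det,
     (G[0][0]*Atb[1] - G[1][0]*Atb[0]) / det]
r = [b[i] - (A[i][0]*x[0] + A[i][1]*x[1]) for i in range(m)]

# kappa_2(A) from the eigenvalues of the 2x2 Gram matrix G = A^T A.
tr = G[0][0] + G[1][1]
disc = (tr*tr/4 - det) ** 0.5
sig_max = (tr/2 + disc) ** 0.5
sig_min = (tr/2 - disc) ** 0.5
kappa = sig_max / sig_min

# First-order amplification factor: kappa + kappa^2 * ||r|| / (||A||_2 ||x||_2).
norm = lambda v: sum(t*t for t in v) ** 0.5
amplification = kappa + kappa**2 * norm(r) / (sig_max * norm(x))
```

Because the residual is not small here, `amplification` is several times `kappa` itself, illustrating why least squares problems can be far more sensitive than the linear-system intuition kappa(A) alone suggests.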
Iterative Methods for Ill-Conditioned Linear Systems from Optimization
, 1998
Cited by 6 (2 self)

Abstract
Preconditioned conjugate-gradient methods are proposed for solving the ill-conditioned linear systems which arise in penalty and barrier methods for nonlinear minimization. The preconditioners are chosen so as to isolate the dominant cause of ill conditioning. The methods are stabilized using a restricted form of iterative refinement. Numerical results illustrate the approaches considered.

Email: n.gould@rl.ac.uk. Current reports available from "http://www.rl.ac.uk/departments/ccd/numerical/reports/reports.html". Department for Computation and Information, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire OX11 0QX. August 26, 1998.

1 Introduction. Let A and H be, respectively, full-rank m by n (m >= n) and symmetric n by n real matrices. Suppose furthermore that any nonzero coefficients in this data are modest, that is, the data is O(1). (1) We consider the iterative solution of the linear system (H + A^T D^{-1} A) x = b (1.1) where b is modest an...
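A toy version of this setting: form K = H + A^T D^{-1} A for a small barrier weight D (which makes K ill conditioned) and run conjugate gradients with a simple Jacobi (diagonal) preconditioner. The data, the choice of preconditioner, and the absence of the paper's iterative refinement are all simplifications for illustration.

```python
# Toy barrier-style system (H + A^T D^{-1} A) x = b (invented data).
H = [[2.0, 0.0], [0.0, 3.0]]
A = [[1.0, 1.0]]          # a single constraint row
Dinv = 1.0 / 1e-4         # D = 1e-4: small barrier weight -> ill conditioning
b = [1.0, 2.0]

# Coefficient matrix K = H + A^T D^{-1} A.
K = [[H[i][j] + A[0][i]*Dinv*A[0][j] for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def precond(r):
    """Jacobi preconditioner: solve with the diagonal of K."""
    return [r[0]/K[0][0], r[1]/K[1][1]]

# Standard preconditioned conjugate gradients.
x = [0.0, 0.0]
r = b[:]                  # residual b - K x with x = 0
z = precond(r)
p = z[:]
for _ in range(50):
    Kp = matvec(K, p)
    alpha = dot(r, z) / dot(p, Kp)
    x = [x[0] + alpha*p[0], x[1] + alpha*p[1]]
    r_new = [r[0] - alpha*Kp[0], r[1] - alpha*Kp[1]]
    if dot(r_new, r_new) ** 0.5 < 1e-10:
        r = r_new
        break
    z_new = precond(r_new)
    beta = dot(r_new, z_new) / dot(r, z)
    p = [z_new[0] + beta*p[0], z_new[1] + beta*p[1]]
    r, z = r_new, z_new
```

The paper's point is that a preconditioner built to capture the A^T D^{-1} A term (the dominant cause of ill conditioning) does much better than this generic diagonal choice; the sketch only fixes the surrounding machinery.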
Stability of Fast Algorithms for Structured Linear Systems
, 1997
Cited by 4 (2 self)

Abstract
We survey the numerical stability of some fast algorithms for solving systems of linear equations and linear least squares problems with a low displacement-rank structure. For example, the matrices involved may be Toeplitz or Hankel. We consider algorithms which incorporate pivoting without destroying the structure, and describe some recent results on the stability of these algorithms. We also compare these results with the corresponding stability results for the well-known algorithms of Schur/Bareiss and Levinson, and for algorithms based on the seminormal equations.

Key words: Bareiss algorithm, Levinson algorithm, Schur algorithm, Toeplitz matrices, displacement rank, generalized Schur algorithm, numerical stability.

AMS subject classifications: 65F05, 65G05, 47B35, 65F30

1. Motivation. The standard direct method for solving dense n x n systems of linear equations is Gaussian elimination with partial pivoting. The usual implementation requires of order n^3 arithmetic op...
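For reference, here is a sketch of the classical Levinson recursion the survey discusses, for a symmetric positive definite Toeplitz system T x = b with unit diagonal; it needs O(n^2) operations instead of the O(n^3) of dense elimination. The 3 x 3 data is illustrative.

```python
# Symmetric positive definite Toeplitz system T x = b with unit diagonal:
# T[i][j] = t[|i - j|] with t = [1.0] + r (illustrative data).
r = [0.5, 0.25]           # first row of T, diagonal entry excluded
b = [1.0, 2.0, 3.0]
n = 3

# Levinson's recursion grows, at each step k, both the Yule-Walker
# solution y (T_k y = -r[0:k]) and the system solution x (T_k x = b[0:k]).
y = [-r[0]]
x = [b[0]]
beta, alpha = 1.0, -r[0]

for k in range(1, n):
    beta = (1.0 - alpha*alpha) * beta
    # Expand x using the reversed Yule-Walker vector.
    mu = (b[k] - sum(r[i]*x[k-1-i] for i in range(k))) / beta
    x = [x[i] + mu*y[k-1-i] for i in range(k)] + [mu]
    if k < n - 1:
        # Expand the Yule-Walker solution itself.
        alpha = -(r[k] + sum(r[i]*y[k-1-i] for i in range(k))) / beta
        y = [y[i] + alpha*y[k-1-i] for i in range(k)] + [alpha]
```

The stability results surveyed above concern exactly this kind of recursion: it exploits the Toeplitz structure for speed, but, unlike Gaussian elimination with partial pivoting, its error behavior requires a separate analysis.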
Efficient computation of condition estimates for linear least squares problems, Research Report RR-8065, INRIA, 2012. Submitted to Numerical Algorithms
Cited by 2 (1 self)

Abstract
Linear least squares (LLS) is a classical linear algebra problem in scientific computing, arising for instance in many parameter estimation problems. In addition to efficiently computing LLS solutions, an important issue is to assess the numerical quality of the computed solution. The notion of conditioning provides a theoretical framework that can be used to measure the numerical sensitivity of a problem solution to perturbations in its data. We recall some results for least squares conditioning and we derive a statistical estimate for the conditioning of an LLS solution. We present numerical experiments to compare exact values and statistical estimates. We also propose performance results using new routines on top of the multicore-GPU library MAGMA. This set of routines is based on an efficient computation of the variance-covariance matrix for which, to our knowledge, there is no implementation in the current public domain libraries LAPACK and ScaLAPACK.
Iterative refinement schemes for minimum 2-norm solution of linear underdetermined systems
 Dept
, 2004
Cited by 1 (0 self)

Abstract
In a recent paper, Dax has given numerical evidence of the advantages of using a modified fixed precision iterative refinement (updated residual) instead of the classical one (recomputed residual) when a linear least squares problem is solved via the corrected seminormal equations of first kind with an accurate computation of the residual in mind. In this note we answer in the affirmative the natural question of whether Dax's result remains valid when an accurate computation of the minimum 2-norm solution of a linear underdetermined system is to be obtained via the corrected seminormal equations of second kind.

Key words: fixed precision iterative refinement, linear least squares, minimum norm solution, linear underdetermined system, corrected seminormal equations of second kind
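For orientation, here is the classical fixed precision iterative refinement loop with a recomputed residual, applied to a square toy system with a deliberately inexact inner solver standing in for the seminormal-equation solves discussed above (all data invented; Dax's updated-residual variant instead carries the residual forward between sweeps rather than recomputing it from x).

```python
# Classical fixed precision iterative refinement for A x = b, with a
# perturbed inverse M ~ A^{-1} playing the role of the inexact inner solver.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
# The exact inverse is (1/5)[[3, -1], [-1, 2]]; perturb it slightly so the
# inner solve is inexact and refinement has work to do.
M = [[0.61, -0.19], [-0.21, 0.41]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

x = matvec(M, b)                      # initial, inexact solution
for _ in range(20):
    Ax = matvec(A, x)
    r = [b[0] - Ax[0], b[1] - Ax[1]]  # recomputed residual b - A x
    d = matvec(M, r)                  # inexact correction step
    x = [x[0] + d[0], x[1] + d[1]]
```

Because the iteration matrix I - M A has small spectral radius here, the loop contracts the error at each sweep and converges to the exact solution (4/5, 7/5) despite every inner solve being wrong; that self-correcting behavior is what both refinement variants exploit.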