Results 1–10 of 40
Stochastic Perturbation Theory
1988
Cited by 678 (33 self)
Abstract
In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares and the eigenvalue problem.
Key words. perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value
AMS(MOS) subject classifications. 15A06, 15A12, 15A18, 15A52, 15A60
1. Introduction. Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, pr...
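The idea can be sketched numerically for a linear system Ax = b perturbed to (A + E)x̃ = b: the first-order expansion gives x̃ ≈ x − A⁻¹Ex, and sampling random E yields statistics on the variation that are typically smaller than the worst-case norm bound. The sizes, distributions, and constants below are illustrative choices, not from the paper.

```python
import numpy as np

# Sketch of first-order stochastic perturbation statistics for Ax = b
# perturbed to (A + E)x~ = b; all sizes/distributions are illustrative.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to be safely invertible
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

eps = 1e-6
norm_dx, norm_bound = [], []
for _ in range(2000):
    E = eps * rng.standard_normal((n, n))
    dx = -np.linalg.solve(A, E @ x)               # first-order term -A^{-1} E x
    norm_dx.append(np.linalg.norm(dx))
    # worst-case norm bound ||A^{-1}|| ||E|| ||x|| for the same E
    norm_bound.append(np.linalg.norm(np.linalg.inv(A), 2)
                      * np.linalg.norm(E, 2) * np.linalg.norm(x))

mean_variation = np.mean(norm_dx)   # statistic estimating the variation
mean_bound = np.mean(norm_bound)    # the norm bound is pessimistic on average
```

Since each sample satisfies ||dx|| ≤ ||A⁻¹|| ||E|| ||x||, the averaged statistic is guaranteed to sit below the averaged norm bound, which is the paper's point about norm bounds being unduly weak.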
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
1997
Cited by 60 (12 self)
Abstract
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
Separable Nonlinear Least Squares: the Variable Projection Method and its Applications
 Institute of Physics, Inverse Problems
2002
Cited by 56 (1 self)
Abstract
This paper considers nonlinear data fitting problems which have as their underlying model a linear combination of nonlinear functions. More generally, one can also consider that there are two sets of unknown parameters, where one set is dependent on the other and can be explicitly eliminated. Models of this type are very common, and we will show a variety of applications in different fields. Inasmuch as many inverse problems can be viewed as nonlinear data fitting problems, this material will be of interest to a wide cross-section of researchers and practitioners in parameter, material or system identification, signal analysis, the analysis of spectral data, medical and biological imaging, neural networks, robotics, telecommunications and model order reduction, to name a few.
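The elimination of the linear parameters is the core of the variable projection idea: for each trial of the nonlinear parameters a, the linear coefficients c are recovered by linear least squares, and only the projected residual is minimized over a. The sketch below uses a toy two-exponential model and a crude grid search in place of the Gauss–Newton iteration a real variable projection code would use; the model, data, and names are invented for illustration.

```python
import numpy as np

# Toy separable model y = c1*exp(-a1 t) + c2*exp(-a2 t); the linear
# coefficients c are eliminated by least squares for each trial of a.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 50)
a_true = np.array([0.5, 2.0])
c_true = np.array([3.0, 1.0])
y = np.exp(-np.outer(t, a_true)) @ c_true

def projected_residual(a):
    Phi = np.exp(-np.outer(t, a))                # basis matrix Phi(a)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # eliminate linear params
    return y - Phi @ c, c

# Crude search over the nonlinear parameters only (a real implementation
# would run Gauss-Newton on the projected functional instead).
best = None
for a1 in np.linspace(0.1, 1.0, 19):             # grid contains 0.5
    for a2 in np.linspace(1.0, 3.0, 21):         # grid contains 2.0
        r, c = projected_residual(np.array([a1, a2]))
        rss = r @ r
        if best is None or rss < best[0]:
            best = (rss, np.array([a1, a2]), c)
```

The search is only over the two nonlinear parameters; the two linear ones come along for free at each trial, which is exactly the dimension reduction that makes variable projection attractive.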
A Survey of Componentwise Perturbation Theory in Numerical Linear Algebra
 in Mathematics of Computation 1943–1993: A Half Century of Computational Mathematics
1994
Cited by 15 (0 self)
Abstract
Perturbation bounds in numerical linear algebra are traditionally derived and expressed using norms. Norm bounds cannot reflect the scaling or sparsity of a problem and its perturbation, and so can be unduly weak. If the problem data and its perturbation are measured componentwise, much smaller and more revealing bounds can be obtained. A survey is given of componentwise perturbation theory in numerical linear algebra, covering linear systems, the matrix inverse, matrix factorizations, the least squares problem, and the eigenvalue and singular value problems. Most of the results described have been published in the last five years.
Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E. (G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory, 1990)
1. Introduction. Matrix analysis would not have developed into the vast subject it is today without the concept of representing a matrix by ...
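The gap between norm bounds and componentwise bounds is easy to see on a badly scaled system: the normwise condition number κ∞(A) = ||A||∞ ||A⁻¹||∞ blows up with the scaling, while the componentwise (Skeel) condition number || |A⁻¹| |A| |x| ||∞ / ||x||∞ does not. The matrix below is an illustrative extreme case, not from the survey.

```python
import numpy as np

# A badly row-scaled but otherwise harmless system: normwise condition
# number is 1e8, while the componentwise (Skeel) number stays at 1.
A = np.array([[1.0, 0.0],
              [0.0, 1e-8]])
x = np.array([1.0, 1.0])
Ainv = np.linalg.inv(A)

# normwise: kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf
kappa = np.linalg.norm(A, np.inf) * np.linalg.norm(Ainv, np.inf)

# componentwise (Skeel): || |A^{-1}| |A| |x| ||_inf / ||x||_inf
skeel = (np.linalg.norm(np.abs(Ainv) @ np.abs(A) @ np.abs(x), np.inf)
         / np.linalg.norm(x, np.inf))
```

For any diagonal A the product |A⁻¹||A| is the identity, so the componentwise number is 1 regardless of the scaling: componentwise relative perturbations of this system are perfectly harmless, exactly the situation where norm bounds are "unduly weak".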
On mixed and componentwise condition numbers for Moore–Penrose inverse and linear least squares problems
 Mathematics of Computation
2007
Cited by 13 (4 self)
Abstract
Classical condition numbers are normwise: they measure the size of both input perturbations and output errors using some norms. To take into account the relative scaling of each data component, and, in particular, a possible data sparseness, componentwise condition numbers have been increasingly considered. These are mostly of two kinds: mixed and componentwise. In this paper, we give explicit expressions, computable from the data, for the mixed and componentwise condition numbers for the computation of the Moore–Penrose inverse as well as for the computation of solutions and residues of linear least squares problems. In both cases the data matrices have full column (row) rank.
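For comparison, the classical normwise quantities mentioned above are directly computable: for a full-column-rank A the Moore–Penrose inverse gives both the least squares solution and the spectral-norm condition number κ₂(A) = ||A||₂ ||A⁺||₂. The small example below is an illustrative sketch, not the paper's mixed/componentwise formulas.

```python
import numpy as np

# Normwise ingredients for a full-column-rank least squares problem:
# pseudoinverse, solution, residual, and kappa_2(A) = ||A||_2 ||A^+||_2.
A = np.array([[1.0, 0.0],
              [0.0, 1e-3],
              [0.0, 0.0]])
pinv = np.linalg.pinv(A)                      # Moore-Penrose inverse (2 x 3)
kappa = np.linalg.norm(A, 2) * np.linalg.norm(pinv, 2)

b = np.array([1.0, 2e-3, 5.0])
x = pinv @ b                                  # least squares solution
r = b - A @ x                                 # residual, orthogonal to range(A)
```

Here κ₂(A) = 1000 purely because of the second column's scaling, even though the solution x = (1, 2) is perfectly well determined componentwise; this mismatch is the motivation for the mixed and componentwise condition numbers the paper derives.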
Numerical Behaviour of the Modified Gram–Schmidt GMRES Implementation
1997
Cited by 11 (1 self)
Abstract
In [6] the Generalized Minimal Residual Method (GMRES), which constructs the Arnoldi basis and then solves the transformed least squares problem, was studied. It was proved that GMRES with the Householder orthogonalization-based implementation of the Arnoldi process (HHA), see [9], is backward stable. In practical computations, however, the Householder orthogonalization is too expensive, and it is usually replaced by the modified Gram–Schmidt process (MGSA). Unlike the HHA case, in the MGSA implementation the orthogonality of the Arnoldi basis vectors is not preserved near the level of machine precision. Despite this, the MGSA GMRES performs surprisingly well, and its convergence behaviour and the ultimately attainable accuracy do not differ significantly from those of the HHA GMRES. As was observed, but not explained, in [6], it is the linear independence of the Arnoldi basis, not the orthogonality near machine precision, that is important. Until the linear independence of the b...
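The modified Gram–Schmidt Arnoldi process referred to above is short to write down, and its departure from orthogonality can be measured directly as ||I − VᵀV||. The sketch below is a minimal illustration on a random matrix (sizes are arbitrary choices, not from the paper); the Arnoldi relation A·Vₘ = V·H holds to machine precision regardless of how much orthogonality is lost.

```python
import numpy as np

# Modified Gram-Schmidt Arnoldi: builds V (orthonormal-ish Krylov basis)
# and the Hessenberg matrix H with A V[:, :m] = V H.
def mgs_arnoldi(A, v0, m):
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # MGS: subtract projections one by one,
            H[i, j] = V[:, i] @ w         # recomputing the inner product each time
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]     # assumes no breakdown (H[j+1,j] != 0)
    return V, H

rng = np.random.default_rng(2)
n, m = 50, 20
A = rng.standard_normal((n, n))
V, H = mgs_arnoldi(A, rng.standard_normal(n), m)
ortho_err = np.linalg.norm(np.eye(m + 1) - V.T @ V)   # departure from orthogonality
```

On a well-behaved random matrix `ortho_err` stays tiny; the abstract's point is that even when it grows well above machine precision on harder problems, MGS GMRES keeps working as long as the columns of V remain linearly independent.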
Optimization and Regularization of Nonlinear Least Squares Problems
1996
Cited by 11 (2 self)
Abstract
An important branch in scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters an optimization problem is often formed, frequently a nonlinear least squares problem. This thesis mainly contributes to the development of tools, techniques, and theories for nonlinear least squares problems that lack a well-defined solution. Specifically, the intention is to generalize regularization methods for linear inverse problems to also handle nonlinear inverse problems. The investigation started by considering an exactly rank-deficient problem, i.e., a problem with a dependency among the parameters. It turns out that such a problem can be formulated as a nonlinear minimum norm problem. To solve this optimization problem two regularization methods are proposed: a Gauss–Newton Tikhonov regularized method and a minimum norm Gauss–Newton method. It is shown t...
Accuracy and Stability of the Null Space Method for Solving the Equality Constrained Least Squares Problem
 BIT
1999
Cited by 10 (4 self)
Abstract
The null space method is a standard method for solving the linear least squares problem subject to equality constraints (the LSE problem). We show that three variants of the method, including one used in LAPACK that is based on the generalized QR factorization, are numerically stable. We derive two perturbation bounds for the LSE problem: one of standard form that is not attainable, and a bound that yields the condition number of the LSE problem to within a small constant factor. By combining the backward error analysis and perturbation bounds we derive an approximate forward error bound suitable for practical computation. Numerical experiments are given to illustrate the sharpness of this bound. Key words: Constrained least squares problem, null space method, rounding error analysis, condition number, generalized QR factorization, LAPACK
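The null space method itself is compact: an orthonormal basis Q2 of null(B) is obtained from a complete QR factorization of Bᵀ, every feasible point is written as x = x_p + Q2·y for a particular solution x_p of Bx = d, and y is found from an unconstrained least squares problem. The sketch below uses random data of arbitrary size for illustration; it is not the LAPACK generalized-QR variant the abstract analyzes.

```python
import numpy as np

# Null space method for the LSE problem:
#   minimize ||Ax - b||_2  subject to  Bx = d,  B full row rank.
rng = np.random.default_rng(3)
m, n, p = 8, 5, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
B = rng.standard_normal((p, n))
d = rng.standard_normal(p)

# Complete QR of B^T: trailing n-p columns of Q span null(B).
Q, _ = np.linalg.qr(B.T, mode='complete')    # Q is n x n orthogonal
Q2 = Q[:, p:]                                # orthonormal basis of null(B)

x_p = np.linalg.lstsq(B, d, rcond=None)[0]   # particular solution of Bx = d
# Unconstrained reduced problem: minimize ||A Q2 y - (b - A x_p)||_2.
y = np.linalg.lstsq(A @ Q2, b - A @ x_p, rcond=None)[0]
x = x_p + Q2 @ y                             # feasible and optimal
```

Feasibility is automatic because B·Q2 = 0, and optimality over the constraint set is the normal-equations condition (A·Q2)ᵀ(b − Ax) = 0 for the reduced problem.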
Regularization Methods for Nonlinear Least Squares Problems. Part I: Exact Rank-deficiency
1998
Cited by 10 (5 self)
Abstract
An optimization problem that does not have a unique local minimum is often very difficult to solve. For a nonlinear least squares problem this is the case when the Jacobian is rank deficient in a neighborhood of a local minimum. Moreover, a Gauss–Newton method such as Levenberg–Marquardt will have very slow convergence for such a problem. We analyze these problems where the Jacobian is rank deficient and suggest other problem formulations more suitable for Gauss–Newton methods. The two methods we propose are a truncated Gauss–Newton method and a Gauss–Newton method based on the Tikhonov regularized nonlinear least squares problem. We test the methods on artificial problems where the rank of the Jacobian and the nonlinearity of the problem may be chosen, making it possible to show the different features of the problem and the methods. The conclusion from the analysis and the tests is that the two methods have similar local convergence properties. The method based on Tikhonov regulari...
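The Tikhonov-regularized Gauss–Newton idea can be sketched in a few lines: for minimize ||r(x)||² + λ||x||², each step solves (JᵀJ + λI)·dx = −(Jᵀr + λx), so the λI term keeps the linear system solvable even when JᵀJ is singular. The toy problem below, with a deliberately rank-1 Jacobian, is my illustration and not one of the paper's test problems.

```python
import numpy as np

# One family of steps of Tikhonov-regularized Gauss-Newton for
#   minimize ||r(x)||^2 + lam * ||x||^2,
# on a toy model whose Jacobian is rank deficient everywhere
# (r depends only on x[0] + x[1]), so plain Gauss-Newton breaks down.
def residual(x):
    s = x[0] + x[1]
    return np.array([s - 1.0, 2.0 * (s - 1.0)])

def jacobian(x):
    return np.array([[1.0, 1.0],
                     [2.0, 2.0]])        # rank 1: J^T J is singular

x = np.array([3.0, -1.0])
lam = 1e-3
for _ in range(50):
    r, J = residual(x), jacobian(x)
    # regularized normal equations: (J^T J + lam I) dx = -(J^T r + lam x)
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -(J.T @ r + lam * x))
    x = x + dx
```

The iteration settles on the unique stationary point of the regularized problem, which by symmetry has x[0] = x[1] = 5/(10 + λ); the regularization has picked out one well-defined answer from the line of minimizers of the unregularized problem.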
Optimal Sensitivity Analysis of Linear Least Squares
2003
Cited by 9 (1 self)
Abstract
Results from the many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least squares problems. It is shown that such formulas can be used to evaluate condition numbers. For full rank problems, Frobenius norm condition numbers are determined exactly, and spectral norm condition numbers are determined within a factor of the square root of two. As a result, the necessary and sufficient criteria for well conditioning are established. A source of ill conditioning is found that helps explain the failure of simple iterative refinement. Some textbook discussions of ill conditioning are found to be fallacious, and some error bounds in the literature are found to unnecessarily overestimate the error. Finally, several open questions are described.