Results 1–10 of 46
Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems
 Numer. Math
, 1994
"... We discuss two inverse free, highly parallel, spectral divide and conquer algorithms: one for computing an invariant subspace of a nonsymmetric matrix and another one for computing left and right de ating subspaces of a regular matrix pencil A, B. These two closely related algorithms are based on ea ..."
Abstract

Cited by 61 (12 self)
We discuss two inverse-free, highly parallel, spectral divide and conquer algorithms: one for computing an invariant subspace of a nonsymmetric matrix and another one for computing left and right deflating subspaces of a regular matrix pencil (A, B). These two closely related algorithms are based on earlier ones of Bulgakov, Godunov and Malyshev, but improve on them in several ways. These algorithms only use easily parallelizable linear algebra building blocks: matrix multiplication and QR decomposition. The existing parallel algorithms for the nonsymmetric eigenproblem use the matrix sign function, which is faster but can be less stable than the new algorithm. Appears also as
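The sign-function alternative that the abstract contrasts with can be sketched in a few lines. The following is an illustrative Python/NumPy sketch (the function name, iteration cap, and tolerance are my own, not from the paper); it makes visible why that approach needs explicit matrix inverses, which is exactly what the inverse-free algorithms avoid:

```python
import numpy as np

def matrix_sign(A, max_iter=100, tol=1e-12):
    """Newton iteration X <- (X + X^{-1}) / 2, converging to sign(A).

    Requires A to have no eigenvalues on the imaginary axis. Each step
    inverts X explicitly, which is the potential source of instability
    that inverse-free spectral divide and conquer is designed to avoid.
    """
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

# The spectral projector onto the invariant subspace for eigenvalues
# with negative real part is then P = (I - sign(A)) / 2.
```

For a diagonal matrix the iteration simply drives each diagonal entry to ±1 according to its sign, which is a convenient sanity check.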
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
, 1997
"... We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the a ..."
Abstract

Cited by 55 (12 self)
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
The Test Matrix Toolbox for Matlab (version 3.0). Numerical Analysis Report No
, 1995
"... We describeversion 3.0 of the Test Matrix Toolbox forMatlab 4.2. The toolbox contains a collection of test matrices, routines for visualizing matrices, routines for direct search optimization, and miscellaneous routines that provide useful additions to Matlab's existing set of functions. There are 5 ..."
Abstract

Cited by 50 (15 self)
We describe version 3.0 of the Test Matrix Toolbox for Matlab 4.2. The toolbox contains a collection of test matrices, routines for visualizing matrices, routines for direct search optimization, and miscellaneous routines that provide useful additions to Matlab's existing set of functions. There are 58 parametrized test matrices, which are mostly square, dense, nonrandom, and of arbitrary dimension. The test matrices include ones with known inverses or known eigenvalues; ill-conditioned or rank deficient matrices; and symmetric, positive definite, orthogonal, defective, involutory, and totally positive matrices. The visualization routines display surface plots of a matrix and its (pseudo) inverse, the field of values, Gershgorin disks, and two- and three-dimensional views of pseudospectra. The direct search optimization routines implement the alternating directions method, the multidirectional search method and the Nelder-Mead simplex method. We explain the need for collections of test matrices and summarize the features of the collection in the toolbox. We give examples of the use of the toolbox and explain some of the interesting properties of the Frank matrix and magic square matrices. The leading comment lines from all the toolbox routines are listed.
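The toolbox itself is Matlab code, but the Frank matrix the abstract singles out is easy to construct anywhere. Here is a Python/NumPy sketch using one common convention for the matrix (the toolbox may use a variant; the function name `frank` is illustrative):

```python
import numpy as np

def frank(n):
    """Frank matrix (one common convention): upper Hessenberg with
    determinant 1. Its smallest eigenvalues are notoriously sensitive
    to perturbations, which is what makes it a classic test matrix.
    """
    F = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j >= i:                     # upper triangle (incl. diagonal)
                F[i - 1, j - 1] = n - j + 1
            elif j == i - 1:               # first subdiagonal
                F[i - 1, j - 1] = n - i + 1
    return F
```

For example, `frank(3)` gives `[[3, 2, 1], [2, 2, 1], [0, 1, 1]]`, whose determinant is 1; under this convention the determinant is 1 for every n, even though the matrix is far from well conditioned.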
Fast linear algebra is stable
 In preparation
, 2006
"... In [23] we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of nbyn matrices can be done by any algorithm in O(n ω+η) operations for any η> 0, then it can be done stably in O(n ω+η) operations for any η> ..."
Abstract

Cited by 25 (15 self)
In [23] we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of n-by-n matrices can be done by any algorithm in O(n^(ω+η)) operations for any η > 0, then it can be done stably in O(n^(ω+η)) operations for any η > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition can also be done stably (in a normwise sense) in O(n^(ω+η)) operations.
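The best-known member of the class of fast recursive multiplication algorithms the abstract refers to is Strassen's method, which uses seven recursive multiplications instead of eight. A minimal Python/NumPy sketch, restricted to power-of-two sizes for simplicity (the function name and recursion cutoff are illustrative, not from the paper):

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen multiplication for square matrices whose size is a
    power of two. Below `cutoff`, fall back to ordinary multiplication.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C
```

The error of such a scheme is bounded normwise rather than componentwise, which is precisely the sense of stability the abstract is about.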
Fast And Stable Algorithms For Banded Plus Semiseparable Systems Of Linear Equations
 SIAM J. Matrix Anal. Appl.
"... We present fast and numerically stable algorithms for the solution of linear systems of equations where the coefficient matrix can be written in the form of a banded plus semiseparable matrix. Such matrices include banded matrices, semiseparable matrices, and blockdiagonal plus semiseparable matric ..."
Abstract

Cited by 25 (6 self)
We present fast and numerically stable algorithms for the solution of linear systems of equations where the coefficient matrix can be written in the form of a banded plus semiseparable matrix. Such matrices include banded matrices, semiseparable matrices, and block-diagonal plus semiseparable matrices as special cases. Our algorithms are based on novel matrix factorizations developed specifically for matrices with such structures. We also present interesting numerical results with these algorithms.
A Block QR Algorithm and the Singular Value Decomposition
 Linear Algebra Appl.
, 1993
"... ..."
Robot Localization from Landmarks using Recursive Total Least Squares
 In Proceedings of the IEEE International Conference on Robotics and Automation
, 1996
"... In the robot navigation problem, noisy sensor data must be filtered to obtain the best estimate of the robot position. We propose using a Recursive Total Least Squares algorithm to obtain estimates of the robot position. We avoid several weaknesses inherent in the use of the Kalman and extended Kalm ..."
Abstract

Cited by 14 (1 self)
In the robot navigation problem, noisy sensor data must be filtered to obtain the best estimate of the robot position. We propose using a Recursive Total Least Squares algorithm to obtain estimates of the robot position. We avoid several weaknesses inherent in the use of the Kalman and extended Kalman filters, achieving much faster convergence without good initial (a priori) estimates of the position. The performance of the method is illustrated both by simulation and on an actual mobile robot with a camera.
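The batch (non-recursive) total least squares problem that RTLS solves incrementally has a classical closed form via the SVD. A Python/NumPy sketch (the function name is mine, and the paper's recursive update is not reproduced here): the solution is read off the right singular vector of the augmented matrix [A | b] for the smallest singular value.

```python
import numpy as np

def tls(A, b):
    """Batch total least squares: minimize ||[E  f]||_F subject to
    (A + E) x = b + f, allowing errors in both A and b (unlike
    ordinary least squares, which perturbs only b).
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]          # right singular vector for the smallest singular value
    if abs(v[n]) < 1e-14:
        raise np.linalg.LinAlgError("TLS solution does not exist")
    return -v[:n] / v[n]
```

On noise-free data the smallest singular value of [A | b] is zero and the exact parameters are recovered; with noisy landmark measurements the method perturbs both sides of the system, which is the property motivating its use for position estimation.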
Recursive Total Least Squares: An Alternative to the Discrete Kalman Filter
, 1993
"... The discrete Kalman filter, which is becoming a common tool for reducing uncertainty in robot navigation, suffers from some basic limitations when used for such applications. In this paper, we describe a recursive total least squares estimator (RTLS) as an alternative to the Kalman filter, and compa ..."
Abstract

Cited by 13 (4 self)
The discrete Kalman filter, which is becoming a common tool for reducing uncertainty in robot navigation, suffers from some basic limitations when used for such applications. In this paper, we describe a recursive total least squares estimator (RTLS) as an alternative to the Kalman filter, and compare their performances in three sets of experiments involving problems in robot navigation. In all cases, the RTLS filter converged faster and to greater accuracy than the Kalman filter.
1 Introduction
The discrete Kalman filter [14], commonly used for prediction and detection of signals in communication and control problems, has more recently become a popular method of reducing uncertainty in robot navigation. One of the main advantages of using the filter is that it is recursive, eliminating the necessity for storing large amounts of data. The filter is basically a recursive weighted least squares estimator of the state of a dynamical system using a given transition rule. Suppose we have a di...
UTV tools: MATLAB templates for rank-revealing UTV decompositions
 Numer. Algorithms
, 1999
"... published in Numerical Algorithms and the paper's text is reprinted here by kind permission ..."
Abstract

Cited by 13 (2 self)
published in Numerical Algorithms and the paper's text is reprinted here by kind permission