Results 1-10 of 31
The Test Matrix Toolbox for Matlab (version 3.0)
 Numerical Analysis Report
, 1995
Abstract

Cited by 50 (15 self)
We describe version 3.0 of the Test Matrix Toolbox for Matlab 4.2. The toolbox contains a collection of test matrices, routines for visualizing matrices, routines for direct search optimization, and miscellaneous routines that provide useful additions to Matlab's existing set of functions. There are 58 parametrized test matrices, which are mostly square, dense, nonrandom, and of arbitrary dimension. The test matrices include ones with known inverses or known eigenvalues; ill-conditioned or rank-deficient matrices; and symmetric, positive definite, orthogonal, defective, involutory, and totally positive matrices. The visualization routines display surface plots of a matrix and its (pseudo-)inverse, the field of values, Gershgorin disks, and two- and three-dimensional views of pseudospectra. The direct search optimization routines implement the alternating directions method, the multidirectional search method and the Nelder-Mead simplex method. We explain the need for collections of test matrices and summarize the features of the collection in the toolbox. We give examples of the use of the toolbox and explain some of the interesting properties of the Frank matrix and magic square matrices. The leading comment lines from all the toolbox routines are listed.
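The Frank matrix mentioned in this abstract is a classic test case: its determinant is exactly 1 and its eigenvalues occur in reciprocal pairs, yet the small eigenvalues are very sensitive to perturbation. A minimal pure-Python sketch, independent of the toolbox (the function names are ours, assuming the usual upper Hessenberg definition of the Frank matrix):

```python
def frank(n):
    # Frank matrix (0-based indexing): upper Hessenberg,
    # entry (i, j) = n - max(i, j) for j >= i - 1, else 0.
    return [[float(n - max(i, j)) if j >= i - 1 else 0.0
             for j in range(n)] for i in range(n)]

def det(a):
    # determinant via Gaussian elimination with partial pivoting
    a = [row[:] for row in a]
    n, d = len(a), 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            d = -d
        d *= a[k][k]
        for r in range(k + 1, n):
            m = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= m * a[k][c]
    return d

# det(frank(n)) is exactly 1 for every n, even though the matrix
# becomes a hard eigenvalue test problem as n grows.
```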
Analysis of the Cholesky decomposition of a semidefinite matrix
 in Reliable Numerical Computation
, 1990
Abstract

Cited by 44 (1 self)
Perturbation theory is developed for the Cholesky decomposition of an n × n symmetric positive semidefinite matrix A of rank r. The matrix W = A₁₁⁻¹A₁₂ is found to play a key role in the perturbation bounds, where A₁₁ and A₁₂ are r × r and r × (n − r) submatrices of A respectively. A backward error analysis is given; it shows that the computed Cholesky factors are the exact ones of a matrix whose distance from A is bounded by 4r(r + 1)(‖W‖₂ + 1)² u ‖A‖₂ + O(u²), where u is the unit roundoff. For the complete pivoting strategy it is shown that ‖W‖₂² ≤ (n − r)(4^r − 1)/3, and empirical evidence that ‖W‖₂ is usually small is presented. The overall conclusion is that the Cholesky algorithm with complete pivoting is stable for semidefinite matrices. Similar perturbation results are derived for the QR decomposition with column pivoting and for the LU decomposition with complete pivoting. The results give new insight into the reliability of these decompositions in rank estimation. Key words: Cholesky decomposition, positive semidefinite matrix, perturbation theory, backward error analysis, QR decomposition, rank estimation, LINPACK.
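The stability result above concerns Cholesky with complete (diagonal) pivoting, which also reveals the rank r. A minimal pure-Python sketch of the pivoted algorithm (our illustration, not the paper's code; the absolute stopping tolerance is simplistic):

```python
from math import sqrt

def pivoted_cholesky(A, tol=1e-12):
    # Cholesky with complete (diagonal) pivoting for a symmetric
    # positive semidefinite matrix. Returns (L, piv, rank) such that
    # A[piv[i]][piv[j]] ~ sum_t L[i][t] * L[j][t].
    A = [row[:] for row in A]
    n = len(A)
    piv = list(range(n))
    L = [[0.0] * n for _ in range(n)]
    rank = 0
    for k in range(n):
        q = max(range(k, n), key=lambda i: A[i][i])  # largest remaining diagonal
        if A[q][q] <= tol:
            break  # remaining block is numerically zero: rank found
        if q != k:  # symmetric swap of rows/columns k and q
            A[k], A[q] = A[q], A[k]
            for row in A:
                row[k], row[q] = row[q], row[k]
            L[k], L[q] = L[q], L[k]
            piv[k], piv[q] = piv[q], piv[k]
        L[k][k] = sqrt(A[k][k])
        for i in range(k + 1, n):
            L[i][k] = A[i][k] / L[k][k]
        for i in range(k + 1, n):  # symmetric Schur complement update
            for j in range(k + 1, n):
                A[i][j] -= L[i][k] * L[j][k]
        rank += 1
    return L, piv, rank

# rank-2 example: A = B B^T with a 4 x 2 factor B
B = [[1.0, 0.0], [2.0, 1.0], [0.0, 3.0], [1.0, 1.0]]
A = [[sum(B[i][t] * B[j][t] for t in range(2)) for j in range(4)]
     for i in range(4)]
L, piv, rank = pivoted_cholesky(A)
```

In a production code the tolerance would be taken relative to the largest diagonal entry, but the sketch suffices to show how the factorization stops after exactly r successful steps.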
Computing An Eigenvector With Inverse Iteration
 SIAM Review
, 1997
Abstract

Cited by 41 (1 self)
The purpose of this paper is twofold: to analyse the behaviour of inverse iteration for computing a single eigenvector of a complex, square matrix; and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic. In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough. In the case of nonnormal matrices, we show that the iterates converge asymptotically to an invariant subspace. However, the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the no...
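When the shift is close to a well separated eigenvalue, the convergence discussed above is extremely fast. A minimal pure-Python sketch of inverse iteration (our illustration; the linear solver is a naive dense elimination, and the matrix and shift are arbitrary):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def inverse_iteration(A, shift, iters=10):
    # repeatedly solve (A - shift*I) w = v and normalize; v converges to
    # an eigenvector belonging to the eigenvalue nearest the shift
    n = len(A)
    B = [[A[i][j] - (shift if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = solve(B, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(Av[i] * v[i] for i in range(n))  # Rayleigh quotient estimate
    return lam, v

A = [[2.0, 1.0], [1.0, 3.0]]   # eigenvalues (5 +/- sqrt(5)) / 2
lam, v = inverse_iteration(A, shift=3.6)
```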
Matrix nearness problems and applications
 Applications of Matrix Theory
, 1989
Abstract

Cited by 32 (6 self)
A matrix nearness problem consists of finding, for an arbitrary matrix A, a nearest member of some given class of matrices, where distance is measured in a matrix norm. A survey of nearness problems is given, with particular emphasis on the fundamental properties of symmetry, positive definiteness, orthogonality, normality, rank-deficiency and instability. Theoretical results and computational methods are described. Applications of nearness problems in areas including control theory, numerical analysis and statistics are outlined.
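The simplest of these problems has a closed form: in the Frobenius norm, the nearest symmetric matrix to A is its symmetric part (A + Aᵀ)/2, a result of Fan and Hoffman. A small pure-Python illustration (function names are ours):

```python
def nearest_symmetric(A):
    # nearest symmetric matrix to A in the Frobenius norm:
    # the symmetric part (A + A^T) / 2
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2.0 for j in range(n)] for i in range(n)]

def fro_dist(A, B):
    # Frobenius-norm distance between two equally sized matrices
    return sum((A[i][j] - B[i][j]) ** 2
               for i in range(len(A)) for j in range(len(A))) ** 0.5

A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 4.0],
     [6.0, 0.0, 5.0]]
S = nearest_symmetric(A)
# A - S is the skew-symmetric part, so the distance to the symmetric
# class equals the norm of that skew part
```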
On Approximate Irreducibility of Polynomials in Several Variables
Abstract

Cited by 19 (7 self)
We study the problem of bounding a polynomial away from polynomials which are absolutely irreducible. Such separation bounds are useful for testing whether a numerical polynomial is absolutely irreducible, given a certain tolerance on its coefficients. Using an absolute irreducibility criterion due to Ruppert, we are able to find useful separation bounds, in several norms, for bivariate polynomials. We also use Ruppert's criterion to derive new, more effective Noether forms for polynomials of arbitrarily many variables. These forms lead to small separation bounds for polynomials of arbitrarily many variables.
Perturbation Analyses for the QR Factorization
 SIAM J. Matrix Anal. Appl
, 1997
Abstract

Cited by 16 (11 self)
This paper gives perturbation analyses for Q₁ and R in the QR factorization A = Q₁R, Q₁ᵀQ₁ = I, for a given real m × n matrix A of rank n. The analyses more accurately reflect the sensitivity of the problem than previous normwise results. The condition numbers here are altered by any column pivoting used in AP = Q₁R, and the condition numbers for R are bounded for a fixed n when the standard column pivoting strategy is used. This strategy tends to improve the condition of Q₁, so the computed Q₁ and R will probably both have greatest accuracy when we use the standard column pivoting strategy. First order normwise perturbation analyses are given for both Q₁ and R. It is seen that the analysis for R may be approached in two ways: a detailed "matrix-vector equation" analysis which provides tight bounds and resulting true condition numbers, which unfortunately are costly to compute and not very intuitive; and a perhaps simpler "matrix equation" analysis which provides results that are usually weaker but easier to interpret, and which allows efficient computation of a satisfactory estimate for the true condition number. Key words: QR factorization, perturbation analysis, condition estimation, matrix equations, pivoting. AMS Subject Classifications: 15A23, 65F35.
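The standard column pivoting strategy referred to above picks, at each step, the remaining column of largest 2-norm, which makes the diagonal of R non-increasing in magnitude. A minimal pure-Python sketch using modified Gram-Schmidt (our illustration, not the paper's algorithm; Q is returned as a list of columns):

```python
def qr_column_pivoting(A):
    # Modified Gram-Schmidt QR with standard column pivoting.
    # Returns (Q, R, piv) with A[:, piv] ~ Q R and |R[k][k]| non-increasing.
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # columns of A
    piv = list(range(n))
    Q = []                                  # orthonormal columns
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # pivot: bring the remaining column of largest 2-norm to position k
        q = max(range(k, n), key=lambda j: sum(x * x for x in cols[j]))
        cols[k], cols[q] = cols[q], cols[k]
        piv[k], piv[q] = piv[q], piv[k]
        for i in range(k):  # keep already-computed rows of R consistent
            R[i][k], R[i][q] = R[i][q], R[i][k]
        nrm = sum(x * x for x in cols[k]) ** 0.5
        R[k][k] = nrm
        qk = [x / nrm for x in cols[k]]
        Q.append(qk)
        for j in range(k + 1, n):  # orthogonalize the remaining columns
            R[k][j] = sum(qk[i] * cols[j][i] for i in range(m))
            cols[j] = [cols[j][i] - R[k][j] * qk[i] for i in range(m)]
    return Q, R, piv

A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0], [1.0, 0.0, 1.0]]
Q, R, piv = qr_column_pivoting(A)
```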
Approximate greatest common divisors of several polynomials with linearly constrained coefficients and singular polynomials
 Manuscript
, 2006
Abstract

Cited by 14 (9 self)
We consider the problem of computing minimal real or complex deformations to the coefficients in a list of relatively prime real or complex multivariate polynomials such that the deformed polynomials have a greatest common divisor (GCD) of at least a given degree k. In addition, we restrict the deformed coefficients by a given set of linear constraints, thus introducing the linearly constrained approximate GCD problem. We present an algorithm based on a version of the structured total least norm (STLN) method and demonstrate, on a diverse set of benchmark polynomials, that the algorithm in practice computes globally minimal approximations. As an application of the linearly constrained approximate GCD problem, we present an STLN-based method that computes for a real or complex polynomial the nearest real or complex polynomial that has a root of multiplicity at least k. We demonstrate that the algorithm in practice computes, on the benchmark polynomials given in the literature, the known globally optimal nearest singular polynomials. Our algorithms can handle, via randomized preconditioning, the difficult case when the nearest solution to a list of real input polynomials actually has non-real complex coefficients.
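A classical starting point for such GCD computations is that, for univariate p and q, the degree of gcd(p, q) equals the rank deficiency of their Sylvester matrix; STLN-style methods perturb the coefficients so that this structured matrix becomes rank deficient. A minimal pure-Python illustration of the exact statement (coefficients in descending order; function names are ours):

```python
def sylvester(p, q):
    # Sylvester matrix of p (degree m) and q (degree n): an
    # (m+n) x (m+n) matrix of shifted coefficient rows
    m, n = len(p) - 1, len(q) - 1
    N = m + n
    S = []
    for i in range(n):  # n shifted copies of p's coefficients
        S.append([0.0] * i + [float(c) for c in p] + [0.0] * (N - m - i - 1))
    for i in range(m):  # m shifted copies of q's coefficients
        S.append([0.0] * i + [float(c) for c in q] + [0.0] * (N - n - i - 1))
    return S

def rank(M, tol=1e-9):
    # numerical rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        p = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) <= tol:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

# p = (x+1)(x+2), q = (x+1)(x+3): gcd = x + 1, so deficiency 1
S = sylvester([1, 3, 2], [1, 4, 3])
```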
New Perturbation Analyses For The Cholesky Factorization
, 1995
Abstract

Cited by 7 (7 self)
this paper is to establish new first order bounds on the norm of the perturbation in the Cholesky factor, sharper than those of Sun (1991) and Stewart (1993). Also, we obtain a new first order bound for the components of the perturbation, and give strict bounds on the norm and components of the perturbation. In the remainder of this section we review some useful tools and results by showing one way of obtaining the first order normwise perturbation bound given by Sun (1991) and Stewart (1993) for the Cholesky factor. Theorem 1. Let A ∈ ℝ^{n×n} be symmetric positive definite, with the Cholesky factorization A = RᵀR. Let ΔA ∈ ℝ^{n×n} be symmetric. If ε ≡ ‖ΔA‖_F / ‖A‖₂ satisfies κ₂(A)ε < 1, (1) where κ₂(A) ≡ ‖A‖₂ ‖A⁻¹‖₂, then A + ΔA has the Cholesky factorization A + ΔA = (R + ΔR)ᵀ(R + ΔR), where ‖ΔR‖_F / ‖R‖₂ ...
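The flavour of such first order bounds is easy to check numerically: shrinking a small symmetric perturbation ΔA by a factor of ten should shrink ΔR by essentially the same factor. A small pure-Python experiment (ours, not the paper's; the matrices are arbitrary):

```python
from math import sqrt

def cholesky(A):
    # textbook Cholesky factorization A = R^T R, R upper triangular
    n = len(A)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = sqrt(s) if i == j else s / R[i][i]
    return R

def fro(M):
    # Frobenius norm
    return sum(x * x for row in M for x in row) ** 0.5

A = [[4.0, 2.0], [2.0, 3.0]]    # symmetric positive definite
E = [[1.0, -1.0], [-1.0, 2.0]]  # symmetric perturbation direction
R = cholesky(A)

def delta_R(eps):
    # ||Delta R||_F for the perturbation Delta A = eps * E
    Ap = [[A[i][j] + eps * E[i][j] for j in range(2)] for i in range(2)]
    Rp = cholesky(Ap)
    return fro([[Rp[i][j] - R[i][j] for j in range(2)] for i in range(2)])

# to first order, delta_R(eps) is proportional to eps for small eps
```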