Results 1–10 of 22
Analysis of Acceleration Strategies for Restarted Minimal Residual Methods
, 2000
Abstract

Cited by 31 (6 self)
We provide an overview of existing strategies which compensate for the deterioration of convergence of minimum residual (MR) Krylov subspace methods due to restarting. We evaluate the popular practice of using nearly invariant subspaces to either augment Krylov subspaces or to construct preconditioners which invert on these subspaces. In the case where these spaces are exactly invariant, the augmentation approach is shown to be superior. We further show how a strategy recently introduced by de Sturler for truncating the approximation space of an MR method can be interpreted as a controlled loosening of the condition for global MR approximation based on the canonical angles between subspaces. For the special case of Krylov subspace methods, we give a concise derivation of the role of Ritz and harmonic Ritz values and vectors in the polynomial description of Krylov spaces as well as of the use of the implicitly updated Arnoldi method for manipulating Krylov spaces.
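To make the restarting setting concrete, here is a minimal numpy sketch of GMRES(m), the restarted minimal residual method whose convergence deterioration these strategies aim to compensate for; the function name, parameters, and test problem are illustrative, not taken from the paper.

```python
import numpy as np

def gmres_restarted(A, b, m=10, restarts=30, tol=1e-10):
    """Minimal GMRES(m): Arnoldi process plus a least-squares solve on the
    Hessenberg matrix, restarted from the current iterate every m steps."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))          # orthonormal Krylov basis
        H = np.zeros((m + 1, m))          # upper Hessenberg projection
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):        # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:       # lucky breakdown: invariant subspace
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # minimal residual correction: argmin_y || beta*e1 - H y ||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x

rng = np.random.default_rng(0)
n = 40
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # clustered spectrum
b = rng.standard_normal(n)
x = gmres_restarted(A, b)
print(np.linalg.norm(A @ x - b))   # residual below the 1e-10 tolerance
```

Restarting discards the accumulated Krylov basis every m steps; the augmentation strategies surveyed above carry a few (nearly) invariant directions across restarts instead of rebuilding from scratch.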
ON THE NUMERICAL EVALUATION OF FREDHOLM DETERMINANTS
Abstract

Cited by 10 (5 self)
Abstract. Some significant quantities in mathematics and physics are most naturally expressed as the Fredholm determinant of an integral operator, most notably many of the distribution functions in random matrix theory. Though their numerical values are of interest, there is no systematic numerical treatment of Fredholm determinants to be found in the literature. Instead, the few numerical evaluations that are available rely on eigenfunction expansions of the operator, if expressible in terms of special functions, or on alternative analytic expressions, e.g., in terms of Painlevé transcendents, that are numerically more straightforwardly accessible and have masterfully been derived in some cases. In this paper we close the gap in the literature by studying projection methods and, above all, a simple, easily implementable, general method for the numerical evaluation of Fredholm determinants that is derived from the classical Nyström method for the solution of Fredholm equations of the second kind. Using Gauss–Legendre or Clenshaw–Curtis as the underlying quadrature rule, we prove that the approximation error essentially behaves like the quadrature error for the sections of the kernel. In particular, we get exponential convergence for analytic kernels, which are typical in random matrix theory. The application of the method to the distribution functions of the Gaussian unitary ensemble (GUE), in the bulk and the edge scaling limit, is discussed in detail. After extending the method to systems of integral operators, we evaluate the two-point correlation functions of the more recently studied Airy and Airy₁ processes. Key words: Fredholm determinant, Nyström's method, projection method, trace class operators, random
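The Nyström-type method sketched in this abstract reduces to a few lines of numpy: discretize the kernel at quadrature nodes and take an ordinary matrix determinant. The function name and the rank-one test kernel below are our own illustration (a symmetrically weighted discretization, one natural reading of the approach, is assumed).

```python
import numpy as np

def fredholm_det(K, n=50):
    """Nystrom-type approximation of det(I - K) for an integral operator
    with kernel K(x, y) on [0, 1], via Gauss-Legendre quadrature:
    det(I - K) ~ det(I - diag(sqrt(w)) [K(x_i, x_j)] diag(sqrt(w)))."""
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (t + 1.0)                        # map to [0, 1]
    w = 0.5 * w
    sw = np.sqrt(w)
    M = sw[:, None] * K(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(n) - M)

# Rank-one sanity check: for K(x, y) = x*y,
# det(I - K) = 1 - integral of x^2 over [0, 1] = 2/3
print(fredholm_det(lambda x, y: x * y))
```

For analytic kernels the quadrature error, and hence the determinant error, decays exponentially in n, which is what makes the method attractive for random matrix distributions.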
Computing Symmetric Rank-Revealing Decompositions Via Triangular Factorization
 SIAM Journal on Matrix Analysis and Applications
, 2001
Abstract

Cited by 7 (2 self)
We present a family of algorithms for computing symmetric rank-revealing VSV decompositions, based on triangular factorization of the matrix. The VSV decomposition consists of a middle symmetric matrix that reveals the numerical rank by having three blocks with small norm, plus an orthogonal matrix whose columns span approximations to the numerical range and null space. We show that for semidefinite matrices the VSV decomposition should be computed via the ULV decomposition, while for indefinite matrices it must be computed via a URV-like decomposition that involves hypernormal rotations.
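As a small illustration of what a rank-revealing symmetric decomposition exposes, the sketch below uses a full eigendecomposition A = V S Vᵀ (a valid but expensive symmetric rank-revealing decomposition) rather than the paper's triangular-factorization algorithms; the matrix construction is our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3
# Symmetric matrix of numerical rank 3: a strong rank-3 part plus a
# tiny symmetric perturbation.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
B = Q * np.array([3.0, 2.0, 1.0])           # columns scaled by 3, 2, 1
E = rng.standard_normal((n, n))
A = B @ B.T + 1e-10 * (E + E.T)

# Eigendecomposition A = V S V^T: sorting by |eigenvalue| reveals the
# numerical rank, and the trailing eigenvectors approximate the
# numerical null space (cf. the orthogonal factor of a VSV decomposition).
s, V = np.linalg.eigh(A)
order = np.argsort(-np.abs(s))
s, V = s[order], V[:, order]
num_rank = int(np.sum(np.abs(s) > 1e-6 * np.abs(s[0])))
print(num_rank)                                  # 3
print(np.linalg.norm(A @ V[:, num_rank:]))       # tiny: approximate null space
```

The paper's point is that ULV/URV-type triangular factorizations recover the same rank and subspace information at lower cost than a full eigendecomposition.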
On the Numerical Evaluation of Distributions in Random Matrix Theory: A Review
, 2010
Abstract

Cited by 6 (0 self)
Abstract. In this paper we review and compare the numerical evaluation of those probability distributions in random matrix theory that are analytically represented in terms of Painlevé transcendents or Fredholm determinants. Concrete examples for the Gaussian and Laguerre (Wishart) β-ensembles and their various scaling limits are discussed. We argue that the numerical approximation of Fredholm determinants is the conceptually simpler and more efficient of the two approaches, easily generalized to the computation of joint probabilities and correlations. Having the means for extensive numerical explorations at hand, we discovered new and surprising determinantal formulae for the k-th largest (or smallest) level in the edge scaling limits of the Orthogonal and Symplectic Ensembles; formulae that in turn led to improved numerical evaluations. The paper comes with a toolbox of Matlab functions that facilitates further mathematical experiments by the reader.
Additive Preconditioning, Eigenspaces, and the Inverse Iteration ∗
Abstract

Cited by 5 (4 self)
We incorporate our recent preconditioning techniques into the classical inverse power (Rayleigh quotient) iteration for computing matrix eigenvectors. Every loop of this iteration essentially amounts to solving an ill-conditioned linear system of equations. Due to our modification we solve a well-conditioned linear system instead. We prove that this modification preserves local quadratic convergence, show experimentally that fast global convergence is preserved as well, and obtain similar results for higher-order inverse iteration, covering the cases of multiple and clustered eigenvalues.
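For reference, here is the unmodified classical scheme the abstract starts from (not the paper's well-conditioned variant): each step solves a shifted system that becomes ill-conditioned as the shift approaches an eigenvalue. Function name, shift, and test matrix are our own.

```python
import numpy as np

def inverse_iteration(A, shift, iters=20):
    """Classical shifted inverse power iteration: repeatedly solve
    (A - shift*I) y = x and normalize. The solve is ill-conditioned when
    shift is close to an eigenvalue -- exactly the regime where the
    iteration converges fastest, and where the paper's modification
    replaces the solve with a well-conditioned one."""
    n = A.shape[0]
    rng = np.random.default_rng(1)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.linalg.solve(A - shift * np.eye(n), x)
        x = y / np.linalg.norm(y)
    lam = x @ A @ x          # Rayleigh quotient estimate of the eigenvalue
    return lam, x

A = np.diag([1.0, 2.0, 5.0])
lam, x = inverse_iteration(A, shift=4.99)
print(lam)   # converges to the eigenvalue nearest the shift, here 5.0
```

The convergence factor per step is the ratio of the distance from the shift to the nearest and second-nearest eigenvalues, which is why accurate shifts (and hence ill-conditioned solves) are so effective.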
Solving Linear Systems of Equations with Randomization, Augmentation and Aggregation
 Linear Algebra and Its Applications
Abstract

Cited by 5 (5 self)
Seeking a basis for the null space of a rectangular and possibly rank-deficient and ill-conditioned matrix, we apply randomization, augmentation, and aggregation to reduce our task to computations with well-conditioned matrices of full rank. Our algorithms avoid pivoting and orthogonalization, preserve matrix structure and sparseness, and in the case of an ill-conditioned input perform only a small part of the computations with high accuracy. We extend the algorithms to the solution of nonhomogeneous nonsingular ill-conditioned linear systems of equations whose matrices have small numerical nullities. Our estimates and experiments show dramatic progress versus the customary matrix algorithms where the input matrices are rank-deficient or ill-conditioned. Our study can be of independent technical interest: we extend the known results on conditioning of random matrices to randomized preconditioning, estimate the condition numbers of randomly augmented matrices, and link augmentation to aggregation as well as homogeneous to nonhomogeneous linear systems of equations.
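For the simplest case of nullity one, the augmentation idea can be sketched as follows: bordering a singular matrix with random vectors gives a nonsingular (and with high probability well-conditioned) matrix, and one solve against the last unit vector recovers a null vector. The construction shown is our illustration, not the paper's full algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# Symmetric singular matrix of nullity 1 in a random orthogonal basis
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag([1., 2., 3., 4., 5., 6., 7., 0.]) @ Q.T

# Randomized augmentation: border A with random u, v. The bordered
# matrix K is nonsingular w.h.p., and solving K [y; t] = [0; 1] forces
# A y = -t u with v^T y = 1; since u is (w.h.p.) outside range(A),
# the solution has t = 0 and y a null vector of A.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
K = np.block([[A, u[:, None]],
              [v[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.r_[np.zeros(n), 1.0])
y, t = sol[:n], sol[n]
print(np.linalg.norm(A @ y) / np.linalg.norm(y))   # ~ machine epsilon
```

Note that no pivoted or orthogonal factorization of the singular A is ever needed; all the numerical work happens on the well-conditioned bordered matrix K.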
Randomized Preprocessing of Homogeneous Linear Systems
, 2009
Abstract

Cited by 4 (4 self)
Our randomized preprocessing enables pivoting-free and orthogonalization-free solution of homogeneous linear systems of equations. In the case of Toeplitz inputs, we decrease the solution time from quadratic to nearly linear, and our tests show a dramatic decrease of the CPU time as well. We prove numerical stability of our randomized algorithms and extend our approach to solving nonsingular linear systems, inversion and generalized (Moore–Penrose) inversion of general and structured matrices by means of Newton's iteration, approximation of a matrix by a nearby matrix that has a smaller rank or a smaller displacement rank, matrix eigensolving, and root-finding for polynomial and secular equations. Some byproducts and extensions of our study can be of independent technical interest, e.g., our extensions of the Sherman–Morrison–Woodbury formula for matrix inversion, our estimates for the condition number of randomized matrix products, preprocessing via augmentation, and the link of preprocessing to aggregation. Key words: Linear systems of equations, Randomized preprocessing, Conditioning
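The classical Sherman–Morrison–Woodbury formula, which the abstract says the paper extends, can be checked numerically in a few lines (the test matrices are our own, chosen well-conditioned so all inverses exist):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well conditioned
U = 0.1 * rng.standard_normal((n, k))               # low-rank update factors
V = 0.1 * rng.standard_normal((k, n))

# Sherman-Morrison-Woodbury:
#   (A + U V)^(-1) = A^(-1) - A^(-1) U (I_k + V A^(-1) U)^(-1) V A^(-1)
# The correction only requires inverting the small k-by-k capacitance matrix.
Ainv = np.linalg.inv(A)
smw = Ainv - Ainv @ U @ np.linalg.solve(np.eye(k) + V @ Ainv @ U, V @ Ainv)
print(np.allclose(smw, np.linalg.inv(A + U @ V)))   # True
```

This is why low-rank randomized modifications are cheap to undo: a rank-k change to A costs only a k-by-k solve on top of work with A itself.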
Solving Linear Systems with Randomized Augmentation ∗
Abstract

Cited by 3 (3 self)
Our randomized preprocessing of a matrix by means of augmentation counters its degeneracy and ill-conditioning, uses neither pivoting nor orthogonalization, readily preserves matrix structure and sparseness, and leads to a dramatic speedup of the solution of general and structured linear systems of equations in terms of both estimated arithmetic time and observed CPU time.
Randomized and Derandomized Matrix Computations ∗
Abstract

Cited by 2 (2 self)
We propose new techniques and algorithms that advance the known methods for a number of fundamental problems of matrix computations. These problems include approximation of leading and trailing singular spaces of a matrix, with extensions to derandomized approximation by low-rank matrices and by structured matrices, support for numerically safe Gaussian elimination with no pivoting, and devising effective preconditioners that cover the general class of matrices having a small numerical nullity or a small numerical rank. Our technical novelties include randomized additive preconditioning and augmentation for general and structured matrices, derandomization, and a dual extension of the Sherman–Morrison–Woodbury formula. Our extensive tests demonstrate the effectiveness of the proposed algorithms.
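Approximation of the leading singular space by randomized sampling, in the spirit of (though not identical to) the techniques this abstract describes, can be sketched with the standard randomized range finder; the dimensions and oversampling choice below are our own.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 100, 60, 5
# Test matrix of exact rank r
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Randomized range finder: push a random Gaussian test matrix through A,
# then orthonormalize. With slight oversampling, the columns of Q capture
# the leading singular space of A with high probability.
G = rng.standard_normal((n, r + 2))
Q, _ = np.linalg.qr(A @ G)
print(np.linalg.norm(A - Q @ (Q.T @ A)))   # ~ 0 for an exactly rank-r matrix
```

For matrices that are only numerically low-rank, the residual of the projection Q Qᵀ A is on the order of the first neglected singular value, which is what makes this sampling step a useful building block for low-rank approximation.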