Results 1–10 of 33
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 86 (12 self)
Abstract:
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
Bounds for the entries of matrix functions with applications to preconditioning
BIT, 1999
Cited by 44 (15 self)
Abstract:
Let A be a symmetric matrix and let f be a smooth function defined on an interval containing the spectrum of A. Generalizing a well-known result of Demko, Moss and Smith on the decay of the inverse, we show that when A is banded, the entries of f(A) are bounded in an exponentially decaying manner away from the main diagonal. Bounds obtained by representing the entries of f(A) in terms of Riemann–Stieltjes integrals and by approximating such integrals by Gaussian quadrature rules are also considered. Applications of these bounds to preconditioning are suggested and illustrated by a few numerical examples.
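The decay behavior described in this abstract is easy to observe numerically. A minimal sketch (not from the paper; the tridiagonal test matrix and the choice f(x) = 1/x are illustrative assumptions), checking that the entries of the inverse of a banded symmetric positive definite matrix decay geometrically away from the diagonal:

```python
import numpy as np

# Illustrative sketch (not the paper's example): observe the exponential
# decay of the entries of f(A) = A^{-1} for a banded SPD matrix, as in
# the Demko-Moss-Smith-type bounds discussed in the abstract.
n = 50
# Tridiagonal SPD test matrix (diagonally dominant, bandwidth 1)
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.linalg.inv(A)

# Entries of the first row of A^{-1} shrink geometrically with the
# distance from the diagonal.
row = np.abs(B[0, :])
ratios = row[1:10] / row[0:9]   # successive decay factors, all well below 1
print(row[:5])
print(ratios)
```

For this matrix the observed decay factor is roughly 2 − √3 ≈ 0.27 per step away from the diagonal, matching the bound's exponential form.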
Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement
IEEE Trans. Image Processing, 2001
Cited by 41 (7 self)
Abstract:
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method. Index Terms: blind restoration, blur identification, generalized cross-validation, quadrature rules, super-resolution.
Some large-scale matrix computation problems
1996
Cited by 35 (4 self)
Abstract:
There are numerous applications in physics, statistics and electrical circuit simulation where it is required to bound entries and the trace of the inverse and the determinant of a large sparse matrix. All these computational tasks are related to the central mathematical problem studied in this paper, namely, bounding the bilinear form u^T f(A)v for a given matrix A and vectors u and v, where f is a given smooth function defined on the spectrum of A. We will study a practical numerical algorithm for bounding the bilinear form, where the matrix A is only referenced through matrix-vector multiplications. A Monte Carlo method is also presented to efficiently estimate the trace of the inverse and the determinant of a large sparse matrix.
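The Monte Carlo trace estimation mentioned in this abstract can be sketched along the following lines: a Hutchinson-style estimator with random ±1 probe vectors, for which z^T A^{-1} z is an unbiased estimate of tr(A^{-1}). The dense solve and the tridiagonal test matrix are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hutchinson-style Monte Carlo estimate of tr(A^{-1}) (illustrative
# sketch, not the paper's exact algorithm). Each Rademacher probe z
# gives an unbiased sample z^T A^{-1} z of the trace.
rng = np.random.default_rng(0)
n = 100
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD test matrix

exact = np.trace(np.linalg.inv(A))

num_probes = 200
est = 0.0
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe vector
    est += z @ np.linalg.solve(A, z)          # z^T A^{-1} z via one solve
est /= num_probes

print(exact, est)
```

In a large-scale setting the solve would be replaced by an iterative method that touches A only through matrix-vector products, matching the access model described in the abstract.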
A Stopping Criterion for the Conjugate Gradient Algorithm in a Finite Element Method Framework
2002
Cited by 33 (10 self)
Abstract:
The Conjugate Gradient method has always been successfully used in solving the symmetric and positive definite systems obtained by the finite element approximation of self-adjoint elliptic partial differential equations. Taking into account recent results by Golub and Meurant (1997), Meurant (1997), Meurant (1999a), and Strakos and Tichy (2002), which make it possible to approximate the energy norm of the error during the conjugate gradient iterative process, we adapt the stopping criterion introduced by Arioli, Noulard and Russo (2001). Moreover, we show that the use of efficient preconditioners does not require changing the energy norm used by the stopping criterion. Finally, we present the results of several numerical tests that experimentally validate the effectiveness of our stopping criterion.
Bounds for the Trace of the Inverse and the Determinant of Symmetric Positive Definite Matrices
1996
Cited by 32 (2 self)
Abstract:
… this paper, we focus on deriving lower and upper bounds for the quantities tr(A …
Some Large Scale Matrix Computation Problems
J. Comput. Appl. Math.
Cited by 25 (6 self)
Abstract:
The central mathematical problem of this report is to bound the quantity u^T f(A)v, where A is a given n × n real matrix, u and v are given n-vectors, and f is a given smooth function. Estimating the entries and the trace of the inverse of a matrix and the determinant of a matrix can be classified as such problems. There are a number of interesting applications for such matrix computation problems. The applications in fractal and lattice Quantum Chromodynamics (QCD) are our new motivation for studying such problems. In these applications, the matrices involved are sparse and could be up to the order of millions. It is still a challenging problem to efficiently solve such large matrix computation problems on today's supercomputers.
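The Lanczos/Gauss-quadrature connection underlying such bounds can be sketched as follows. This is a simplified illustration, not the report's algorithm: k Lanczos steps started from u produce a small tridiagonal matrix T_k, and ||u||² times the (1,1) entry of f(T_k) is the k-point Gauss quadrature estimate of u^T f(A)u. The test matrix and f(x) = 1/x are illustrative assumptions:

```python
import numpy as np

# Sketch: estimate u^T f(A) u by k steps of the Lanczos process.
# The (1,1) entry of f(T_k), scaled by ||u||^2, is the Gauss
# quadrature approximation of the bilinear form.
def lanczos_quadrature(A, u, f, k):
    n = len(u)
    beta = np.linalg.norm(u)
    q_prev = np.zeros(n)
    q = u / beta
    alphas, betas = [], []
    b = 0.0
    for _ in range(k):
        w = A @ q - b * q_prev        # three-term Lanczos recurrence
        a = q @ w
        w -= a * q
        b = np.linalg.norm(w)
        alphas.append(a)
        betas.append(b)
        q_prev, q = q, w / b
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    # f(T_k) via eigendecomposition of the small tridiagonal matrix
    evals, V = np.linalg.eigh(T)
    fT00 = (V[0, :] ** 2 * f(evals)).sum()
    return beta ** 2 * fT00

rng = np.random.default_rng(1)
n = 80
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD test matrix
u = rng.standard_normal(n)

est = lanczos_quadrature(A, u, lambda x: 1.0 / x, k=10)   # u^T A^{-1} u
exact = u @ np.linalg.solve(A, u)
print(exact, est)
```

Note that A enters only through matrix-vector products, which is exactly the access model the report assumes for large sparse matrices.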
A Probing Method for Computing the Diagonal of the Matrix Inverse
2010
Cited by 18 (1 self)
Abstract:
The computation of some entries of a matrix inverse arises in several important applications in practice. This paper presents a probing method for determining the diagonal of the inverse of a sparse matrix in the common situation when its inverse exhibits a decay property, i.e., when many of the entries of the inverse are small. A few simple properties of the inverse suggest a way to determine effective probing vectors based on standard graph theory results. An iterative method is then applied to solve the resulting sequence of linear systems, from which the diagonal of the matrix inverse is extracted. Results of numerical experiments are provided to demonstrate the effectiveness of the probing method.
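A minimal sketch of the probing idea for a matrix whose inverse decays. The tridiagonal test matrix, the number of colors c, and the dense solves are illustrative assumptions; the paper determines the probing vectors from a graph coloring and uses an iterative solver. With c probe vectors whose 1-entries are c positions apart, the i-th diagonal entry of A^{-1} is read off from the solve for the probe of i's color, contaminated only by entries at distance ≥ c, which are negligible when the inverse decays:

```python
import numpy as np

# Probing sketch (illustrative assumptions): recover diag(A^{-1}) from
# c solves when A^{-1} decays away from the diagonal.
n = 100
c = 12  # number of colors / probe vectors (chosen larger than the decay length)
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

diag_est = np.zeros(n)
for j in range(c):
    v = np.zeros(n)
    v[j::c] = 1.0                # probe: ones at positions congruent to j mod c
    x = np.linalg.solve(A, v)    # x = A^{-1} v
    diag_est[j::c] = x[j::c]     # diagonal entries plus tiny decayed terms

diag_exact = np.diag(np.linalg.inv(A))
print(np.max(np.abs(diag_est - diag_exact)))
```

The cost is c solves instead of n, which is the point of the method when the decay length is small compared to the matrix dimension.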
On error estimation in the conjugate gradient method and why it works in finite precision computations
University of Colorado at Denver, Department of Mathematics, 2002
Cited by 12 (0 self)
Abstract:
In their paper published in 1952, Hestenes and Stiefel considered the conjugate gradient (CG) method an iterative method which terminates in at most n steps if no rounding errors are encountered [24, p. 410]. They also proved identities for the A-norm and the Euclidean norm of the error which could justify the stopping criteria [24, Theorems 6.1 and 6.3, p. 416]. The idea of estimating errors in iterative methods, and in the CG method in particular, was independently (of these results) promoted by Golub; the problem was linked to Gauss quadrature and to its modifications [7], [8]. A comprehensive summary of this approach was given in [15], [16]. During the last decade several papers developed error bounds algebraically without using Gauss quadrature. However, we have not found any reference to the corresponding results in [24]. All the existing bounds assume exact arithmetic. Still, they seem to be in striking agreement with finite precision numerical experiments, though in finite precision computations they estimate quantities which can be orders of magnitude different from their exact precision counterparts! For the lower bounds obtained from Gauss quadrature formulas this nontrivial phenomenon was explained, with some limitations, in [17]. In our paper we show that the lower bound for the A-norm of the error based on Gauss quadrature ([15], [17], [16]) is mathematically equivalent to the original formula of Hestenes and Stiefel [24]. We compare existing bounds and demonstrate the necessity of a proper rounding error analysis: we present an example of a well-known bound which can fail in finite precision arithmetic. We analyse the simplest bound based on [24, Theorem 6.1], and prove that it is numerically stable. Though we concentrate mostly on the lower bound for the A-norm of the error, we also describe an estimate for the Euclidean norm of the error based on [24, Theorem 6.3]. Our results are illustrated by numerical experiments.
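The Hestenes–Stiefel identity discussed in this abstract, ||x − x_k||_A² = Σ_{i≥k} γ_i ||r_i||², yields a computable lower bound when the sum is truncated after d further iterations. A simplified exact-arithmetic sketch (the test problem and the delay d are illustrative assumptions; the paper's contribution is precisely the finite precision analysis this sketch does not attempt):

```python
import numpy as np

# CG with the Hestenes-Stiefel lower bound for the squared A-norm of the
# error: truncating the identity's sum after d extra terms gives
# bound_k = sum_{i=k}^{k+d-1} gamma_i ||r_i||^2  <=  ||x* - x_k||_A^2.
def cg_with_error_bound(A, b, num_iters, d):
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    gammas, rnorms2 = [], []
    xs = [x.copy()]
    for _ in range(num_iters):
        Ap = A @ p
        gamma = (r @ r) / (p @ Ap)       # CG step length gamma_i
        gammas.append(gamma)
        rnorms2.append(r @ r)            # ||r_i||^2
        x = x + gamma * p
        r_new = r - gamma * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        xs.append(x.copy())
    # Truncated-sum lower bounds for the squared A-norm of the error
    bounds = [sum(gammas[k + i] * rnorms2[k + i] for i in range(d))
              for k in range(num_iters - d)]
    return xs, bounds

n = 60
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD test matrix
b = np.ones(n)
x_star = np.linalg.solve(A, b)

xs, bounds = cg_with_error_bound(A, b, num_iters=25, d=4)
for k in (0, 5, 10):
    err2 = (x_star - xs[k]) @ A @ (x_star - xs[k])  # true squared A-norm error
    print(k, err2, bounds[k])
```

The estimate lags d iterations behind the current iterate, which is the usual price for this family of bounds.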
Computing partial eigenvalue sum in electronic structure calculations
1998
Cited by 10 (1 self)
Abstract:
In this paper, we present an algorithm for computing a partial sum of eigenvalues of a large symmetric positive definite matrix pair. We show that this computational task is intimately connected to computing a bilinear form u^T f(A)u for a properly defined matrix A, a vector u and a function f(·). Compared to existing techniques which compute individual eigenvalues and then sum them up, the new algorithm is generally less accurate, but requires significantly less memory and CPU time. In the application of electronic structure calculations in molecular dynamics, the new algorithm has achieved a speedup factor of 2 for small size problems to 20 for large size problems. Relative accuracy within 0.1% to 2% is satisfactory. Previously intractable large size problems have been solved.