Results 1–10 of 175
The University of Florida sparse matrix collection
NA Digest, 1997
Abstract

Cited by 322 (15 self)
The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, networks and graphs, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, and power networks). The collection meets a vital need that artificially generated matrices cannot meet, and is widely used by the sparse matrix algorithms community for the development and performance evaluation of sparse matrix algorithms. The collection includes software for accessing and managing it from MATLAB, Fortran, and C.
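The collection distributes matrices in Matrix Market (*.mtx) format, which SciPy can read directly. A minimal sketch, using a tiny locally written matrix as a stand-in for a downloaded collection file (the filename is illustrative):

```python
# Round-trip a small sparse matrix through Matrix Market format; the
# same mmread() call works on any *.mtx file downloaded from the
# collection. The path here is a temporary stand-in, not a real entry.
import os
import tempfile

from scipy.io import mmread, mmwrite
from scipy.sparse import csr_matrix

A = csr_matrix([[4.0, 1.0, 0.0],
                [1.0, 4.0, 1.0],
                [0.0, 1.0, 4.0]])

path = os.path.join(tempfile.mkdtemp(), "example.mtx")
mmwrite(path, A)              # stand-in for a file fetched from the collection
B = mmread(path).tocsr()      # mmread returns COO; convert for fast row access

print(B.shape, B.nnz)         # basic structural statistics of the matrix
```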
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Abstract

Cited by 118 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
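Incomplete factorization, one of the algebraic techniques the survey covers, is available off the shelf in SciPy. A hedged sketch (the test matrix and drop tolerance are illustrative, not from the paper):

```python
# Incomplete-LU preconditioning of a nonsymmetric tridiagonal system,
# wrapped as a LinearOperator and passed to GMRES.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 200
# Illustrative diagonally dominant, nonsymmetric test matrix.
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)                  # incomplete factorization of A
M = LinearOperator((n, n), matvec=ilu.solve)   # apply M ~ A^{-1} per iteration

x, info = gmres(A, b, M=M)                     # info == 0 on convergence
print(info == 0)
```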
A Priori Sparsity Patterns For Parallel Sparse Approximate Inverse Preconditioners
1998
Abstract

Cited by 56 (6 self)
Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse, or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics about the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality.
Key words: preconditioned iterative methods, sparse approximate inverses, graph theory, parallel computing.
AMS subject classifications: 65F10, ...
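The pattern construction itself is cheap to sketch: threshold (sparsify) the matrix, then take the nonzero pattern of a power of the sparsified matrix. The threshold, power, and test matrix below are illustrative choices, not the paper's:

```python
# A-priori "PSM" pattern: sparsify A by dropping small entries, then use
# the nonzero pattern of a power of the sparsified matrix S.
import numpy as np
from scipy.sparse import identity
from scipy.sparse import random as sprandom

A = sprandom(50, 50, density=0.08, random_state=0, format="csr")
A = (A + 10.0 * identity(50, format="csr")).tocsr()  # ensure a strong diagonal

tau = 0.5                                 # sparsification threshold
S = A.copy()
S.data[np.abs(S.data) < tau] = 0.0
S.eliminate_zeros()

P = (S != 0).astype(np.int8)              # boolean pattern of S
pattern2 = (P @ P) != 0                   # pattern of S^2: the level-2 PSM pattern

print(P.nnz, pattern2.nnz)                # higher powers give denser patterns
```

Because the diagonal survives the threshold, the pattern of S² structurally contains that of S, so raising the power only ever enlarges the candidate pattern.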
The fast multipole method: numerical implementation
J. Comput. Phys., 2000
Abstract

Cited by 50 (0 self)
We study integral methods applied to the resolution of the Maxwell equations, where the linear system is solved using an iterative method which requires only matrix–vector products. The fast multipole method (FMM) is one of the most efficient methods used to perform matrix–vector products and accelerate the resolution of the linear system. A problem involving N degrees of freedom may be solved in C N_iter N log N floating-point operations, where C is a constant depending on the implementation of the method and N_iter is the number of iterations. In this article several techniques allowing one to reduce the constant C are analyzed. This reduction implies a lower total CPU time and a larger range of application of the FMM. In particular, new interpolation and anterpolation schemes are proposed which greatly improve on previous algorithms. Several numerical tests are also described. These confirm the efficiency and the theoretical ...
Robust approximate inverse preconditioning for the conjugate gradient method
SIAM J. Sci. Comput., 2000
Abstract

Cited by 48 (11 self)
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.
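The "diagonally compensated reduction" mentioned above can be sketched in a few lines: selected small off-diagonal entries are dropped and their values folded into the diagonal, so row sums (and hence the matrix's action on constant vectors) are preserved. The threshold and test matrix are illustrative, not the paper's:

```python
# Diagonally compensated reduction: drop off-diagonal entries below a
# threshold and add each dropped value to its row's diagonal entry.
import numpy as np

A = np.array([[4.00, -1.00, -0.05],
              [-1.00, 4.00, -1.00],
              [-0.05, -1.00, 4.00]])

tau = 0.1                                  # reduction threshold (illustrative)
B = A.copy()
n = len(A)
for i in range(n):
    for j in range(n):
        if i != j and abs(B[i, j]) < tau:
            B[i, i] += B[i, j]             # compensate on the diagonal
            B[i, j] = 0.0                  # drop the off-diagonal entry

print(np.allclose(B.sum(axis=1), A.sum(axis=1)))   # row sums preserved
```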
Sparse Approximate Inverse Preconditioning For Dense Linear Systems Arising In Computational Electromagnetics
Numerical Algorithms, 1997
Abstract

Cited by 48 (19 self)
We investigate the use of sparse approximate inverse preconditioners for the iterative solution of linear systems with dense complex coefficient matrices arising from industrial electromagnetic problems. An approximate inverse is computed via a Frobenius norm approach with a prescribed nonzero pattern. Some strategies for determining the nonzero pattern of an approximate inverse are described. The results of numerical experiments suggest that sparse approximate inverse preconditioning is a viable approach for the solution of large-scale dense linear systems on parallel computers.
Key words: dense linear systems, preconditioning, sparse approximate inverses, complex symmetric matrices, scattering calculations, Krylov subspace methods, parallel computing.
AMS(MOS) subject classifications: 65F10, 65F50, 65R20, 65N38, 78-08, 78A50, 78A55.
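The Frobenius norm approach with a prescribed pattern has a convenient structure: minimizing ||AM − I||_F decouples into one small least-squares problem per column of M, restricted to that column's allowed nonzeros. A minimal sketch using a dense test matrix and the pattern of A itself as the prescribed pattern (both illustrative choices):

```python
# Frobenius-norm sparse approximate inverse with a prescribed pattern:
# each column m_j of M solves min || A m_j - e_j || over its allowed
# nonzero positions, reduced to the rows those positions actually touch.
import numpy as np
from scipy.sparse import diags

n = 40
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)).toarray()

M = np.zeros((n, n))
pattern = A != 0                       # prescribed pattern: that of A itself
for j in range(n):
    cols = np.flatnonzero(pattern[:, j])           # allowed entries in column j
    rows = np.flatnonzero(A[:, cols].any(axis=1))  # rows touched by those columns
    rhs = (rows == j).astype(float)                # j-th unit vector, restricted
    mj, *_ = np.linalg.lstsq(A[np.ix_(rows, cols)], rhs, rcond=None)
    M[cols, j] = mj

resid = np.linalg.norm(A @ M - np.eye(n))   # Frobenius norm of A M - I
print(resid < np.sqrt(n))                   # → True: beats the trivial M = 0
```

Since the columns are independent, this loop is embarrassingly parallel, which is what makes the approach attractive on parallel computers.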
Preconditioning highly indefinite and nonsymmetric matrices
SIAM J. Sci. Comput., 2000
Abstract

Cited by 42 (4 self)
Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and ...
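The structural half of this idea (a permutation making the diagonal nonzero) can be sketched with SciPy's bipartite matching. Note the simplification: SciPy's routine maximizes matching cardinality only, whereas the weighted algorithms referenced above also maximize the product of the diagonal magnitudes. The test matrix is illustrative:

```python
# Permute columns so every diagonal entry is structurally nonzero,
# via a maximum bipartite matching on the sparsity pattern.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

A = csr_matrix(np.array([[0.0, 3.0, 0.0],
                         [2.0, 0.0, 0.0],
                         [0.0, 1.0, 5.0]]))   # zero diagonal as given

perm = maximum_bipartite_matching(A, perm_type='column')
B = A[:, perm]                                # column-permuted matrix

print(B.diagonal())                           # → [3. 2. 5.]
```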
An efficient TV-L1 algorithm for deblurring multichannel images corrupted by impulsive noise
SIAM J. Sci. Comput., 2009
Abstract

Cited by 35 (7 self)
We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.
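A key building block in quadratic-penalty splittings of this kind is that the L1 subproblems have a closed-form solution by soft-thresholding (shrinkage). A minimal sketch of that one step, not of the full deblurring algorithm:

```python
# Soft-thresholding: the elementwise closed-form minimizer of
#   t*|z| + 0.5*(z - v)^2,
# which is how the L1 terms are handled inside the alternating scheme.
import numpy as np

def shrink(v, t):
    """Shrink each entry of v toward zero by t, clamping at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
print(shrink(v, 1.0))    # → [-1.  0.  0.  0.  2.]
```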
Wavelet Sparse Approximate Inverse Preconditioners
BIT, 1997
Abstract

Cited by 33 (5 self)
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle [21] and Chow and Saad [11] also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is ...
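The change-of-basis idea can be made concrete with the simplest orthonormal wavelet, the Haar basis, written out explicitly for n = 4; the 1-D Laplacian is an illustrative matrix whose inverse is dense but piecewise smooth:

```python
# Represent the inverse of a 1-D Laplacian in an orthonormal Haar basis:
# B = W Ainv W^T is the same operator expressed in wavelet coordinates,
# where smooth structure concentrates into few large coefficients.
import numpy as np

s = np.sqrt(2.0)
W = 0.5 * np.array([[1.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, -1.0, -1.0],
                    [s, -s, 0.0, 0.0],
                    [0.0, 0.0, s, -s]])   # orthonormal Haar matrix, n = 4

A = np.diag([2.0] * 4) - np.diag([1.0] * 3, 1) - np.diag([1.0] * 3, -1)
Ainv = np.linalg.inv(A)                   # dense, but piecewise smooth

B = W @ Ainv @ W.T                        # the inverse in the wavelet basis

print(np.allclose(W @ W.T, np.eye(4)))    # → True: W is orthogonal
```

Because W is orthogonal, the transform is exactly invertible (Ainv = Wᵀ B W), so sparsifying B by dropping small coefficients yields an approximate inverse in the original basis with no further approximation.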