Results 1–10 of 241
Parallel Preconditioning with Sparse Approximate Inverses
SIAM J. Sci. Comput., 1996
"... A parallel preconditioner is presented for the solution of general sparse linear systems of equations. A sparse approximate inverse is computed explicitly, and then applied as a preconditioner to an iterative method. The computation of the preconditioner is inherently parallel, and its application only requires a matrix-vector product. The sparsity pattern of the approximate inverse is not imposed a priori but captured automatically. This keeps the amount of work and the number of nonzero entries in the preconditioner to a minimum. Rigorous bounds on the clustering of the eigenvalues ..."
Cited by 226 (10 self)
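The column-by-column least-squares construction this entry describes can be sketched in a few lines. The sketch below fixes the sparsity pattern of each column of M to that of the corresponding column of A, a common simple choice; the paper's method instead captures the pattern adaptively, and the dense test matrix here is purely illustrative.

```python
import numpy as np

def spai_fixed_pattern(A):
    """Sparse-approximate-inverse sketch: minimise ||A M - I||_F one
    column at a time, restricting each column of M to the sparsity
    pattern of the corresponding column of A (a fixed pattern; the
    paper selects the pattern adaptively)."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        pattern = np.nonzero(A[:, j])[0]     # allowed nonzero rows of M[:, j]
        Asub = A[:, pattern]                 # columns of A touched by the pattern
        e = np.zeros(n)
        e[j] = 1.0
        coeffs, *_ = np.linalg.lstsq(Asub, e, rcond=None)
        M[pattern, j] = coeffs
    return M

A = np.array([[4., 1., 0.],
              [1., 4., 1.],
              [0., 1., 4.]])
M = spai_fixed_pattern(A)
# A @ M is much closer to the identity than A itself, which is what
# makes M useful as a preconditioner for an iterative method.
```

Each least-squares problem is independent, which is the source of the parallelism the abstract mentions.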
Coil sensitivity encoding for fast MRI. In:
Proceedings of the ISMRM 6th Annual Meeting, 1998
"... New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementa ..."
Cited by 193 (3 self)
... indicates the transposed complex conjugate, and Ψ is the n_C × n_C receiver noise matrix (see Appendix A), which describes the levels and correlation of noise in the receiver channels. Using the unfolding matrix, signal separation is performed by ... where the resulting vector v has length n_P and lists ...
Minimizing Communication in Sparse Matrix Solvers
"... Data communication within the memory system of a single processor node and between multiple nodes in a system is the bottleneck in many iterative sparse matrix solvers like CG and GMRES. Here k iterations of a conventional implementation perform k sparsematrixvectormultiplications and Ω(k) vecto ..."
Abstract

Cited by 36 (10 self)
 Add to MetaCart
(k) vector operations like dot products, resulting in communication that grows by a factor of Ω(k) in both the memory and network. By reorganizing the sparsematrix kernel to compute a set of matrixvector products at once and reorganizing the rest of the algorithm accordingly, we can perform k iterations
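The reorganised kernel this entry refers to is often called a matrix-powers kernel: compute the whole block [x, Ax, ..., A^k x] together instead of one product per iteration. A minimal serial sketch of the interface follows; the communication savings only materialise in a blocked or parallel implementation, so this shows the kernel's contract, not the optimisation itself.

```python
import numpy as np
from scipy.sparse import diags

def matrix_powers(A, x, k):
    """Matrix-powers kernel sketch: return the Krylov block
    [x, A x, A^2 x, ..., A^k x] as columns. A communication-avoiding
    solver computes this block in one pass, replacing k separate
    SpMV + dot-product rounds with one batch of work per k iterations."""
    V = np.empty((x.size, k + 1))
    V[:, 0] = x
    for j in range(k):
        V[:, j + 1] = A @ V[:, j]
    return V

# Small 1-D Laplacian as a stand-in sparse matrix.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(6, 6)).tocsr()
x = np.ones(6)
V = matrix_powers(A, x, 3)   # columns span a 4-dimensional Krylov space
```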
Sparse Matrix Computations on Parallel Processor Arrays
SIAM J. Sci. Comput., 1992
"... We investigate the balancing of distributed compressed storage of large sparse matrices on a massively parallel computer. For fast computation of matrix-vector and matrix-matrix products on a rectangular processor array with efficient communications along its rows and columns we require that the non ..."
Cited by 35 (0 self)
A Parallel GMRES Version For General Sparse Matrices
Electronic Transactions on Numerical Analysis, 1995
"... This paper describes the implementation of a parallel variant of GMRES on Paragon. This variant builds an orthonormal Krylov basis in two steps: it first computes a Newton basis, then orthogonalises it. The first step requires matrix-vector products with a general sparse unsymmetric matrix, and the second step is a QR factorisation of a rectangular matrix with few long vectors. The algorithm has been implemented for a distributed-memory parallel computer. The distributed sparse matrix-vector product avoids global communications thanks to the initial setup of the communication pattern. The QR ..."
Cited by 19 (1 self)
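The two-step basis construction this abstract describes can be sketched as follows. The function name, test matrix, and shift values are illustrative assumptions (the paper derives its shifts from Ritz values); the point is that step one uses only matrix-vector products and scalings, deferring all orthogonalisation to a single QR of a few long vectors.

```python
import numpy as np

def newton_krylov_basis(A, v, shifts):
    """Two-step Newton-basis sketch: first generate
    v, (A - s1 I)v, (A - s2 I)(A - s1 I)v, ... with plain matrix-vector
    products (no inner products between basis vectors, hence no global
    reductions for orthogonalisation), then orthonormalise the whole
    block afterwards with one tall-skinny QR factorisation."""
    W = np.empty((v.size, len(shifts) + 1))
    W[:, 0] = v / np.linalg.norm(v)
    for j, s in enumerate(shifts):
        w = A @ W[:, j] - s * W[:, j]
        W[:, j + 1] = w / np.linalg.norm(w)  # scaling only, no orthogonalisation yet
    Q, _ = np.linalg.qr(W)                   # step two: QR of the rectangular block
    return Q

A = np.diag(np.arange(1.0, 7.0)) + np.eye(6, k=1)  # small nonsymmetric test matrix
Q = newton_krylov_basis(A, np.ones(6), [1.5, 3.5])
# Q is an orthonormal basis for the same Krylov space the Arnoldi
# process would build, but constructed with far less synchronisation.
```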
A Ring-Based Parallel Oil Reservoir Simulator
"... We develop and implement a ring-based parallel 3D oil-phase homogeneous isotropic reservoir simulator and study its performance in terms of speedup as a function of problem size. The ring-based approach is shown to result in significant improvement in speedup as the problem size increases. This improvement stems from the reduction in communication costs inherent in a ring-based approach. The simulator employs a parallel conjugate gradient (CG) algorithm that we develop for solving the associated system of linear equations. The parallelization uses an MPI programming model ..."
Trident: A Scalable Architecture for Scalar, Vector, and Matrix Operations
"... Within a few years it will be possible to integrate a billion transistors on a single chip. At this integration level, we propose using a high level ISA to express parallelism to hardware instead of using a huge transistor budget to dynamically extract it. Since the fundamental data structures for a ..."
... vector and matrix register files to perform vector, matrix, and matrix-vector operations. One key point of our design is the exploitation of up to three levels of data parallelism. Another key point is the ring register files for storing vector and matrix data. The ring structure of the register files ...
Subspace Communication
2014
"... We are surrounded by electronic devices that take advantage of wireless technologies, from our computer mice, which require little amounts of information, to our cellphones, which demand increasingly higher data rates. Until today, the coexistence of such a variety of services has been guaranteed by ..."
... of the spectrum by legacy systems. Cognitive radio exhibits tremendous promise for increasing the spectral efficiency of future wireless systems. Ideally, new secondary users would have a perfect panorama of the spectrum usage, and would opportunistically communicate over the available resources without ...
Minimum Variance Estimation of a Sparse Vector Within the Linear Gaussian Model: An ...
"... Abstract — We consider minimum variance estimation within the sparse linear Gaussian model (SLGM). A sparse vector is to be estimated from a linearly transformed version embedded in Gaussian noise. Our analysis is based on the theory of reproducing kernel Hilbert spaces (RKHS). After a characterizat ..."
Parallel Multiplication of a Vector by a Kronecker Tensor Product of Matrices
Parallel Numerical Linear Algebra, 2001
"... Abstract Dierent parallel algorithms are designed and evaluated for computing the multipli cation of a vector by a Kronecker tensor product of elementary matrices The algorithms are based on an analytic computation model together with some algebraic properties of the Kronecker multi plication Fro ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract Dierent parallel algorithms are designed and evaluated for computing the multipli cation of a vector by a Kronecker tensor product of elementary matrices The algorithms are based on an analytic computation model together with some algebraic properties of the Kronecker multi plication
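The algebraic property underlying such algorithms is the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which multiplies by the Kronecker product without ever forming it. The sketch below shows only this serial kernel; the paper's contribution is the parallel organisation of the computation, not the identity itself.

```python
import numpy as np

def kron_mv(A, B, x):
    """Multiply (A kron B) by x without forming the Kronecker product,
    via (A ⊗ B) vec(X) = vec(B X Aᵀ) with column-stacked vec:
    O(nq(p + m)) work instead of O(mnpq) for A of shape (m, n) and
    B of shape (p, q)."""
    m, n = A.shape
    p, q = B.shape
    X = x.reshape(q, n, order="F")           # unflatten so that vec(X) = x
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 5))
x = rng.standard_normal(4 * 5)
# Agrees with the explicit product, at a fraction of the cost.
assert np.allclose(kron_mv(A, B, x), np.kron(A, B) @ x)
```

The reshaped matrix X also gives the natural 2-D data layout for distributing the computation across a processor array.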