Results 1–10 of 1,595,276
Parallel Numerical Linear Algebra, 1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illust ..."
Cited by 766 (23 self)
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, 2003
"... On preconditioners for mortar discretization of elliptic problems ..."
LA Numerical Linear Algebra
"... Introduction We begin with an example to motivate our study of parallel and vector algorithms for linear algebra problems. Consider solving a system of linear equations Ax = b, where A is a given dense n-by-n matrix, b is a given n-by-1 vector, and x is an n-by-1 vector of unknowns we want to compu ..."
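The dense solve of Ax = b that this introduction uses as its motivating example can be sketched in a few lines. The following pure-Python Gaussian elimination with partial pivoting is only an illustrative serial baseline (the function name is this sketch's own, not from the notes); its O(n³) cost on a dense n-by-n matrix is exactly what motivates the parallel and vector algorithms studied there.

```python
def solve_dense(A, b):
    """Solve Ax = b for a dense square A by Gaussian elimination
    with partial pivoting (illustrative serial baseline)."""
    n = len(b)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: bring the largest entry in this column onto the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

For A = [[4, 1], [1, 3]] and b = [1, 2] this returns x = [1/11, 7/11]; the three nested loops make the cubic serial cost visible.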
“Numerical Linear Algebra” course, 2006
1 Work and memory requirements in GMRES

At every step of GMRES, k+1 inner products and k+1 vector updates are carried out; this amounts to computational work of O((k+1)n) flops at iteration k. The storage requirement of GMRES is also O((k+1)n) floating-point memory places, which are needed for the matrix Vk+1. (Why do we not take into account the work for solving the (k + 1) × k least-squares problem?) Similar estimates for the computational work and storage hold for FOM. Since the work and storage requirements grow with the iteration number k, GMRES and FOM are “inefficient” methods.

2 GMRES(M): restarted GMRES

One way to keep the work and storage requirements of GMRES bounded is to restart the method after, say, M iterations, taking the obtained approximation xM as a new initial guess: x0 = xM. This is known as the GMRES(M) method (see [3], page 73 and [1], Section 2.3.4, page 19). Since GMRES minimizes the residual norm, the residual norms (the Euclidean norms) produced by GMRES form a nonincreasing sequence: ‖r0‖2 ≥ ‖r1‖2 ≥ ‖r2‖2 ≥ .... This nice property is lost in GMRES(M), which is guaranteed to converge only for some
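The restart loop described above can be sketched in pure Python. This is an illustration under stated simplifications, not the course's implementation: the small (k+2) × (k+1) least-squares problem is solved here via normal equations (a production GMRES would use Givens rotations), and all helper names are this sketch's own. The inner loop shows the k+1 inner products and k+1 vector updates per step, and the restart discards the Krylov basis so storage stays bounded at O((M+1)n).

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return dot(v, v) ** 0.5

def solve_small(N, rhs):
    """Gaussian elimination for the tiny (k+1) x (k+1) normal-equations system."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(N)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][c] * y[c] for c in range(r + 1, n))) / M[r][r]
    return y

def gmres_restarted(A, b, x0, M=5, tol=1e-10, max_restarts=100):
    """GMRES(M): run M GMRES steps, restart from the current iterate."""
    x = x0[:]
    for _ in range(max_restarts):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        beta = norm(r)
        if beta < tol:
            break
        V = [[ri / beta for ri in r]]   # Krylov basis: one n-vector per step
        H = []                          # Hessenberg matrix, stored column by column
        y = []
        for k in range(M):
            w = matvec(A, V[k])
            col = []
            for v in V:                 # modified Gram-Schmidt: k+1 inner products
                h = dot(w, v)           # and k+1 vector updates -> O((k+1)n) work
                col.append(h)
                w = [wi - h * vi for wi, vi in zip(w, v)]
            hnext = norm(w)
            col.append(hnext)
            H.append(col)
            if hnext > 1e-14:
                V.append([wi / hnext for wi in w])
            # Least squares: min_y ||beta*e1 - H y|| over the (k+2) x (k+1) H,
            # solved here via normal equations (fine for a tiny system).
            rows, cols = k + 2, k + 1
            Hm = [[H[c][r] if r < len(H[c]) else 0.0 for c in range(cols)]
                  for r in range(rows)]
            g = [beta] + [0.0] * (rows - 1)
            N = [[sum(Hm[r][i] * Hm[r][j] for r in range(rows)) for j in range(cols)]
                 for i in range(cols)]
            rhs = [sum(Hm[r][i] * g[r] for r in range(rows)) for i in range(cols)]
            y = solve_small(N, rhs)
            if hnext <= 1e-14:          # lucky breakdown: exact solution found
                break
        # Restart: x_M = x_0 + V_k y becomes the new initial guess.
        for i, yi in enumerate(y):
            x = [xi + yi * vi for xi, vi in zip(x, V[i])]
    return x
```

With M = n the loop reduces to full GMRES and finds the exact solution; with small M the residual norms are still nonincreasing within each cycle, but, as the notes warn, convergence of the restarted method is no longer guaranteed in general.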
Numerical linear algebra in the streaming model
In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009
"... We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entr ..."
Cited by 100 (18 self)
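As a toy illustration of the sketching idea in this abstract (not the paper's algorithm): maintain the small sketches SA = S·A and SB = S·B under streaming updates to individual matrix entries, then estimate AᵀB as (SA)ᵀ(SB). All function names below are hypothetical; with a k × n random-sign sketch the estimator is unbiased, and each entry update touches only one column of a sketch.

```python
import random

def sketch_update(SA, S, i, j, v):
    """Streaming update A[i][j] += v, maintained directly on the sketch SA = S @ A."""
    for r in range(len(S)):
        SA[r][j] += S[r][i] * v

def estimate_product(SA, SB):
    """Estimate A^T B from the two k-row sketches as (SA)^T (SB)."""
    k = len(SA)
    n, m = len(SA[0]), len(SB[0])
    return [[sum(SA[r][i] * SB[r][j] for r in range(k)) for j in range(m)]
            for i in range(n)]

def sign_sketch(k, n, rng):
    """k x n sketch with i.i.d. +/- 1/sqrt(k) entries, so E[S^T S] = I."""
    s = 1.0 / k ** 0.5
    return [[rng.choice((-s, s)) for _ in range(n)] for _ in range(k)]
```

If S happens to have orthonormal rows spanning the whole space (e.g. the identity, with k = n), the estimate is exact; the point of the streaming model is that a k ≪ n random-sign sketch already gives a provably good approximation in far less space.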
C∗-algebras and numerical linear algebra
J. Funct. Anal.
"... Abstract. Given a self-adjoint operator A on a Hilbert space, suppose that one wishes to compute the spectrum of A numerically. In practice, these problems often arise in such a way that the matrix of A relative to a natural basis is “sparse”. For example, doubly infinite tridiagonal matrices a ..."
Cited by 25 (0 self)