Results 1–10 of 12
Parallel Numerical Linear Algebra
, 1993
Cited by 671 (25 self)

Abstract:
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
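The survey illustrates its principles with matrix multiplication. As a rough sketch of the blocking idea (partitioning the operands into tiles so that each tile product is an independent unit of work that could be assigned to a processor), one might write something like the following; the function name and block size are illustrative, not taken from the paper.

```python
def blocked_matmul(A, B, bs=2):
    """Multiply dense matrices tile by tile. Each (ii, kk, jj) tile product
    reads one bs-by-bs block of A and B and updates one block of C, which is
    the unit of work one would distribute across processors."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, m, bs):
            for jj in range(0, p, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, m)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + bs, p)):
                            C[i][j] += aik * B[k][j]
    return C
```

On a real distributed-memory machine the tiles would live on different processors and the loop order would be chosen to minimize communication; the sketch only shows the data partitioning.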
Optimal Partitioning of Sequences
 IEEE TRANSACTIONS ON COMPUTERS
, 1995
Cited by 30 (6 self)

Abstract:
The problem of partitioning a sequence of n real numbers into p intervals is considered. The goal is to find a partition such that the cost of the most expensive interval, measured with a cost function f, is minimized. An efficient algorithm which solves the problem in time O(p(n − p) log p) is developed. The algorithm is based on finding a sequence of feasible nonoptimal partitions, each having only one way it can be improved to get a better partition. Finally, a number of related problems are considered and shown to be solvable by slight modifications of our main algorithm.
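For reference, the min-max partitioning problem the abstract describes (with f the interval sum) can be solved by a straightforward O(pn²) dynamic program. The sketch below is that simple baseline, not the paper's faster O(p(n − p) log p) algorithm, and the function name is ours.

```python
def min_max_partition(a, p):
    """Split a into p contiguous intervals minimizing the maximum interval sum."""
    n = len(a)
    prefix = [0] * (n + 1)
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x
    INF = float("inf")
    # dp[k][i]: minimal achievable cost of the most expensive interval
    # when the first i elements are split into k intervals.
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for k in range(1, p + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):
                # Last interval is a[j:i]; the rest uses k-1 intervals.
                cost = max(dp[k - 1][j], prefix[i] - prefix[j])
                if cost < dp[k][i]:
                    dp[k][i] = cost
    return dp[p][n]
```

For example, splitting [1, 2, 3, 4, 5] into two intervals gives a best worst-interval sum of 9 (split [1, 2, 3] | [4, 5]).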
Direct Parallel Algorithms for Banded Linear Systems
, 1994
Cited by 22 (8 self)

Abstract:
We investigate direct algorithms to solve banded linear systems of equations on MIMD multiprocessor computers with distributed memory. We show that it is hard to beat ordinary one-processor Gaussian elimination. Numerical computation results from the Intel Paragon are given.

1. Introduction. In a project on divide-and-conquer algorithms in numerical linear algebra, the authors studied parallel algorithms to solve systems of linear equations and eigenvalue problems. The latter consisted of a study of the divide-and-conquer algorithm proposed by Cuppen [4] and stabilized by Sorensen and Tang [11]. This algorithm is becoming the standard algorithm for solving the symmetric tridiagonal eigenvalue problem on sequential as well as parallel computers. In [7], Gates and Arbenz report on the first successful parallel implementation of the algorithm. They observed almost optimal speedups on the Intel Paragon. The accuracy observed is as good as with any other known (fast) algorithm. The...
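The one-processor baseline the abstract refers to is Gaussian elimination specialized to the band structure. For the narrowest band, a tridiagonal system, it reduces to the well-known Thomas algorithm, sketched here as an illustration (not code from the paper):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by banded Gaussian elimination.
    a = subdiagonal (a[0] unused), b = main diagonal, c = superdiagonal
    (c[-1] unused), d = right-hand side. O(n) work, inherently sequential."""
    n = len(b)
    cp = [0.0] * n  # modified superdiagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward sweep's loop-carried dependence is exactly what makes this hard to parallelize, which is the point the paper elaborates for general bandwidths.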
Minimization of the norm, the norm of the inverse and the condition number of a matrix by completion
, 1994
Developments and Trends in the Parallel Solution of Linear Systems
, 1999
Cited by 6 (0 self)

Abstract:
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
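One combination of the review's themes (iterative sparse solvers plus preconditioning) is conjugate gradients with a Jacobi preconditioner. The following minimal sketch for a dense SPD matrix is illustrative only, not an algorithm from the paper:

```python
import math

def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive
    definite matrix A (dense list-of-lists), starting from x = 0."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner: diag(A)^-1
    x = [0.0] * n
    r = b[:]                                    # residual r = b - A x with x = 0
    z = [Minv[i] * r[i] for i in range(n)]      # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

In the parallel setting the review discusses, the matrix-vector product and the inner products are the operations that must be distributed, and the choice of preconditioner largely determines how well the method scales.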
Mehrmann: Minimizing the condition number of a positive definite matrix by completion, preprint
, 1993
Algebraic Domain Decomposition
Cited by 1 (0 self)

Abstract:
We discuss algebraic domain decomposition strategies for large sparse linear systems. This is done by use of the low-rank modification formula due to Sherman, Morrison, and Woodbury. Most of this paper concentrates on the properties and treatment of the so-called coupling system, which arises from the application of the low-rank modification formula. A strategy to improve the properties is presented, and the close relations to algebraic multigrid methods are shown. Notation. Unless we explicitly need R or C, we will use the symbol F, which may be replaced by either, i.e. F ∈ {R, C}. We use ∗ to denote the adjoint operation with respect to a given inner product (·, ·). If nothing different is mentioned, we assume that the inner product is the standard inner product. In this case, ∗ is either the transposition operator T for the real case or the conjugate-transposition operator H for the complex case. For any pair A, B of n × n symmetric (Hermitian) mat...
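The Sherman-Morrison-Woodbury identity the abstract builds on lets one solve with a low-rank-modified matrix using only solves with the original matrix. A minimal rank-one sketch (the helper names are ours, and the paper's coupling system generalizes this to higher rank):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sherman_morrison_solve(A, u, v, b):
    """Solve (A + u v^T) x = b using only two solves with A:
    x = A^-1 b - (v^T A^-1 b) / (1 + v^T A^-1 u) * A^-1 u."""
    n = len(b)
    Ainv_b = solve(A, b)
    Ainv_u = solve(A, u)
    factor = sum(v[i] * Ainv_b[i] for i in range(n)) \
        / (1.0 + sum(v[i] * Ainv_u[i] for i in range(n)))
    return [Ainv_b[i] - factor * Ainv_u[i] for i in range(n)]
```

In a domain decomposition setting, A would be the decoupled block-diagonal part (solvable subdomain by subdomain in parallel) and the low-rank term would carry the coupling between subdomains.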
Parallel Turbulence Simulation based on MPI
 IN HIGH-PERFORMANCE COMPUTING AND NETWORKING, INTERNATIONAL CONFERENCE AND EXHIBITION HPCN EUROPE
, 1996
Cited by 1 (0 self)

Abstract:
We describe a parallel implementation for large-eddy simulation and direct numerical simulation of turbulent fluids based on the three-dimensional incompressible Navier-Stokes equations. The parallelization strategy is specified by domain decomposition and a divide-and-conquer method for solving the Poisson equation. The program is benchmarked on a set of supercomputers under the message-passing platform MPI. Running times of these tests are presented.
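The pressure Poisson equation mentioned in the abstract is the part solved by divide and conquer. As a much simpler stand-in for the idea, a Jacobi sweep for the 1D Poisson problem −u″ = f with zero boundary values (illustrative only, not the paper's solver) looks like:

```python
def jacobi_poisson_1d(f, h, sweeps):
    """Jacobi iteration for -u'' = f on a uniform grid with spacing h,
    u = 0 at both ends; f holds the interior right-hand-side values."""
    n = len(f)
    u = [0.0] * n
    for _ in range(sweeps):
        # Discrete equation: (-u[i-1] + 2 u[i] - u[i+1]) / h^2 = f[i],
        # so the Jacobi update averages the neighbors plus the source term.
        u = [0.5 * ((u[i - 1] if i > 0 else 0.0)
                    + (u[i + 1] if i < n - 1 else 0.0)
                    + h * h * f[i]) for i in range(n)]
    return u
```

Each sweep only reads neighboring values, so the grid can be split into subdomains with a one-point halo exchange per sweep, which is what makes this kind of solver a natural fit for the MPI-based domain decomposition the paper describes.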
Cluster solution of block tridiagonal systems
Abstract:
In order to exploit the capacities of cluster computing in relatively small numerical problems, we compare the performance of parallel algorithms for the solution of block tridiagonal linear systems, one based on cyclic reduction and the other on the divide and conquer paradigm.
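For the scalar (1-by-1 block) case, the cyclic reduction approach compared in the abstract can be sketched as follows: each step eliminates the odd-indexed unknowns, halving the system, and the independent eliminations within a step are what a cluster can perform in parallel. The recursive formulation below is an illustration, not the authors' implementation.

```python
def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i]
    (with a[0] = c[-1] = 0) by cyclic reduction."""
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    # Eliminate odd-indexed unknowns: combine each even equation i with its
    # neighbors i-1 and i+1, scaled so the odd unknowns cancel. These
    # combinations are independent and could run in parallel.
    ea, eb, ec, ed = [], [], [], []
    for i in range(0, n, 2):
        al = -a[i] / b[i - 1] if i > 0 else 0.0
        ga = -c[i] / b[i + 1] if i < n - 1 else 0.0
        ea.append(al * a[i - 1] if i > 0 else 0.0)
        eb.append(b[i] + (al * c[i - 1] if i > 0 else 0.0)
                  + (ga * a[i + 1] if i < n - 1 else 0.0))
        ec.append(ga * c[i + 1] if i < n - 1 else 0.0)
        ed.append(d[i] + (al * d[i - 1] if i > 0 else 0.0)
                  + (ga * d[i + 1] if i < n - 1 else 0.0))
    xe = cyclic_reduction(ea, eb, ec, ed)   # half-size system in x[0], x[2], ...
    x = [0.0] * n
    for k, i in enumerate(range(0, n, 2)):
        x[i] = xe[k]
    # Back-substitute the odd unknowns; again independent of one another.
    for i in range(1, n, 2):
        right = x[i + 1] if i < n - 1 else 0.0
        x[i] = (d[i] - a[i] * x[i - 1] - c[i] * right) / b[i]
    return x
```

The block tridiagonal case replaces each scalar division by a solve with a diagonal block, which is where the comparison with divide-and-conquer variants becomes interesting.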