Results 1 - 6 of 6
Optimal Partitioning of Sequences
IEEE Transactions on Computers, 1995
Abstract

Cited by 36 (6 self)
The problem of partitioning a sequence of n real numbers into p intervals is considered. The goal is to find a partition such that the cost of the most expensive interval, measured with a cost function f, is minimized. An efficient algorithm which solves the problem in time O(p(n - p) log p) is developed. The algorithm is based on finding a sequence of feasible nonoptimal partitions, each having only one way it can be improved to get a better partition. Finally, a number of related problems are considered and shown to be solvable by slight modifications of our main algorithm.
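The abstract's O(p(n - p) log p) algorithm is not reproduced here, but the problem itself can be sketched in a few lines under a common special case: the cost function f is the interval sum over nonnegative integers. This sketch binary-searches the bottleneck value with a greedy feasibility test, running in O(n log(sum(costs))); all names are illustrative.

```python
def min_max_partition(costs, p):
    """Partition a sequence into at most p contiguous intervals so that the
    largest interval cost is minimized, assuming cost f = interval sum over
    nonnegative integers.  Binary search on the bottleneck value with a
    greedy feasibility check (not the paper's algorithm)."""
    def fits(cap):
        # Greedily pack elements; count how many intervals a cap forces.
        parts, current = 1, 0
        for c in costs:
            if c > cap:
                return False          # a single element already exceeds cap
            if current + c > cap:
                parts, current = parts + 1, c
            else:
                current += c
        return parts <= p

    lo, hi = max(costs), sum(costs)   # answer is bracketed by these bounds
    while lo < hi:
        mid = (lo + hi) // 2
        if fits(mid):
            hi = mid                  # cap achievable: try smaller
        else:
            lo = mid + 1              # cap too tight
    return lo
```

For example, `min_max_partition([1, 2, 3, 4, 5], 2)` returns 9, corresponding to the split [1, 2, 3] | [4, 5].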
Direct Parallel Algorithms for Banded Linear Systems
, 1994
Abstract

Cited by 19 (8 self)
We investigate direct algorithms to solve linear banded systems of equations on MIMD multiprocessor computers with distributed memory. We show that it is hard to beat ordinary one-processor Gaussian elimination. Numerical computation results from the Intel Paragon are given.

1. Introduction
In a project on divide and conquer algorithms in numerical linear algebra, the authors studied parallel algorithms to solve systems of linear equations and eigenvalue problems. The latter consisted of a study of the divide and conquer algorithm proposed by Cuppen [4] and stabilized by Sorensen and Tang [11]. This algorithm is evolving into the standard algorithm for solving the symmetric tridiagonal eigenvalue problem on sequential as well as parallel computers. In [7], Gates and Arbenz report on the first successful parallel implementation of the algorithm. They observed almost optimal speedups on the Intel Paragon. The accuracy observed is as good as with any other known (fast) algorithm. The...
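The one-processor Gaussian elimination that the abstract says is hard to beat can be illustrated for the simplest banded case, a tridiagonal system, where elimination without pivoting (the Thomas algorithm) runs in O(n). This is a generic sketch, not code from the paper, and it assumes the matrix is diagonally dominant so that pivoting is unnecessary.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by Gaussian elimination without pivoting
    (Thomas algorithm), O(n).  a: sub-diagonal (length n-1), b: diagonal
    (length n), c: super-diagonal (length n-1), d: right-hand side.
    Assumes diagonal dominance; names are illustrative."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]                       # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]       # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n                             # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the system with diagonal [2, 3, 2], off-diagonals [1, 1], and right-hand side [3, 5, 3], the solution is [1, 1, 1].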
Minimization of the norm, the norm of the inverse and the condition number of a matrix by completion
, 1994
Developments and Trends in the Parallel Solution of Linear Systems
Parallel Computing, 1999
Abstract

Cited by 5 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems, concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.

Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning.
AMS(MOS) subject classifications: 65F05, 65F50.

1 Introduction
Solution methods for systems of linear equations
Ax = b, (1)
where A is a coefficient matrix of order n and x and b are n-vectors, are usually grouped into two distinct classes: direct methods and iterative methods. However, CCLRC - Rutherford Appleton Laboratory, Oxfordshire, England and CERFACS, Toulouse, ...
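The two classes distinguished above can be contrasted with a minimal member of the iterative family: the Jacobi iteration, which repeatedly solves for each unknown from its own equation while freezing the others. This is an illustrative sketch, not from the paper; it uses a dense list-of-lists purely for clarity (real sparse solvers store only the nonzeros), and convergence is guaranteed here only for strictly diagonally dominant A.

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for Ax = b: the simplest iterative method.
    Assumes A is strictly diagonally dominant so the iteration converges.
    Illustrative only; practical codes use sparse storage and a
    convergence test instead of a fixed iteration count."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Each component is updated from the previous iterate only.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

For A = [[4, 1], [1, 3]] and b = [1, 2] the iterates converge to x = (1/11, 7/11); a direct method would obtain the same answer in one (factor + solve) pass.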
Algebraic Domain Decomposition
Abstract

Cited by 1 (0 self)
We discuss algebraic domain decomposition strategies for large sparse linear systems. This is done by use of the low-rank modification formula due to Sherman, Morrison and Woodbury. Most of this paper concentrates on the properties and treatment of the so-called coupling system, which arises from the application of the low-rank modification formula. A strategy to improve the properties is presented and the close relations to algebraic multigrid methods are shown.

Notation. Unless we explicitly need R or C, we will use the symbol F, which may be replaced by either, i.e. F ∈ {R, C}. We use * to denote the adjoint operation with respect to a given inner product (·, ·). Unless otherwise stated, we assume that the inner product is the standard inner product; in this case, * is either the transposition operator T in the real case or the conjugate transposition operator H in the complex case. For any pair A, B of n × n symmetric (Hermitian) mat...
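The Sherman-Morrison-Woodbury formula named in the abstract solves (A + UCV)x = b using only solves with A plus one small dense "coupling" system; the sketch below shows that structure. Function and argument names are illustrative assumptions, not taken from the paper, and A_solve stands in for whatever fast solver the decoupled matrix A admits.

```python
import numpy as np

def woodbury_solve(A_solve, U, C, V, b):
    """Solve (A + U C V) x = b via the Sherman-Morrison-Woodbury formula:
    (A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}.
    A_solve(y) applies A^{-1} (e.g. a fast banded/decoupled solver); the
    small k-by-k system below is the coupling system."""
    Ainv_b = A_solve(b)                      # A^{-1} b
    Ainv_U = A_solve(U)                      # A^{-1} U, one solve per column
    # Coupling system: (C^{-1} + V A^{-1} U) y = V A^{-1} b
    S = np.linalg.inv(C) + V @ Ainv_U
    y = np.linalg.solve(S, V @ Ainv_b)
    return Ainv_b - Ainv_U @ y

# Usage: rank-1 modification of a diagonal matrix.
A = np.diag([2.0, 3.0, 4.0])
U = np.array([[1.0], [0.0], [1.0]])
C = np.array([[0.5]])
V = U.T
b = np.array([1.0, 2.0, 3.0])
x = woodbury_solve(lambda y: np.linalg.solve(A, y), U, C, V, b)
```

In a domain decomposition setting A would be block diagonal (one block per subdomain, each solvable independently), and only the small coupling system ties the subdomains together.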
Minimizing the Condition Number of a Positive Definite Matrix By Completion
, 1994
Abstract

Cited by 1 (0 self)
Introduction. Let A be an n × n positive definite Hermitian matrix (denoted by A > 0), let B be a p × n matrix, and let

W(X) = [ A  B^H ; B  X ]

for a Hermitian X. (Here B^H denotes the conjugate transpose of the matrix B.) We consider the optimization problem

min_{X : W(X) > 0} cond(W(X)),   (1)

where

cond(W(X)) = ||W(X)|| ||W(X)^{-1}|| = λ_max(W(X)) / λ_min(W(X))   (2)

is the spectral con...
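The objects in the abstract, the block completion W(X) and the spectral condition number λ_max/λ_min of (2), can be set up in a few lines. This sketch only assembles W(X) and evaluates its condition number for a trial X; the paper's subject, choosing X optimally, is not attempted, and the helper names are illustrative.

```python
import numpy as np

def spectral_cond(W):
    """Spectral condition number lambda_max / lambda_min of a Hermitian
    positive definite matrix, as in equation (2) of the abstract."""
    ev = np.linalg.eigvalsh(W)          # eigenvalues in ascending order
    return ev[-1] / ev[0]

def completion(A, B, X):
    """Assemble the block matrix W(X) = [A, B^H; B, X] for a trial
    Hermitian block X.  Illustrative helper only."""
    return np.block([[A, B.conj().T], [B, X]])

# A trial completion: A = I_2, B a 1-by-2 row, X a 1-by-1 Hermitian block.
A = np.eye(2)
B = np.array([[0.1, 0.2]])
W = completion(A, B, np.array([[2.0]]))
```

For a Hermitian positive definite W the spectral condition number coincides with the 2-norm condition number `np.linalg.cond(W)`, since singular values and eigenvalues agree; different trial blocks X yield different values, which is what problem (1) minimizes over.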