Results 1–10 of 14
Analysis of multilevel graph partitioning
, 1995
"... Recently, a number of researchers have investigated a class of algorithms that are based on multilevel graph partitioning that have moderate computational complexity, and provide excellent graph partitions. However, there exists little theoretical analysis that could explain the ability of multileve ..."
Abstract

Cited by 104 (13 self)
 Add to MetaCart
Recently, a number of researchers have investigated a class of algorithms based on multilevel graph partitioning that have moderate computational complexity and provide excellent graph partitions. However, there exists little theoretical analysis that could explain the ability of multilevel algorithms to produce good partitions. In this paper we present such an analysis. We show, under certain reasonable assumptions, that even if no refinement is used in the uncoarsening phase, a good bisection of the coarser graph is worse than a good bisection of the finer graph by at most a small factor. We also show that the size of a good vertex separator of the coarse graph projected to the finer graph (without performing refinement in the uncoarsening phase) is larger than the size of a good vertex separator of the finer graph by at most a small factor.
A Parallel Algorithm for Multilevel Graph Partitioning and Sparse Matrix Ordering
, 1996
"... ..."
(Show Context)
Graph partitioning for high performance scientific simulations. Computing Reviews 45(2
, 2004
"... ..."
(Show Context)
Robust Ordering of Sparse Matrices using Multisection
 Department of Computer Science, York University
, 1996
"... In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree ..."
Abstract

Cited by 50 (2 self)
 Add to MetaCart
(Show Context)
In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree and generalized nested dissection. Experimental results show that by using multisection, we obtain an ordering which is consistently as good as or better than both for a wide spectrum of sparse problems.

1 Introduction
It is well recognized that finding a fill-reducing ordering is crucial to the success of the numerical solution of sparse linear systems. For symmetric positive-definite systems, the minimum degree [38] and the nested dissection [11] orderings are perhaps the most popular ordering schemes. They represent two opposite approaches to the ordering problem. However, they share a common undesirable characteristic: both schemes produce generally good orderings, but the ordering qua...
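As a toy illustration of why the elimination order matters for fill (this is not the paper's multisection scheme; the function name and example graph are ours), the sketch below counts fill edges produced by symbolic elimination on a star graph, where a minimum-degree-style order (leaves first) produces no fill at all:

```python
def fill_count(adj, order):
    """Symbolically eliminate vertices in `order`; eliminating v turns its
    remaining neighbours into a clique, and each clique edge not already
    present in the graph counts as one fill edge."""
    g = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    eliminated, fill = set(), 0
    for v in order:
        nbrs = [u for u in g[v] if u not in eliminated]
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                if b not in g[a]:               # new fill edge a-b
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        eliminated.add(v)
    return fill

# Star graph: hub 0 connected to leaves 1..4.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(fill_count(star, [0, 1, 2, 3, 4]))  # hub first: 6 fill edges
print(fill_count(star, [1, 2, 3, 4, 0]))  # leaves first (min degree): 0
```

Eliminating the hub first connects all four leaves into a clique, while eliminating the low-degree leaves first creates no new edges, which is exactly the intuition behind fill-reducing orderings.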
A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm
 PARALLEL PROCESSING FOR SCIENTIFIC COMPUTING. SIAM
, 1997
"... In this paper we present a parallel formulation of a multilevel kway graph partitioning algorithm, that is particularly suited for messagepassing libraries that have high latency. The multilevel kway partitioning algorithm reduces the size of the graph by successively collapsing vertices and edge ..."
Abstract

Cited by 37 (0 self)
 Add to MetaCart
In this paper we present a parallel formulation of a multilevel k-way graph partitioning algorithm that is particularly suited for message-passing libraries that have high latency. The multilevel k-way partitioning algorithm reduces the size of the graph by successively collapsing vertices and edges (coarsening phase), finds a k-way partitioning of the smaller graph, and then constructs a k-way partitioning for the original graph by projecting and refining the partition through successively finer graphs (uncoarsening phase). Our algorithm achieves a high degree of concurrency while maintaining the high-quality partitions produced by the serial algorithm.
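The coarsen/partition/project pipeline described above can be sketched in miniature (serial bisection only, with no refinement step; the matching heuristic and all function names are our illustration, not the paper's implementation):

```python
from itertools import combinations

def coarsen(adj):
    """One coarsening level: greedily match each vertex with its heaviest
    unmatched neighbour (heavy-edge matching) and merge matched pairs.
    `adj` maps each vertex to {neighbour: edge_weight}."""
    label, nid = {}, 0
    for u in sorted(adj):
        if u in label:
            continue
        label[u] = nid
        cands = [(w, v) for v, w in adj[u].items() if v not in label]
        if cands:
            label[max(cands)[1]] = nid      # merge u with heaviest neighbour
        nid += 1
    coarse = {i: {} for i in range(nid)}    # rebuild weighted coarse graph
    for u in adj:
        for v, w in adj[u].items():
            a, b = label[u], label[v]
            if a != b:
                coarse[a][b] = coarse[a].get(b, 0) + w
    return coarse, label

def bisect_small(adj):
    """Brute-force minimum-cut balanced bisection of the (small) coarse graph."""
    nodes = sorted(adj)
    best_cut, best = float("inf"), None
    for part in combinations(nodes, len(nodes) // 2):
        s = set(part)
        cut = sum(w for u in s for v, w in adj[u].items() if v not in s)
        if cut < best_cut:
            best_cut, best = cut, s
    return best

def project(coarse_part, label):
    """Map a coarse partition back to the fine vertices (no refinement)."""
    return {u for u, l in label.items() if l in coarse_part}

# Two unit-weight triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
adj = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1, 3: 1},
       3: {2: 1, 4: 1, 5: 1}, 4: {3: 1, 5: 1}, 5: {3: 1, 4: 1}}
coarse, label = coarsen(adj)
part = project(bisect_small(coarse), label)
print(part)  # one triangle, e.g. {0, 1, 2}; the fine cut is the single bridge edge
```

Even without refinement the projected bisection cuts only the bridge edge here, which mirrors the observation (from the analysis paper above) that a good coarse bisection remains good on the finer graph.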
The impact of high performance computing in the solution of linear systems: trends and problems
, 1999
"... We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in thi ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in this area and speculate on what advances we might expect in the early years of the next century.

Keywords: sparse matrices, direct methods, parallelism, matrix factorization, multifrontal methods. AMS(MOS) subject classifications: 65F05, 65F50.

Current reports available at http://www.cerfacs.fr/algor/algo reports.html. Also appeared as Technical Report RAL-TR-1999-072 from Rutherford Appleton Laboratory, Oxfordshire. Contact: duff@cerfacs.fr; also at Atlas Centre, RAL, Oxon OX11 0QX, England.
Developments and Trends in the Parallel Solution of Linear Systems
, 1999
"... In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equat ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
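As a minimal concrete instance of the preconditioned iterative solvers this review surveys (a textbook Jacobi-preconditioned conjugate gradient sketch, not any specific method from the paper; the helper names and dense storage are ours for brevity):

```python
def matvec(A, x):
    """Dense matrix-vector product; real sparse codes use compressed storage."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def pcg(A, b, tol=1e-12, maxit=100):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner
    M = diag(A); A must be symmetric positive definite."""
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]
    x = [0.0] * n
    r = b[:]                                     # r = b - A*0
    z = [mi * ri for mi, ri in zip(minv, r)]     # z = M^{-1} r
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1-D Laplacian (tridiagonal, SPD): solve A x = b with b = 1.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
x = pcg(A, [1.0] * n)  # exact solution is [2.5, 4.0, 4.5, 4.0, 2.5]
```

Swapping the diagonal `M` for a stronger preconditioner (incomplete factorization, multilevel) is exactly the design axis the review's preconditioning discussion is concerned with.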
The user manual for SPOOLES, Release 2.0: An Object Oriented Software Library for solving sparse linear systems of equations
, 1998
"... ..."
Large-scale normal coordinate analysis on distributed memory parallel systems, technical report
 Edmond Chow, Steve Lee, Panayot Vassilevski, Carol Woodward Carnegie Mellon University
"... A parallel computational scheme for analyzing largescale molecular vibration on distributed memory computing platforms is presented in this paper. This method combines the implicitly restarted Lanczos algorithm with a stateofart parallel sparse direct solver to compute a set of low frequency vi ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
A parallel computational scheme for analyzing large-scale molecular vibration on distributed memory computing platforms is presented in this paper. This method combines the implicitly restarted Lanczos algorithm with a state-of-the-art parallel sparse direct solver to compute a set of low-frequency vibrational modes for molecular systems containing tens of thousands of atoms. Although the original motivation for developing such a scheme was to overcome memory limitations on traditional sequential and shared memory machines, our computational experiments show that with a careful parallel design and data partitioning scheme one can achieve scalable performance on loosely coupled distributed memory parallel systems. In particular, we demonstrate the performance enhancement achieved by using the latency-tolerant “selective inversion” scheme in the sparse triangular substitution phase of the computation.