Results 1 – 9 of 9
Analysis of multilevel graph partitioning
, 1995
Abstract

Cited by 90 (14 self)
Recently, a number of researchers have investigated a class of algorithms based on multilevel graph partitioning that have moderate computational complexity and provide excellent graph partitions. However, there exists little theoretical analysis that could explain the ability of multilevel algorithms to produce good partitions. In this paper we present such an analysis. We show, under certain reasonable assumptions, that even if no refinement is used in the uncoarsening phase, a good bisection of the coarser graph is worse than a good bisection of the finer graph by at most a small factor. We also show that the size of a good vertex separator of the coarse graph projected to the finer graph (without performing refinement in the uncoarsening phase) is higher than the size of a good vertex separator of the finer graph by at most a small factor.
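The scheme the abstract analyzes (coarsen by collapsing matched vertices, bisect the coarsest graph, then project the bisection back with no refinement) can be sketched roughly as follows. All function names and the toy graph are illustrative, not taken from the paper:

```python
def coarsen(adj):
    """Collapse a greedy maximal matching; return coarse adjacency and fine-to-coarse map."""
    match = {}
    for u in adj:                              # greedy maximal matching
        if u in match:
            continue
        partner = next((v for v in adj[u] if v not in match), None)
        if partner is not None:
            match[u], match[partner] = partner, u
    cmap, next_id = {}, 0
    for u in sorted(adj):                      # matched pairs share one coarse vertex
        if u not in cmap:
            cmap[u] = next_id
            if u in match:
                cmap[match[u]] = next_id
            next_id += 1
    cadj = {c: set() for c in range(next_id)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if cmap[u] != cmap[v]:
                cadj[cmap[u]].add(cmap[v])
    return cadj, cmap

def project(coarse_part, cmap):
    """Lift a coarse bisection to the finer graph with no refinement."""
    return {u: coarse_part[c] for u, c in cmap.items()}

# Toy example: a 6-cycle coarsens to a triangle; a bisection of the
# triangle is projected straight back to the cycle.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
cadj, cmap = coarsen(adj)
coarse_part = {c: int(c >= len(cadj) // 2) for c in cadj}
part = project(coarse_part, cmap)
cut = sum(1 for u in adj for v in adj[u] if u < v and part[u] != part[v])
```

Even with no refinement, the projected bisection of the cycle is the optimal one here (cut of 2), which is the flavor of the paper's "at most a small factor" result.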
Graph partitioning for high performance scientific simulations. Computing Reviews 45(2
, 2004
Robust Ordering of Sparse Matrices using Multisection
 Department of Computer Science, York University
, 1996
Abstract

Cited by 46 (2 self)
In this paper we provide a robust reordering scheme for sparse matrices. The scheme relies on the notion of multisection, a generalization of bisection. The reordering strategy is demonstrated to have consistently good performance in terms of fill reduction when compared with multiple minimum degree and generalized nested dissection. Experimental results show that by using multisection, we obtain an ordering which is consistently as good as or better than both for a wide spectrum of sparse problems. 1 Introduction It is well recognized that finding a fill-reducing ordering is crucial to the success of the numerical solution of sparse linear systems. For symmetric positive-definite systems, the minimum degree [38] and the nested dissection [11] orderings are perhaps the most popular ordering schemes. They represent two opposite approaches to the ordering problem. However, they share a common undesirable characteristic. Both schemes produce generally good orderings, but the ordering qua...
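The fill-reduction objective that minimum degree, nested dissection, and multisection all target can be made concrete with a small symbolic-elimination sketch (the helper below is illustrative, not the paper's code):

```python
def fill_in(adj, order):
    """Count fill edges created by symbolically eliminating vertices in `order`."""
    g = {u: set(nbrs) for u, nbrs in adj.items()}
    fill = 0
    for u in order:
        live = [v for v in g[u] if v in g]     # neighbors not yet eliminated
        for i, a in enumerate(live):
            for b in live[i + 1:]:
                if b not in g[a]:              # eliminating u connects a and b
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        del g[u]
    return fill

# Star graph: eliminating the hub first fills in a clique on the leaves,
# while eliminating the leaves first creates no fill at all.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
hub_first = fill_in(star, [0, 1, 2, 3])        # 3 fill edges
leaves_first = fill_in(star, [1, 2, 3, 0])     # 0 fill edges
```

The two orderings of the same matrix produce very different fill, which is why ordering quality dominates the cost of sparse factorization.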
A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm
 PARALLEL PROCESSING FOR SCIENTIFIC COMPUTING. SIAM
, 1997
Abstract

Cited by 37 (0 self)
In this paper we present a parallel formulation of a multilevel k-way graph partitioning algorithm that is particularly suited for message-passing libraries that have high latency. The multilevel k-way partitioning algorithm reduces the size of the graph by successively collapsing vertices and edges (coarsening phase), finds a k-way partitioning of the smaller graph, and then constructs a k-way partitioning for the original graph by projecting and refining the partition through successively finer graphs (uncoarsening phase). Our algorithm is able to achieve a high degree of concurrency while maintaining the high-quality partitions produced by the serial algorithm.
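The refinement step in the uncoarsening phase can be illustrated with a minimal greedy sweep over vertices (a hypothetical stand-in, far simpler than the paper's parallel formulation):

```python
def edge_cut(adj, part):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u in adj for v in adj[u] if u < v and part[u] != part[v])

def refine_once(adj, part, k):
    """One greedy sweep: move each vertex to the part holding most of its neighbors."""
    for u in adj:
        counts = [0] * k
        for v in adj[u]:
            counts[part[v]] += 1
        best = max(range(k), key=lambda p: counts[p])
        if counts[best] > counts[part[u]]:
            part[u] = best
    return part

# Two triangles joined by a bridge, with a deliberately bad 2-way partition;
# one sweep moves vertex 2 back and halves the edge cut.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1}
before = edge_cut(adj, part)                   # 2
part = refine_once(adj, part, 2)
after = edge_cut(adj, part)                    # 1
```

Production refinement (e.g. Kernighan-Lin/Fiduccia-Mattheyses variants) also enforces balance constraints and considers hill-climbing moves, which this sketch omits.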
Developments and Trends in the Parallel Solution of Linear Systems
 Parallel Computing
, 1999
Abstract

Cited by 5 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems, concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field. Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning. AMS(MOS) subject classifications: 65F05, 65F50. 1 Introduction Solution methods for systems of linear equations Ax = b, (1) where A is a coefficient matrix of order n and x and b are n-vectors, are usually grouped into two distinct classes: direct methods and iterative methods. However, ...
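The two solver classes the review contrasts can be sketched side by side on a tiny dense system (pure-Python toy code, purely illustrative of the distinction):

```python
def solve_direct(A, b):
    """Direct method: Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]                    # pivot for stability
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):                   # back-substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_jacobi(A, b, iters=200):
    """Iterative method: Jacobi sweeps (converge here since A is diagonally dominant)."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x_direct = solve_direct(A, b)
x_jacobi = solve_jacobi(A, b)
```

The direct solver delivers the answer in a fixed number of operations; the iterative one only approaches it, and its convergence depends on properties of A, which is where the preconditioning the review discusses comes in.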
The impact of high performance computing in the solution of linear systems: trends and problems
, 1999
Abstract

Cited by 5 (0 self)
We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in this area and speculate on what advances we might expect in the early years of the next century. Keywords: sparse matrices, direct methods, parallelism, matrix factorization, multifrontal methods. AMS(MOS) subject classifications: 65F05, 65F50. Also appeared as Technical Report RAL-TR-1999-072 from Rutherford Appleton Laboratory, Oxfordshire.
The User Manual for SPOOLES: Release 2.0: An Object Oriented Software Library for Solving Sparse Linear Systems of Equations
, 1998
Abstract

Cited by 3 (0 self)
Solving sparse linear systems of equations is a common and important component of a multitude of scientific and engineering applications. The SPOOLES software package provides this functionality with a collection of software objects and methods. The library provides the user various options to assemble the sparse linear system and to order the system for sparsity preservation. The user can select numerical factorization options such as pivoting for numerical stability and a drop-tolerance incomplete factorization. This package can be used for applications where linear systems of the form A + σB need to be solved for various values of σ. A QR factorization capability for full-rank overdetermined systems is included. The library is written in ANSI C using object-oriented design. Data is contained in objects. Each object has several methods to enter data into, extract data from, and perform work on the data in the objects. This release of the library contains serial factorization a...
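The "A + σB for several values of σ" workflow the manual mentions looks roughly like this in generic NumPy (not the SPOOLES API, and dense rather than sparse):

```python
import numpy as np

# Toy shifted-system family (A + sigma*B) x = b. In SPOOLES the point is that
# ordering and symbolic work can be reused across shifts; this dense sketch
# simply re-solves from scratch for each sigma.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)                                  # illustrative shift matrix
b = np.array([1.0, 2.0])

solutions = {sigma: np.linalg.solve(A + sigma * B, b)
             for sigma in (0.0, 0.5, 1.0)}
```

Shifted families like this arise, for example, in eigenvalue computations, where many shifts σ are tried against the same pair of matrices.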