Results 1–10 of 34
Geometric Mesh Partitioning: Implementation and Experiments
"... We investigate a method of dividing an irregular mesh into equalsized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain ..."
Abstract

Cited by 102 (19 self)
We investigate a method of dividing an irregular mesh into equal-sized pieces with few interconnecting edges. The method’s novel feature is that it exploits the geometric coordinates of the mesh vertices. It is based on theoretical work of Miller, Teng, Thurston, and Vavasis, who showed that certain classes of “well-shaped” finite element meshes have good separators. The geometric method is quite simple to implement: we describe a Matlab code for it in some detail. The method is also quite efficient and effective: we compare it with some other methods, including spectral bisection.
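The coordinate-based idea can be illustrated with a minimal sketch. This is an illustrative simplification only, not the authors' Matlab code: the helper `coordinate_bisect`, the input format, and the median split along the widest axis are all assumptions of this example (the paper's method uses circle/sphere separators rather than axis-aligned cuts).

```python
# Simplified coordinate bisection (illustrative sketch, not the authors'
# Matlab implementation): split the vertices at the median along the
# coordinate axis with the largest spread, then count the cut edges.
def coordinate_bisect(coords, edges):
    dims = len(coords[0])
    axis = max(range(dims),
               key=lambda d: max(c[d] for c in coords) - min(c[d] for c in coords))
    order = sorted(range(len(coords)), key=lambda v: coords[v][axis])
    half = set(order[:len(order) // 2])
    part = [0 if v in half else 1 for v in range(len(coords))]
    cut = sum(1 for u, v in edges if part[u] != part[v])
    return part, cut

# A 4-vertex chain laid out along the x-axis splits between vertices 1 and 2.
part, cut = coordinate_bisect([(0, 0), (1, 0), (2, 0), (3, 0)],
                              [(0, 1), (1, 2), (2, 3)])
# part == [0, 0, 1, 1], cut == 1
```

Both halves have equal size by construction; only the number of crossing edges depends on how well the geometry reflects the mesh connectivity.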
Mapping Algorithms and Software Environment for Data Parallel PDE . . .
 JOURNAL OF DISTRIBUTED AND PARALLEL COMPUTING
, 1994
"... We consider computations associated with data parallel iterative solvers used for the numerical solution of Partial Differential Equations (PDEs). The mapping of such computations into load balanced tasks requiring minimum synchronization and communication is a difficult combinatorial optimization p ..."
Abstract

Cited by 35 (20 self)
We consider computations associated with data parallel iterative solvers used for the numerical solution of Partial Differential Equations (PDEs). The mapping of such computations into load balanced tasks requiring minimum synchronization and communication is a difficult combinatorial optimization problem. Its optimal solution is essential for the efficient parallel processing of PDE computations. Determining data mappings that optimize a number of criteria, like workload balance, synchronization and local communication, often involves the solution of an NP-complete problem. Although data mapping algorithms have been known for a few years, there is a lack of qualitative and quantitative comparisons based on the actual performance of the parallel computation. In this paper we present two new data mapping algorithms and evaluate them together with a large number of existing ones using the actual performance of data parallel iterative PDE solvers on the nCUBE II. Comparisons of the performance of data parallel iterative PDE solvers on medium and large scale problems demonstrate that some computationally inexpensive data block partitioning algorithms are as effective as the computationally expensive deterministic optimization algorithms. Also, these comparisons demonstrate that the existing approach to solving the data partitioning problem is inefficient for large scale problems. Finally, a software environment for the solution of the partitioning problem of data parallel iterative solvers is presented.
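As a hedged illustration of what an inexpensive block partitioning scheme looks like (a sketch under assumptions; `block_partition` is a hypothetical helper of this example, not one of the paper's algorithms):

```python
# Minimal 1-D block partitioning sketch: split n data items into p
# contiguous blocks whose sizes differ by at most one, so the workload
# is balanced without any combinatorial optimization.
def block_partition(n, p):
    base, extra = divmod(n, p)
    blocks, start = [], 0
    for i in range(p):
        size = base + (1 if i < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

blocks = block_partition(10, 3)
# block sizes: 4, 3, 3
```

Schemes of this kind cost O(p) to compute, which is why the paper can compare them favorably against far more expensive deterministic optimizers.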
Towards a tighter coupling of bottom-up and top-down sparse matrix ordering methods
 BIT
, 2001
"... Most stateoftheart ordering schemes for sparse matrices are a hybrid of a bottomup method such as minimum degree and a top down scheme such as George's nested dissection. In this paper we present an ordering algorithm that achieves a tighter coupling of bottomup and topdown methods. In our meth ..."
Abstract

Cited by 28 (0 self)
Most state-of-the-art ordering schemes for sparse matrices are a hybrid of a bottom-up method such as minimum degree and a top-down scheme such as George's nested dissection. In this paper we present an ordering algorithm that achieves a tighter coupling of bottom-up and top-down methods. In our methodology vertex separators are interpreted as the boundaries of the remaining elements in an unfinished bottom-up ordering. As a consequence, we use bottom-up techniques such as quotient graphs and special node selection strategies for the construction of vertex separators. Once all separators have been found, we use them as a skeleton for the computation of several bottom-up orderings. Experimental results show that the orderings obtained by our scheme are in general better than those obtained by other popular ordering codes.
A Cartesian Parallel Nested Dissection Algorithm
, 1994
"... This paper is concerned with the distri uted parallel computation of an ordering for a symmetric positive de nite sparse matrix. The purpose of the ordering is to limit ll and enhance concurrency in the su se uent computation of the Cholesky factori ation of the matrix. We use a geometric approach t ..."
Abstract

Cited by 24 (2 self)
This paper is concerned with the distributed parallel computation of an ordering for a symmetric positive definite sparse matrix. The purpose of the ordering is to limit fill and enhance concurrency in the subsequent computation of the Cholesky factorization of the matrix. We use a geometric approach to nested dissection based on a given Cartesian embedding of the graph of the matrix in Euclidean space. The resulting algorithm can be implemented efficiently on massively parallel, distributed memory computers. One unusual feature of the distributed algorithm is that its effectiveness does not depend strongly on data locality, which is critical in this context, since an appropriate partitioning of the problem is not known until after the ordering has been determined. The ordering algorithm is the first component in a suite of scalable parallel algorithms currently under development for solving large sparse linear systems on massively parallel computers.
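The nested dissection numbering itself can be sketched on the simplest possible case, a 1-D chain graph. This is a toy illustration only, not the paper's Cartesian parallel algorithm; `nested_dissection` and the chain assumption are this example's inventions.

```python
# Toy nested dissection on a chain graph 0-1-...-(n-1): the midpoint is a
# one-vertex separator; recurse on each half and number separator vertices
# last, so elimination proceeds from the leaves of the dissection tree up.
def nested_dissection(lo, hi):
    if hi - lo <= 0:
        return []
    if hi - lo == 1:
        return [lo]
    mid = (lo + hi) // 2
    return (nested_dissection(lo, mid)
            + nested_dissection(mid + 1, hi)
            + [mid])

order = nested_dissection(0, 7)
# order == [0, 2, 1, 4, 6, 5, 3]: the top separator, vertex 3, is eliminated last
```

Numbering separators last is what limits fill: when a separator vertex is eliminated, both halves it separated have already been factored independently.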
Geometric separators for finite-element meshes
 SIAM J. Sci. Comput
, 1998
"... Abstract. We propose a class of graphs that would occur naturally in finiteelement and finitedifference problems and we prove a bound on separators for this class of graphs. Graphs in this class are embedded in ddimensional space in a certain manner. For ddimensional graphs our separator bound is ..."
Abstract

Cited by 18 (0 self)
We propose a class of graphs that would occur naturally in finite-element and finite-difference problems and we prove a bound on separators for this class of graphs. Graphs in this class are embedded in d-dimensional space in a certain manner. For d-dimensional graphs our separator bound is O(n^((d−1)/d)), which is the best possible bound. We also propose a simple randomized algorithm to find this separator in O(n) time. This separator algorithm can be used to partition the mesh among processors of a parallel computer and can also be used for the nested dissection sparse elimination algorithm.
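The O(n^((d−1)/d)) bound can be made concrete for d = 2 with a toy example (this grid construction is an assumption of this illustration, not the paper's randomized algorithm): in a k-by-k grid graph with n = k² vertices, removing one middle column of k = √n vertices disconnects the left and right halves.

```python
# Toy illustration of the separator bound for d = 2: in a k-by-k grid,
# the middle column (k vertices out of n = k*k) separates the grid, since
# columns mid-1 and mid+1 are not adjacent once column mid is removed.
def grid_separator(k):
    mid = k // 2
    separator = [(mid, r) for r in range(k)]
    left = [(c, r) for c in range(mid) for r in range(k)]
    right = [(c, r) for c in range(mid + 1, k) for r in range(k)]
    return separator, left, right

sep, left, right = grid_separator(8)
# len(sep) == 8 == sqrt(64), while the grid has 64 vertices
```

For d = 3 the analogous middle plane has n^(2/3) vertices, matching the general bound.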
Sparse Matrix Ordering Methods for Interior Point Linear Programming
 Linear Programming, INFORMS Journal on Computing
, 1996
"... The main cost of solving a linear programming problem using an interior point method is usually the cost of solving a series of sparse, symmetric linear systems of equations, A\ThetaA T x = b. These systems are typically solved using a sparse direct method. The first step in such a method is a reo ..."
Abstract

Cited by 16 (2 self)
The main cost of solving a linear programming problem using an interior point method is usually the cost of solving a series of sparse, symmetric linear systems of equations, AΘAᵀx = b. These systems are typically solved using a sparse direct method. The first step in such a method is a reordering of the rows and columns of the matrix to reduce fill in the factor and/or reduce the required work. This paper evaluates several methods for performing fill-reducing ordering on a variety of large-scale linear programming problems. We find that a new method, based on the nested dissection heuristic, provides significantly better orderings than the most commonly used ordering method, minimum degree.
1 Introduction. An interior point method solves a linear programming problem by computing a sequence of direction vectors. At each iteration, the method takes a step in the computed direction, moving closer to the optimal solution. The details of the interior point method are not relevant ...
Geometric Spectral Partitioning
, 1995
"... We investigate a new method for partitioning a graph into two equalsized pieces with few connecting edges. We combine ideas from two recently suggested partitioning algorithms, spectral bisection (which uses an eigenvector of a matrix associated with the graph) and geometric bisection (which applie ..."
Abstract

Cited by 13 (1 self)
We investigate a new method for partitioning a graph into two equal-sized pieces with few connecting edges. We combine ideas from two recently suggested partitioning algorithms, spectral bisection (which uses an eigenvector of a matrix associated with the graph) and geometric bisection (which applies to graphs that are meshes in Euclidean space). The new method does not require geometric coordinates, and it produces partitions that are often better than either the spectral or geometric ones.
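The spectral side of this combination can be sketched as follows. This is a hedged, minimal illustration (the function name, the power-iteration stand-in for a real eigensolver, and the shift/deflation details are all assumptions of this example, not the paper's method): split the graph by the signs of the Fiedler vector, the eigenvector of the graph Laplacian for its smallest nonzero eigenvalue.

```python
import math

# Spectral bisection sketch: approximate the Fiedler vector of the graph
# Laplacian L by power iteration on the shifted matrix c*I - L, deflating
# the all-ones eigenvector each step, then partition by sign.
def spectral_bisect(n, edges, iters=500):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    c = 2 * max(deg)  # shift keeps all eigenvalues of c*I - L nonnegative
    x = [math.sin(i + 1.0) for i in range(n)]  # arbitrary non-constant start
    for _ in range(iters):
        # y = (c*I - L) x, where (L x)_u = deg[u]*x[u] - sum over neighbors v of x[v]
        y = [(c - deg[i]) * x[i] for i in range(n)]
        for u, v in edges:
            y[u] += x[v]
            y[v] += x[u]
        mean = sum(y) / n          # deflate the constant (all-ones) eigenvector
        y = [t - mean for t in y]
        norm = math.sqrt(sum(t * t for t in y)) or 1.0
        x = [t / norm for t in y]
    return [0 if t < 0 else 1 for t in x]

part = spectral_bisect(4, [(0, 1), (1, 2), (2, 3)])
# the 4-vertex chain splits in the middle: {0, 1} versus {2, 3}
```

A production code would use a Lanczos-type eigensolver instead of plain power iteration, but the sign-split step is the same.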
Performance Of Greedy Ordering Heuristics For Sparse Cholesky Factorization
, 1997
"... . Greedy algorithms for ordering sparse matrices for Cholesky factorization can be based on different metrics. Minimum degree, a popular and effective greedy ordering scheme, minimizes the number of nonzero entries in the rank1 update (degree) at each step of the factorization. Alternatively, minim ..."
Abstract

Cited by 13 (3 self)
Greedy algorithms for ordering sparse matrices for Cholesky factorization can be based on different metrics. Minimum degree, a popular and effective greedy ordering scheme, minimizes the number of nonzero entries in the rank-1 update (degree) at each step of the factorization. Alternatively, minimum deficiency minimizes the number of nonzero entries introduced (deficiency) at each step of the factorization. In this paper we develop two new heuristics: "modified minimum deficiency" (MMDF) and "modified multiple minimum degree" (MMMD). The former uses a metric similar to deficiency while the latter uses a degree-like metric. Our experiments reveal that on the average MMDF has 21% fewer operations to factor than minimum degree; MMMD has 15% fewer operations to factor than minimum degree. MMMD is no more expensive to compute than minimum degree while MMDF requires on the average 30% more time than minimum degree.
Key words. sparse matrix ordering, minimum degree, minimum deficiency, gre...
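The baseline both heuristics are measured against can be sketched directly. This is the plain minimum-degree idea only (not the MMDF/MMMD variants from the paper), and `minimum_degree_order` plus its set-based graph model are assumptions of this example:

```python
# Plain minimum-degree sketch: repeatedly eliminate a vertex of smallest
# current degree and connect its neighbours into a clique (the fill that
# Cholesky elimination of that vertex would create).
def minimum_degree_order(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order, remaining = [], set(range(n))
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x]))
        nbrs = adj.pop(v)
        for a in nbrs:
            adj[a] |= nbrs       # fill: neighbours of v become a clique
            adj[a] -= {a, v}     # no self-loops; drop the eliminated vertex
        remaining.remove(v)
        order.append(v)
    return order

# On a star graph the centre has the highest degree, so some leaf is
# always eliminated first.
order = minimum_degree_order(4, [(0, 1), (0, 2), (0, 3)])
```

Minimum deficiency would instead score each candidate by how many new nonzeros the clique step creates, which is costlier to evaluate but can yield sparser factors.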
Parallel Ordering Using Edge Contraction
 PARALLEL COMPUTING
, 1995
"... Computing a fillreducing ordering of a sparse matrix is a central problem in the solution of sparse linear systems using direct methods. In recent years, there has been significant research in developing a sparse direct solver suitable for messagepassing multiprocessors. However, computing the ord ..."
Abstract

Cited by 12 (1 self)
Computing a fill-reducing ordering of a sparse matrix is a central problem in the solution of sparse linear systems using direct methods. In recent years, there has been significant research in developing a sparse direct solver suitable for message-passing multiprocessors. However, computing the ordering step in parallel remains a challenge and there are very few methods available. This paper describes a new scheme called parallel contracted ordering, which is a combination of a new parallel nested dissection heuristic and any serial ordering method. The new nested dissection heuristic, called Shrink-Split ND (SSND), is based on parallel graph contraction. For a system with N unknowns, the complexity of SSND is O((N/P) log P) using P processors in a hypercube; the overall complexity is O((N/P) log N) when the serial ordering method chosen is graph exploration based nested dissection. We provide extensive empirical results on the quality of the ordering. We also report on the parallel...
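One round of the graph contraction underlying such coarsening schemes can be sketched as follows (a serial, assumption-level illustration; `contract_once` and its greedy matching are inventions of this example, not the SSND heuristic itself):

```python
# One round of graph contraction: greedily match edges so that no vertex
# is matched twice, merge each matched pair into one coarse vertex, and
# rebuild the edge set between coarse vertices.
def contract_once(n, edges):
    mate = list(range(n))
    matched = set()
    for u, v in edges:
        if u not in matched and v not in matched and u != v:
            mate[v] = u        # merge v into u
            matched |= {u, v}
    coarse = {}
    for v in range(n):
        root = mate[v]
        if root not in coarse:
            coarse[root] = len(coarse)
    cmap = [coarse[mate[v]] for v in range(n)]
    cedges = sorted({(min(cmap[u], cmap[v]), max(cmap[u], cmap[v]))
                     for u, v in edges if cmap[u] != cmap[v]})
    return len(coarse), cedges, cmap

# Contracting the chain 0-1-2-3 merges {0,1} and {2,3} into two coarse vertices.
cn, cedges, cmap = contract_once(4, [(0, 1), (1, 2), (2, 3)])
# cn == 2, cedges == [(0, 1)], cmap == [0, 0, 1, 1]
```

Repeating such rounds roughly halves the graph each time, which is what makes an O((N/P) log P)-style parallel cost plausible.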
Solving Large Nonsymmetric Sparse Linear Systems Using MCSPARSE
 PARALLEL COMPUTING
, 1996
"... ..."