Results 1 - 5 of 5
Design and Implementation of a Practical Parallel Delaunay Algorithm
, 1999
Abstract

Cited by 31 (4 self)
This paper describes the design and implementation of a practical parallel algorithm for Delaunay triangulation that works well on general distributions. Although there have been many theoretical parallel algorithms for the problem, and some implementations based on bucketing that work well for uniform distributions, there has been little work on implementations for general distributions. We use the well-known reduction of 2D Delaunay triangulation to finding the 3D convex hull of points on a paraboloid. Based on this reduction we developed a variant of the Edelsbrunner and Shi 3D convex hull algorithm, specialized for the case when the point set lies on a paraboloid. This simplification reduces the work required by the algorithm (number of operations) from O(n log^2 n) to O(n log n). The depth (parallel time) is O(log^3 n) on a CREW PRAM. The algorithm is simpler than previous O(n log n) work parallel algorithms, leading to smaller constants. Initial experiments using a variety of distributions showed that our parallel algorithm was within a factor of 2 in work of the best sequential algorithm. Based on these promising results, the algorithm was implemented using C and an MPI-based toolkit. Compared with previous work, the resulting implementation achieves significantly better speedups over good sequential code, does not assume a uniform distribution of points, and is widely portable due to its use of MPI as a communication mechanism. Results are presented for the IBM SP2, Cray T3D, SGI Power Challenge, and DEC AlphaCluster.
Practical Parallel Divide-and-Conquer Algorithms
, 1997
Abstract

Cited by 6 (2 self)
Nested data parallelism has been shown to be an important feature of parallel languages, allowing the concise expression of algorithms that operate on irregular data structures such as graphs and sparse matrices. However, previous nested data-parallel languages have relied on a vector PRAM implementation layer that cannot be efficiently mapped to MPPs with high interprocessor latency. This thesis shows that by restricting the problem set to that of data-parallel divide-and-conquer algorithms I can maintain the expressibility of full nested data-parallel languages while achieving good efficiency on current distributed-memory machines. Specifically, I define
Early Applications in the Message-Passing Interface (MPI)
 The International Journal of Supercomputer Applications
, 1994
Abstract

Cited by 1 (0 self)
We describe a number of early efforts to make use of the Message Passing Interface (MPI) standard in applications, based on an informal survey conducted in May-June, 1994. Rather than a definitive statement of all MPI development work, this paper addresses initial successes, progress, and impressions that application developers have with MPI, according to the responses received. We summarize the important aspects of each survey response, and draw conclusions about the spread of MPI into applications. An understanding of message-passing, and access to the MPI standard, are prerequisites for appreciating this paper. Some background material is provided to ease this requirement.
Early Applications in the Message-Passing Interface (MPI)
, 1995
Abstract
We describe a number of early efforts to make use of the Message Passing Interface (MPI) standard in applications, based on an informal survey conducted in May-June, 1994. Rather than a definitive statement of all MPI development work, this paper addresses initial successes, progress, and impressions that application developers have with MPI, according to the responses received. We summarize the important aspects of each survey response, and draw conclusions about the spread of MPI into applications. An understanding of message-passing, and access to the MPI standard, are prerequisites for appreciating this paper. Some background material is provided to ease this requirement.
PVM and MPI Are Completely Different
, 1997
Abstract
PVM and MPI are often compared. These comparisons usually start with the unspoken assumption that PVM and MPI represent different solutions to the same problem. In this paper we show that, in fact, the two systems often are solving different problems. In cases where the problems do match but the solutions chosen by PVM and MPI are different, we explain the reasons for the differences. Usually such differences can be traced to explicit differences in the goals of the two systems, their origins, or the relationship between their specifications and their implementations. For example, we show that the requirement for portability and performance across many platforms caused MPI to choose different approaches than PVM, which is able to exploit the similarities of network-connected systems. This paper expands on earlier discussions; among the additions are parallel I/O, the safety of contexts, and a subtle performance issue in multi-party communications.