Results 1–10 of 66
Communication-Efficient Parallel Sorting
, 1996
Abstract

Cited by 64 (2 self)
We study the problem of sorting n numbers on a p-processor bulk-synchronous parallel (BSP) computer, which is a parallel multicomputer that allows for general processor-to-processor communication rounds provided each processor sends and receives at most h items in any round. We provide parallel sorting methods that use internal computation time O((n log n)/p) and a number of communication rounds that is O(log n / log(h+1)) for h = Θ(n/p). The internal computation bound is optimal for any comparison-based sorting algorithm. Moreover, the number of communication rounds is bounded by a constant for the (practical) situations when p ≤ n^(1−1/c) for a constant c ≥ 1. In fact, we show that our bound on the number of communication rounds is asymptotically optimal for the full range of values for p, for we show that just computing the "or" of n bits distributed evenly to the first O(n/h) of an arbitrary number of processors in a BSP computer requires Ω(log n / log(h+1)) communication rounds.
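The superstep structure described above can be illustrated with a one-round sample sort, simulated sequentially in Python. This is a hedged sketch of the general idea only, not the paper's algorithm; the block partitioning, regular sampling, and bucket routing are illustrative choices.

```python
import random

def bsp_sample_sort(data, p):
    """Simulate one-round sample sort on p BSP 'processors' (illustrative sketch)."""
    n = len(data)
    # Superstep 1: distribute the input evenly; each processor sorts its block locally.
    blocks = [sorted(data[i * n // p:(i + 1) * n // p]) for i in range(p)]
    # Superstep 2: each processor contributes p regular samples; the samples are
    # combined and p-1 splitters are broadcast.
    samples = sorted(b[j * len(b) // p] for b in blocks if b for j in range(p))
    splitters = [samples[j * len(samples) // p] for j in range(1, p)]
    # Superstep 3: an h-relation routes each element to the processor owning its
    # splitter interval; each destination sorts (in practice, merges) what it receives.
    buckets = [[] for _ in range(p)]
    for b in blocks:
        for x in b:
            dest = sum(1 for s in splitters if s <= x)  # binary search in practice
            buckets[dest].append(x)
    return [x for bucket in buckets for x in sorted(bucket)]

random.seed(0)
data = [random.randrange(1000) for _ in range(200)]
assert bsp_sample_sort(data, 4) == sorted(data)
```

Each superstep touches O(n/p) items per processor, matching the h = Θ(n/p) regime of the abstract.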
Efficient parallel graph algorithms for coarse grained multicomputers and BSP (Extended Abstract)
 in Proc. 24th International Colloquium on Automata, Languages and Programming (ICALP'97
, 1997
Abstract

Cited by 59 (23 self)
In this paper, we present deterministic parallel algorithms for the coarse grained multicomputer (CGM) and bulk-synchronous parallel computer (BSP) models which solve the following well-known graph problems: (1) list ranking, (2) Euler tour construction, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition or open ear decomposition, (7) 2-edge connectivity and biconnectivity (testing and component computation), and (8) chordal graph recognition (finding a perfect elimination ordering). The algorithms for Problems 1–7 require O(log p) communication rounds and linear sequential work per round. Our results for Problems 1 and 2 hold for arbitrary ratios n/p, i.e., they are fully scalable, and for Problems 3–8 it is assumed that n/p ≥ p^ε for some ε > 0, which is true for all commercially available multiprocessors.
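As a sequential point of reference for problem (3), here is a minimal union-find sketch for connected components. The CGM algorithms in the paper instead merge partial results across processors over O(log p) communication rounds; this baseline does not attempt to reproduce that.

```python
def connected_components(n, edges):
    """Label the vertices 0..n-1 by connected component using union-find
    with path halving (sequential baseline, not the paper's CGM algorithm)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(v) for v in range(n)]

labels = connected_components(6, [(0, 1), (1, 2), (3, 4)])
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] and labels[5] not in (labels[0], labels[3])
```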
A Randomized Parallel 3D Convex Hull Algorithm For Coarse Grained Multicomputers
 In Proc. ACM Symp. on Parallel Algorithms and Architectures
, 1995
Abstract

Cited by 50 (11 self)
We present a randomized parallel algorithm for constructing the 3D convex hull on a generic p-processor coarse grained multicomputer with arbitrary interconnection network and n/p local memory per processor, where n/p ≥ p^(2+ε) (for some arbitrarily small ε > 0). For any given set of n points in 3-space, the algorithm computes the 3D convex hull, with high probability, in O((n log n)/p) local computation time and O(1) communication phases with at most O(n/p) data sent/received by each processor. That is, with high probability, the algorithm computes the 3D convex hull of an arbitrary point set in time O((n log n)/p + Γ_(n,p)), where Γ_(n,p) denotes the time complexity of one communication phase. The assumption n/p ≥ p^(2+ε) implies a coarse grained, limited parallelism model which is applicable to most commercially available multiprocessors. In the terminology of the BSP model, our algorithm requires, with high probability, O(1) supersteps and synchronization period L = Θ(...).
Efficient External Memory Algorithms by Simulating Coarse-Grained Parallel Algorithms
, 2003
Abstract

Cited by 41 (10 self)
External memory (EM) algorithms are designed for large-scale computational problems in which the size of the internal memory of the computer is only a small fraction of the problem size. Typical EM algorithms are specially crafted for the EM situation. In the past, several attempts have been made to relate the large body of work on parallel algorithms to EM, but with limited success. The combination of EM computing, on multiple disks, with multiprocessor parallelism has been posed as a challenge by the ACM Working Group on Storage I/O for Large-Scale Computing.
Parallel Dynamic Programming for Solving the String Editing Problem on a CGM/BSP
, 2002
Abstract

Cited by 19 (5 self)
In this paper we present a coarse-grained parallel algorithm for solving the string edit distance problem for a string A and all substrings of a string C. Our method is based on a novel CGM/BSP parallel dynamic programming technique for computing all highest scoring paths in a weighted grid graph. The algorithm requires log p rounds/supersteps and O((nm/p) log m) local computation, where p is the number of processors, p ≤ n. To our knowledge, this is the first efficient CGM/BSP algorithm for the alignment of all substrings of C with A. Furthermore, the CGM/BSP parallel dynamic programming technique presented is of interest in its own right and we expect it to lead to other parallel dynamic programming methods for the CGM/BSP model.
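The underlying sequential recurrence for string edit distance (the quantity being parallelized above) can be sketched as follows. This is the textbook dynamic program over the alignment grid, not the paper's CGM/BSP method.

```python
def edit_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic program over the alignment grid graph,
    kept to two rows of memory (sequential baseline)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))              # row 0: distance from the empty prefix of a
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,      # delete a[i-1]
                         cur[j - 1] + 1,   # insert b[j-1]
                         prev[j - 1] + cost)  # match / substitute
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```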
Scalable Algorithms For Bichromatic Line Segment Intersection Problems On Coarse Grained Multicomputers
, 1993
Abstract

Cited by 18 (0 self)
We present output-sensitive scalable parallel algorithms for bichromatic line segment intersection problems for the coarse grained multicomputer model. Under the assumption that n ≥ p², where n is the number of line segments and p the number of processors, we obtain an intersection counting algorithm with a time complexity of O((n log n log p)/p + T_s(n log p, p)), where T_s(m, p) is the time used to sort m items on a p-processor machine. The first term captures the time spent in sequential computation performed locally by each processor. The second term captures the interprocessor communication time. An additional O(k/p) time in sequential computation is spent on the reporting of the k intersections. As the sequential time complexity is O(n log n) for counting and an additional O(k) for reporting, we obtain a speedup of p/log p in the sequential part of the algorithm. The speedup in the communication part obviously depends on the underlying architecture. For example, for a hypercube it ranges between p/log² p and p/log p, depending on the ratio of n and p. As the reporting does not involve more interprocessor communication than the counting, the algorithm achieves a full speedup of p for k ∈ Ω(max(n log n log p, n log³ p)), even on a hypercube.
CGMgraph/CGMlib: Implementing and Testing CGM Graph Algorithms on PC Clusters
 International Journal of High Performance Computing Applications
, 2003
Abstract

Cited by 17 (3 self)
In this paper, we present CGMgraph, the first integrated library of parallel graph methods for PC clusters based on CGM algorithms. CGMgraph implements parallel methods for various graph problems. Our implementations of deterministic list ranking, Euler tour, connected components, spanning forest, and bipartite graph detection are, to our knowledge, the first efficient implementations for PC clusters. Our library also includes CGMlib, a library of basic CGM tools such as sorting, prefix sum, one-to-all broadcast, all-to-one gather, h-relation, all-to-all broadcast, array balancing, and CGM partitioning. Both libraries are available for download at http://cgm.dehne.net.
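One of the listed CGMlib primitives, prefix sum, can be sketched as a two-superstep computation. This is a hypothetical illustration of the CGM pattern simulated sequentially, not CGMlib's actual API.

```python
def cgm_prefix_sum(blocks):
    """Two-superstep CGM-style inclusive prefix sum sketch: each 'processor'
    holds one block; block totals are exchanged, then local scans are offset."""
    # Superstep 1: each processor computes the total of its local block.
    local_totals = [sum(b) for b in blocks]
    # Communication: a scan over the p totals gives each processor the
    # sum of all blocks to its left.
    offsets = [sum(local_totals[:i]) for i in range(len(blocks))]
    # Superstep 2: each processor performs a local inclusive scan plus its offset.
    out = []
    for block, offset in zip(blocks, offsets):
        running, row = offset, []
        for x in block:
            running += x
            row.append(running)
        out.append(row)
    return out

assert cgm_prefix_sum([[1, 2], [3, 4], [5]]) == [[1, 3], [6, 10], [15]]
```

Only p values cross processor boundaries, which is why prefix sum is a cheap building block in the CGM cost model.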
Practical Parallel Algorithms for Minimum Spanning Trees
 In Workshop on Advances in Parallel and Distributed Systems
, 1998
Abstract

Cited by 16 (0 self)
We study parallel algorithms for computing the minimum spanning tree of a weighted undirected graph G with n vertices and m edges. We consider an input graph G with m/n ≥ p, where p is the number of processors. For this case, we show that simple algorithms with data-independent communication patterns are efficient, both in theory and in practice. The algorithms are evaluated theoretically using Valiant's BSP model of parallel computation and empirically through implementation results.
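A sequential Borůvka sketch shows the component-merging round structure that parallel MST algorithms of this kind typically build on; it is an illustrative baseline, not the paper's implementation.

```python
def boruvka_mst(n, edges):
    """Borůvka's MST: repeatedly attach each component's cheapest outgoing edge.
    Each round is data-independent per component, which is what makes the
    scheme attractive for BSP-style parallelization (sequential sketch)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, num_components = [], n
    while num_components > 1:
        cheapest = {}
        for u, v, w in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][2]:
                    cheapest[r] = (u, v, w)
        if not cheapest:
            break  # graph is disconnected
        for u, v, w in set(cheapest.values()):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
                num_components -= 1
    return mst

tree = boruvka_mst(4, [(0, 1, 4), (1, 2, 1), (0, 2, 3), (2, 3, 2)])
assert len(tree) == 3 and sum(w for _, _, w in tree) == 6
```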
A Parallel Wavefront Algorithm for Efficient Biological Sequence Comparison
 In The 2003 International Conference on Computational Science and its Applications
, 2003
Abstract

Cited by 14 (4 self)
In this paper we present a parallel wavefront algorithm for computing an alignment between two strings A and C, with |A| = m and |C| = n. On a distributed memory parallel computer of p processors, each with O((m + n)/p) memory, the proposed algorithm requires O(p) communication rounds and O(mn/p) local computing time. The novelty of this algorithm is a compromise between the workload of each processor and the number of communication rounds required, expressed by a parameter called α. The proposed algorithm is expressed in terms of this parameter, which can be tuned to obtain the best overall parallel time in a given implementation. We show very promising experimental results obtained on a 64-node Beowulf machine. A characteristic of the wavefront communication requirement is that each processor communicates with few other processors, which makes the algorithm well suited as a potential application for grid computing.
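The wavefront idea rests on the fact that every cell of the alignment table on anti-diagonal i + j = d depends only on diagonals d−1 and d−2, so all cells of one diagonal can be computed concurrently. A sequential sketch of this traversal order, with illustrative scoring parameters (not the paper's):

```python
def wavefront_align(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score computed in anti-diagonal (wavefront) order:
    the inner loop over one diagonal has no internal dependencies, so it is
    the unit that a parallel wavefront algorithm distributes (sketch)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for diag in range(m + n + 1):          # sweep anti-diagonals i + j = diag
        for i in range(max(0, diag - n), min(m, diag) + 1):
            j = diag - i
            if i == 0:
                d[i][j] = j * gap
            elif j == 0:
                d[i][j] = i * gap
            else:
                s = match if a[i - 1] == b[j - 1] else mismatch
                d[i][j] = max(d[i - 1][j - 1] + s,
                              d[i - 1][j] + gap,
                              d[i][j - 1] + gap)
    return d[m][n]

assert wavefront_align("ABC", "AC") == 1   # A matched, B gapped, C matched
assert wavefront_align("AB", "AB") == 2
```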
Randomized Parallel List Ranking For Distributed Memory Multiprocessors
, 1996
Abstract

Cited by 12 (6 self)
We present a randomized parallel list ranking algorithm for distributed memory multiprocessors, using a BSP-like model. We first describe a simple version which requires, with high probability, log(3p) + log ln(n) = Õ(log p + log log n) communication rounds (h-relations with h = Õ(n/p)) and Õ(n/p) local computation. We then outline an improved version which requires, with high probability, only r ≤ (4k + 6) log(2/3 p) + 8 = Õ(k log p) communication rounds, where k = min{i ≥ 0 : ln^(i+1) n ≤ (2/3 p)^(2i+1)}. Note that k < ln*(n) is an extremely small number. For n ≤ 10^(10^100) and p ≥ 4, the value of k is at most 2. Hence, for a given number of processors p, the number of communication rounds required is, for all practical purposes, independent of n. For n ≤ 1,500,000 and 4 ≤ p ≤ 2048, the number of communication rounds in our algorithm is bounded, with high probability, by 78, but the actual number of communication rounds observed so far is 25 in the worst case.
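The core pointer-jumping step behind parallel list ranking can be sketched sequentially: each synchronized jump round would correspond to one communication superstep in a BSP-style implementation. This is an illustrative sketch of the basic technique, not the randomized algorithm of the paper.

```python
def list_rank(succ):
    """Pointer-jumping list ranking: succ[v] is v's successor, and the terminal
    node t has succ[t] == t. Returns rank[v] = distance from v to the terminal.
    The invariant is rank[v] == distance(v, nxt[v]); each synchronized round
    doubles the jump length, so O(log n) rounds suffice."""
    n = len(succ)
    rank = [0 if succ[v] == v else 1 for v in range(n)]
    nxt = list(succ)
    changed = True
    while changed:
        changed = False
        new_rank, new_nxt = list(rank), list(nxt)
        for v in range(n):
            if nxt[v] != nxt[nxt[v]]:          # nxt[v] is not yet the terminal
                new_rank[v] = rank[v] + rank[nxt[v]]
                new_nxt[v] = nxt[nxt[v]]
                changed = True
        rank, nxt = new_rank, new_nxt          # synchronized update, one "superstep"
    return rank

# List 3 -> 1 -> 4 -> 0 -> 2, where 2 is the terminal (succ[2] == 2).
assert list_rank([2, 4, 2, 1, 0]) == [1, 3, 0, 4, 2]
```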