Results 11-20 of 55
An Empirical Analysis of Algorithms for Constructing a Minimum Spanning Tree
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1991
"... We compare algorithms for the construction of a minimum spanning tree through largescale experimentation on randomly generated graphs of different structures and different densities. In order to extrapolate with confidence, we use graphs with up to 130,000 nodes (sparse) or 750,000 edges (dense). A ..."
Abstract

Cited by 20 (1 self)
We compare algorithms for the construction of a minimum spanning tree through large-scale experimentation on randomly generated graphs of different structures and different densities. In order to extrapolate with confidence, we use graphs with up to 130,000 nodes (sparse) or 750,000 edges (dense). Algorithms included in our experiments are Prim's algorithm (implemented with a variety of priority queues), Kruskal's algorithm (using presorting or demand sorting), Cheriton and Tarjan's algorithm, and Fredman and Tarjan's algorithm. We also ran a large variety of tests to investigate low-level implementation decisions for the data structures, as well as to enable us to eliminate the effect of compilers and architectures. Within the range of sizes used, Prim's algorithm, using pairing heaps or sometimes binary heaps, is clearly preferable. While versions of Prim's algorithm using efficient implementations of Fibonacci heaps or rank-relaxed heaps often approach and (on the densest graphs) so...
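The study's overall winner, Prim's algorithm driven by a heap, can be sketched in a few lines. This is an illustrative Python version with a binary heap (one of the priority queues the experiments compare), not the authors' implementation; the graph and function names are mine.

```python
import heapq

def prim_mst(adj, source=0):
    """Prim's algorithm with a binary heap. `adj` maps a vertex to a
    list of (neighbor, weight) pairs; returns the total MST weight,
    assuming the graph is connected and undirected."""
    visited = set()
    total = 0
    heap = [(0, source)]              # (weight of edge into the tree, vertex)
    while heap:
        w, u = heapq.heappop(heap)
        if u in visited:
            continue                  # stale entry; vertex already in tree
        visited.add(u)
        total += w
        for v, wt in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (wt, v))
    return total

# Small undirected example: a square with one diagonal.
graph = {0: [(1, 1), (2, 4)],
         1: [(0, 1), (2, 2), (3, 5)],
         2: [(0, 4), (1, 2), (3, 1)],
         3: [(1, 5), (2, 1)]}
print(prim_mst(graph))  # → 4 (edges 0-1, 1-2, 2-3)
```

A binary heap leaves stale entries rather than performing a true decrease-key; pairing, Fibonacci, or relaxed heaps avoid that at the cost of heavier node structures, which is what the experiments quantify.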
A Simple Parallel Algorithm for the Single-Source Shortest Path Problem on Planar Digraphs
 LNCS
, 1996
"... We present a simple parallel algorithm for the singlesource shortest path problem in planar digraphs with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in O((n 2ffl +n 1\Gammaffl ) log n) time, performing O(n 1+ffl log n) work for any 0 ! f ..."
Abstract

Cited by 17 (3 self)
We present a simple parallel algorithm for the single-source shortest path problem in planar digraphs with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in O((n^{2ε} + n^{1−ε}) log n) time, performing O(n^{1+ε} log n) work for any 0 < ε < 1/2. The strength of the algorithm is its simplicity, making it easy to implement, and presumably quite efficient in practice. The algorithm improves upon the work of all previous algorithms. The work can be further reduced to O(n^{1+ε}) by plugging in a less practical, sequential planar shortest path algorithm. Our algorithm is based on a region decomposition of the input graph, and uses a well-known parallel implementation of Dijkstra's algorithm.
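For reference, the sequential Dijkstra's algorithm that the parallel implementation adapts can be sketched as follows. This is the textbook binary-heap formulation, not the paper's EREW PRAM version; names are illustrative.

```python
import heapq

def dijkstra(adj, source):
    """Sequential Dijkstra with a binary heap. `adj` maps a vertex to
    a list of (neighbor, nonnegative weight) pairs; returns shortest
    distances from `source` to every reachable vertex."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0)], 2: []}
print(dijkstra(g, 0))  # → {0: 0.0, 1: 2.0, 2: 3.0}
```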
A Parallel Priority Queue with Constant Time Operations
 JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
, 1998
"... We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease key for a sequence of elements ordered according to key, deletion of the minimum key element, as well as deletion ..."
Abstract

Cited by 15 (1 self)
We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease-key for a sequence of elements ordered according to key, deletion of the minimum-key element, as well as deletion of an arbitrary element. Our data structure is the first to support multi-insertion and multi-decrease-key in constant time. The priority queue can be implemented on the EREW PRAM, and can perform any sequence of n operations in O(n) time and O(m log n) work, m being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in O(n) time and O(m log n) work on a CREW PRAM on graphs with n vertices and m edges. This is a logarithmic-factor improvement in the running time compared with previous approaches.
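The operation set above can be made concrete with a sequential stand-in. The class and method names below are mine, not the paper's, and a binary heap with lazy updates obviously has none of the constant-time parallel bounds; the sketch only shows the multi-insert / multi-decrease-key / delete-min interface.

```python
import heapq

class MultiOpPQ:
    """Toy sequential model of a priority queue with batched
    operations: insert a key-sorted batch, decrease keys for a
    key-sorted batch, and delete the minimum."""
    def __init__(self):
        self.heap = []   # (key, item) entries, possibly stale
        self.key = {}    # current best key per live item

    def multi_insert(self, pairs):
        for k, item in pairs:            # pairs assumed sorted by key
            self.key[item] = k
            heapq.heappush(self.heap, (k, item))

    def multi_decrease_key(self, pairs):
        for k, item in pairs:            # pairs assumed sorted by key
            if k < self.key.get(item, float('inf')):
                self.key[item] = k
                heapq.heappush(self.heap, (k, item))  # old entry left stale

    def delete_min(self):
        while self.heap:
            k, item = heapq.heappop(self.heap)
            if self.key.get(item) == k:  # skip stale entries
                del self.key[item]
                return k, item
        raise IndexError("empty queue")

pq = MultiOpPQ()
pq.multi_insert([(3, 'a'), (7, 'b')])
pq.multi_decrease_key([(1, 'b')])
print(pq.delete_min())  # → (1, 'b')
```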
Two-tier relaxed heaps
 Proceedings of the 17th International Symposium on Algorithms and Computation, Lecture Notes in Computer Science 4288, Springer-Verlag
, 2006
"... Abstract. We introduce an adaptation of runrelaxed heaps which provides efficient heap operations with respect to the number of element comparisons performed. Our data structure guarantees the worstcase cost of O(1) for findmin, insert, and decrease; and the worstcase cost of O(lg n) with at mos ..."
Abstract

Cited by 11 (8 self)
We introduce an adaptation of run-relaxed heaps which provides efficient heap operations with respect to the number of element comparisons performed. Our data structure guarantees the worst-case cost of O(1) for find-min, insert, and decrease; and the worst-case cost of O(lg n) with at most lg n + 3 lg lg n + O(1) element comparisons for delete, improving the bound of 3 lg n + O(1) on the number of element comparisons known for run-relaxed heaps. Here, n denotes the number of elements stored prior to the operation in question, and lg n equals max{1, log₂ n}.
Randomized Priority Queues for Fast Parallel Access
 Journal of Parallel and Distributed Computing
, 1997
"... Applications like parallel search or discrete event simulation often assign priority or importance to pieces of work. An effective way to exploit this for parallelization is to use a priority queue data structure for scheduling the work; but a bottleneck free implementation of parallel priority ..."
Abstract

Cited by 11 (1 self)
Applications like parallel search or discrete event simulation often assign priority or importance to pieces of work. An effective way to exploit this for parallelization is to use a priority queue data structure for scheduling the work; but a bottleneck-free implementation of parallel priority queue access by many processors is required to make this approach scalable. We present simple and portable randomized algorithms for parallel priority queues on distributed memory machines with fully distributed storage. Accessing O(n) out of m elements on an n-processor network with diameter d requires amortized time O with high probability for many network types. On logarithmic-diameter networks, the algorithms are as fast as the best previously known EREW PRAM methods. Implementations demonstrate that the approach is already useful for medium-scale parallelism.
An experimental study of a parallel shortest path algorithm for solving large-scale graph instances
 Ninth Workshop on Algorithm Engineering and Experiments (ALENEX 2007)
, 2007
"... We present an experimental study of the single source shortest path problem with nonnegative edge weights (NSSP) on largescale graphs using the $\Delta$stepping parallel algorithm. We report performance results on the Cray MTA2, a multithreaded parallel computer. The MTA2 is a highend shared m ..."
Abstract

Cited by 11 (3 self)
We present an experimental study of the single-source shortest path problem with nonnegative edge weights (NSSP) on large-scale graphs using the $\Delta$-stepping parallel algorithm. We report performance results on the Cray MTA-2, a multithreaded parallel computer. The MTA-2 is a high-end shared memory system offering two unique features that aid the efficient parallel implementation of irregular algorithms: the ability to exploit fine-grained parallelism, and low-overhead synchronization primitives. Our implementation exhibits remarkable parallel speedup when compared with competitive sequential algorithms, for low-diameter sparse graphs. For instance, $\Delta$-stepping on a directed scale-free graph of 100 million vertices and 1 billion edges takes less than ten seconds on 40 processors of the MTA-2, with a relative speedup of close to 30. To our knowledge, these are the first performance results of a shortest path problem on realistic graph instances on the order of billions of vertices and edges.
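The core idea of $\Delta$-stepping is to bucket tentative distances in intervals of width $\Delta$, relax "light" edges (weight ≤ $\Delta$) repeatedly within a bucket, and relax "heavy" edges once the bucket settles. A sequential sketch (not the multithreaded MTA-2 implementation; the graph and names are illustrative):

```python
from collections import defaultdict

def delta_stepping(adj, source, delta):
    """Sequential sketch of Delta-stepping SSSP. `adj` maps a vertex
    to (neighbor, nonnegative weight) pairs."""
    INF = float('inf')
    dist = defaultdict(lambda: INF)
    buckets = {}                          # bucket index -> set of vertices

    def relax(v, d):
        if d < dist[v]:
            if dist[v] < INF:             # move v out of its old bucket
                old = int(dist[v] // delta)
                b = buckets.get(old)
                if b:
                    b.discard(v)
                    if not b:
                        del buckets[old]
            dist[v] = d
            buckets.setdefault(int(d // delta), set()).add(v)

    relax(source, 0.0)
    i = 0
    while buckets:
        while i not in buckets:
            i += 1
        settled = set()
        while buckets.get(i):             # phase: relax light edges
            frontier = buckets.pop(i)
            settled |= frontier
            for u in frontier:
                for v, w in adj.get(u, []):
                    if w <= delta:
                        relax(v, dist[u] + w)
        for u in settled:                 # then heavy edges, once per vertex
            for v, w in adj.get(u, []):
                if w > delta:
                    relax(v, dist[u] + w)
        i += 1
    return dict(dist)

gd = {0: [(1, 1.0), (2, 6.0)], 1: [(2, 1.0)], 2: []}
print(delta_stepping(gd, 0, 2.0))  # → {0: 0.0, 1: 1.0, 2: 2.0}
```

Each light-edge phase exposes a whole frontier at once, which is where the MTA-2's fine-grained parallelism pays off.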
Fast Meldable Priority Queues
, 1995
"... We present priority queues that support the operations MakeQueue, FindMin, Insert and Meld in worst case time O(1) and Delete and DeleteMin in worst case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations wh ..."
Abstract

Cited by 11 (2 self)
We present priority queues that support the operations MakeQueue, FindMin, Insert and Meld in worst case time O(1) and Delete and DeleteMin in worst case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations where Meld takes worst case time o(n).
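The paper's worst-case O(1) Meld requires a carefully engineered structure. As a simple contrast, a skew heap melds in O(log n) amortized time with almost no machinery; this sketch is mine, not the paper's construction.

```python
def meld(a, b):
    """Skew-heap meld. A node is a [key, left, right] list;
    None is the empty heap."""
    if a is None:
        return b
    if b is None:
        return a
    if b[0] < a[0]:
        a, b = b, a
    # Meld into the right subtree, then swap the children.
    a[1], a[2] = meld(a[2], b), a[1]
    return a

def insert(h, key):
    return meld(h, [key, None, None])

def delete_min(h):
    """Returns (minimum key, remaining heap)."""
    return h[0], meld(h[1], h[2])

h = None
for k in [5, 1, 3]:
    h = insert(h, k)
mn, h = delete_min(h)
print(mn)  # → 1
```

Insert and DeleteMin reduce to Meld, which is exactly the dependency structure the abstract's optimality claim (Meld in o(n) worst case) is about.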
KD Trees Are Better when Cut on the Longest Side
 In LNCS 1879, ESA 2000
, 2000
"... Abstract. We show that a popular variant of the well known kd tree data structure satisfies an important packing lemma. This variant is a binary spatial partitioning tree T defined on a set of n points in IR d, for fixed d ≥ 1, using the simple rule of splitting each node’s hyperrectangular region ..."
Abstract

Cited by 10 (2 self)
We show that a popular variant of the well-known kd-tree data structure satisfies an important packing lemma. This variant is a binary spatial partitioning tree T defined on a set of n points in R^d, for fixed d ≥ 1, using the simple rule of splitting each node's hyperrectangular region with a hyperplane that cuts the longest side. An interesting consequence of the packing lemma is that standard algorithms for performing approximate nearest-neighbor searching or range searching queries visit at most O(log^{d−1} n) nodes of such a tree T in the worst case. Traditionally, many variants of kd-trees have been empirically shown to exhibit polylogarithmic performance, and under certain restrictions in the data distribution some theoretical expected-case results have been proven. This result, however, is the first one proving a worst-case polylogarithmic time bound for approximate geometric queries using the simple kd-tree data structure.
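The splitting rule is easy to sketch: each node cuts its region perpendicular to its longest side. The version below uses the midpoint of that side as the cut (a simplification of mine; the paper's exact choice of cut point may differ), and all names are illustrative.

```python
def build_kd(points, region=None):
    """Builds a longest-side-cut kd-tree. Returns nested
    (axis, cut, left, right) tuples with point lists at the leaves."""
    if region is None:  # bounding box of the input points
        dims = len(points[0])
        region = [(min(p[d] for p in points), max(p[d] for p in points))
                  for d in range(dims)]
    if len(points) <= 1:
        return points
    # The longest side of the current region decides the cutting axis.
    axis = max(range(len(region)), key=lambda d: region[d][1] - region[d][0])
    cut = (region[axis][0] + region[axis][1]) / 2
    left_pts = [p for p in points if p[axis] <= cut]
    right_pts = [p for p in points if p[axis] > cut]
    if not left_pts or not right_pts:
        return points                     # degenerate split; stop here
    lo, hi = region[axis]
    lregion = region[:axis] + [(lo, cut)] + region[axis + 1:]
    rregion = region[:axis] + [(cut, hi)] + region[axis + 1:]
    return (axis, cut,
            build_kd(left_pts, lregion),
            build_kd(right_pts, rregion))

pts = [(0.0, 0.0), (9.0, 1.0), (1.0, 2.0), (8.0, 0.5)]
tree = build_kd(pts)
print(tree[0], tree[1])  # → 0 4.5  (splits on x, the longest side)
```

Because every split halves the longest side, cell aspect ratios stay bounded, which is what drives the packing lemma.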
Parallel Shortest Path Algorithms for Solving . . .
, 2006
"... We present an experimental study of the single source shortest path problem with nonnegative edge weights (NSSP) on largescale graphs using the ∆stepping parallel algorithm. We report performance results on the Cray MTA2, a multithreaded parallel computer. The MTA2 is a highend shared memory s ..."
Abstract

Cited by 9 (3 self)
We present an experimental study of the single-source shortest path problem with nonnegative edge weights (NSSP) on large-scale graphs using the ∆-stepping parallel algorithm. We report performance results on the Cray MTA-2, a multithreaded parallel computer. The MTA-2 is a high-end shared memory system offering two unique features that aid the efficient parallel implementation of irregular algorithms: the ability to exploit fine-grained parallelism, and low-overhead synchronization primitives. Our implementation exhibits remarkable parallel speedup when compared with competitive sequential algorithms, for low-diameter sparse graphs. For instance, ∆-stepping on a directed scale-free graph of 100 million vertices and 1 billion edges takes less than ten seconds on 40 processors of the MTA-2, with a relative speedup of close to 30. To our knowledge, these are the first performance results of a shortest path problem on realistic graph instances on the order of billions of vertices and edges.
A General Technique for Implementation of Efficient Priority Queues
 In Proc. 3rd Israel Symposium on Theory of Computing and Systems
, 1994
"... This paper presents a very general technique for the implementation of mergeable priority queues. The amortized running time is O(log n) for DeleteMin and Delete, and \Theta(1) for all other standard operations. In particular, the operation DecreaseKey runs in amortized constant time. The worstca ..."
Abstract

Cited by 9 (0 self)
This paper presents a very general technique for the implementation of mergeable priority queues. The amortized running time is O(log n) for DeleteMin and Delete, and Θ(1) for all other standard operations. In particular, the operation DecreaseKey runs in amortized constant time. The worst-case running time is O(log n) or better for all operations. Several examples of mergeable priority queues are given. The examples include priority queues that are particularly well suited for external storage. The space requirement is only two pointers and one information field per item. The technique is also used to implement mergeable, double-ended priority queues. For these queues, the worst-case time bound for insertion is Θ(1), which improves the best previously known bound. For the other operations, the time bounds are the same as the best previously known bounds, worst-case as well as amortized.
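To see why constant-time DecreaseKey matters in practice, note that a plain binary heap can fake it lazily: push a fresh entry and discard stale ones at pop time. This common workaround (mine, not the paper's technique) keeps DecreaseKey cheap at the cost of heap growth:

```python
import heapq

class LazyPQ:
    """Binary-heap priority queue with decrease-key done lazily:
    a new entry is pushed and outdated entries are skipped at
    delete-min time."""
    def __init__(self):
        self.heap = []   # (key, item), possibly stale
        self.best = {}   # current key per live item

    def insert(self, item, key):
        self.best[item] = key
        heapq.heappush(self.heap, (key, item))

    def decrease_key(self, item, key):
        if key < self.best[item]:
            self.best[item] = key
            heapq.heappush(self.heap, (key, item))  # old entry stays, stale

    def delete_min(self):
        while self.heap:
            key, item = heapq.heappop(self.heap)
            if self.best.get(item) == key:   # ignore stale entries
                del self.best[item]
                return item, key
        raise IndexError("empty queue")

pq = LazyPQ()
pq.insert('x', 10)
pq.insert('y', 4)
pq.decrease_key('x', 2)
print(pq.delete_min())  # → ('x', 2)
```

The structures in the paper achieve the same amortized Θ(1) DecreaseKey without leaving stale entries behind, which is what makes them suitable for external storage.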