Results 1 - 10 of 24
An optimal minimum spanning tree algorithm
J. ACM, 2000
"... Abstract. We establish that the algorithmic complexity of the minimum spanning tree problem is equal to its decision-tree complexity. Specifically, we present a deterministic algorithm to find a minimum spanning tree of a graph with n vertices and m edges that runs in time O(T ∗ (m, n)) where T ∗ is ..."
Abstract
-
Cited by 58 (11 self)
- Add to MetaCart
We establish that the algorithmic complexity of the minimum spanning tree problem is equal to its decision-tree complexity. Specifically, we present a deterministic algorithm to find a minimum spanning tree of a graph with n vertices and m edges that runs in time O(T*(m, n)), where T* is the minimum number of edge-weight comparisons needed to determine the solution. The algorithm is quite simple and can be implemented on a pointer machine. Although our time bound is optimal, the exact function describing it is not known at present. The current best bounds known for T* are T*(m, n) = Ω(m) and T*(m, n) = O(m · α(m, n)), where α is a certain natural inverse of Ackermann's function. Even under the assumption that T* is superlinear, we show that if the input graph is selected from G(n, m), our algorithm runs in linear time with high probability, regardless of n, m, or the permutation of edge weights. The analysis uses a new martingale for G(n, m) similar to the edge-exposure martingale for G(n, p).
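As a rough illustration of what "edge-weight comparisons" counts (this is not the paper's optimal algorithm, and its comparison count is generally far above T*(m, n)), the sketch below runs a textbook Kruskal MST and tallies every comparison between edge weights; all names are illustrative.

```python
import functools

def kruskal_with_comparison_count(n, edges):
    """Textbook Kruskal's MST that counts edge-weight comparisons.

    n     -- number of vertices, labeled 0 .. n-1
    edges -- list of (u, v, weight) tuples
    Returns (mst_edges, number_of_weight_comparisons). Illustrative only;
    the optimal algorithm from the paper makes O(T*(m, n)) comparisons.
    """
    comparisons = 0

    def cmp_weight(e1, e2):
        nonlocal comparisons
        comparisons += 1                      # one edge-weight comparison
        return (e1[2] > e2[2]) - (e1[2] < e2[2])

    parent = list(range(n))

    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for u, v, w in sorted(edges, key=functools.cmp_to_key(cmp_weight)):
        ru, rv = find(u), find(v)
        if ru != rv:                          # edge joins two components
            parent[ru] = rv
            mst.append((u, v, w))
    return mst, comparisons

# Example: mst, c = kruskal_with_comparison_count(
#     4, [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 3, 5), (0, 2, 4)])
```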
A Lightweight Infrastructure for Graph Analytics
"... Several domain-specific languages (DSLs) for parallel graph analytics have been proposed recently. In this paper, we argue that existing DSLs can be implemented on top of a general-purpose infrastructure that (i) supports very fine-grain tasks, (ii) implements autonomous, speculative execution of th ..."
Abstract
-
Cited by 30 (2 self)
- Add to MetaCart
(Show Context)
Several domain-specific languages (DSLs) for parallel graph analytics have been proposed recently. In this paper, we argue that existing DSLs can be implemented on top of a general-purpose infrastructure that (i) supports very fine-grain tasks, (ii) implements autonomous, speculative execution of these tasks, and (iii) allows application-specific control of task scheduling policies. To support this claim, we describe such an implementation called the Galois system. We demonstrate the capabilities of this infrastructure in three ways. First, we implement more sophisticated algorithms for some of the graph analytics problems tackled by previous DSLs and show that end-to-end performance can be improved by orders of magnitude even on power-law graphs, thanks to the better algorithms facilitated by a more general programming model. Second, we show that, even when an algorithm can be expressed in existing DSLs, the implementation of that algorithm in the more general system can be orders of magnitude faster when the input graphs are road networks and similar graphs with high diameter, thanks to more sophisticated scheduling. Third, we implement the APIs of three existing graph DSLs on top of the common infrastructure in a few hundred lines of code and show that even for power-law graphs, the performance of the resulting implementations often exceeds that of the original DSL systems, thanks to the lightweight infrastructure.
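A minimal sketch, not the Galois API, of the idea the abstract describes: the same fine-grain operator is driven either by a FIFO worklist or by an application-supplied priority, so the scheduling policy is a pluggable choice. All names here are hypothetical.

```python
import heapq
from collections import deque

def run_fifo(initial_tasks, operator):
    """Drive fine-grain tasks from a FIFO worklist (illustrative only)."""
    worklist = deque(initial_tasks)
    while worklist:
        task = worklist.popleft()
        for new_task in operator(task):       # operator may create new tasks
            worklist.append(new_task)

def run_priority(initial_tasks, operator, priority):
    """Same operator, but tasks are ordered by an application-chosen priority."""
    heap = [(priority(t), i, t) for i, t in enumerate(initial_tasks)]
    heapq.heapify(heap)
    counter = len(heap)                       # tiebreaker for equal priorities
    while heap:
        _, _, task = heapq.heappop(heap)
        for new_task in operator(task):
            heapq.heappush(heap, (priority(new_task), counter, new_task))
            counter += 1
```

On high-diameter inputs such as road networks, which ordering the application picks can dominate end-to-end running time, which is the kind of scheduling control the abstract argues for.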
Approximate Sorting
2001
"... We show that any comparison based, randomized algorithm to approximate any given ranking of n items within expected Spearman’s footrule distance n 2 /ν(n) needs at least n (min{log ν(n), log n} − 6) comparisons in the worst case. This bound is tight up to a constant factor since there exists a dete ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
(Show Context)
We show that any comparison-based, randomized algorithm to approximate any given ranking of n items within expected Spearman's footrule distance n²/ν(n) needs at least n(min{log ν(n), log n} − 6) comparisons in the worst case. This bound is tight up to a constant factor, since there exists a deterministic algorithm showing that 6n log ν(n) comparisons are always sufficient.
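For reference, Spearman's footrule distance between a produced order σ and the target ranking τ of n items is the standard

```latex
% Spearman's footrule distance between permutations \sigma and \tau of n items:
% the total displacement of items from their target positions.
F(\sigma, \tau) \;=\; \sum_{i=1}^{n} \bigl| \sigma(i) - \tau(i) \bigr|
```

so an expected footrule distance of n²/ν(n) means each item lands, on average, about n/ν(n) positions away from where the given ranking places it.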
Minimizing Randomness in Minimum Spanning Tree, Parallel Connectivity, and Set Maxima Algorithms
In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'02), 2001
"... There are several fundamental problems whose deterministic complexity remains unresolved, but for which there exist randomized algorithms whose complexity is equal to known lower bounds. Among such problems are the minimum spanning tree problem, the set maxima problem, the problem of computing conne ..."
Abstract
-
Cited by 11 (7 self)
- Add to MetaCart
There are several fundamental problems whose deterministic complexity remains unresolved, but for which there exist randomized algorithms whose complexity is equal to known lower bounds. Among such problems are the minimum spanning tree problem, the set maxima problem, the problem of computing connected components and (minimum) spanning trees in parallel, and the problem of performing sensitivity analysis on shortest path trees and minimum spanning trees. However, while each of these problems has a randomized algorithm whose performance meets a known lower bound, all of these randomized algorithms use a number of random bits which is linear in the number of operations they perform. We address the issue of reducing the number of random bits used in these randomized algorithms. For each of the problems listed above, we present randomized algorithms that have optimal performance but use only a polylogarithmic number of random bits; for some of the problems our optimal algorithms use only log n random bits. Our results represent an exponential savings in the amount of randomness used to achieve the same optimal performance as in the earlier algorithms. Our techniques are general and could likely be applied to other problems.
Soft kinetic data structures
In SODA '01: Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms
"... We introduce the framework of soft kinetic data structures (SKDS). A soft kinetic data structure is an approximate data structure that can be used to answer queries on a set of moving objects with unpredictable motion. We analyze the quality of a soft kinetic data structure by giving a competitive a ..."
Abstract
-
Cited by 8 (0 self)
- Add to MetaCart
(Show Context)
We introduce the framework of soft kinetic data structures (SKDS). A soft kinetic data structure is an approximate data structure that can be used to answer queries on a set of moving objects with unpredictable motion. We analyze the quality of a soft kinetic data structure by giving a competitive analysis with respect to the dynamics of the system. We illustrate our approach by presenting soft kinetic data structures for maintaining classical data structures: sorted arrays, balanced search trees, heaps, and range trees. We also describe soft kinetic data structures for maintaining Euclidean minimum spanning trees.
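As a loose illustration (not the paper's construction) of keeping an approximate ordered structure for moving objects, the sketch below stores 1-D points with linear motion, answers queries from a possibly stale order, and repairs lazily with insertion sort, which is cheap when the order is only mildly perturbed. All names are hypothetical.

```python
class LazySortedPoints:
    """Approximate sorted order of 1-D points x(t) = x0 + v * t.

    Queries read the last computed order, which may be slightly out of date;
    repair() restores exact order at the query time. Illustrative sketch only.
    """

    def __init__(self, points):
        self.points = list(points)                 # list of (x0, v) pairs
        self.order = list(range(len(points)))      # indices from the last repair

    def position(self, i, t):
        x0, v = self.points[i]
        return x0 + v * t

    def approx_min(self, t):
        """Index believed to be leftmost at time t (may be stale)."""
        return self.order[0]

    def repair(self, t):
        """Re-sort at time t; insertion sort exploits near-sortedness."""
        key = lambda i: self.position(i, t)
        order = self.order
        for j in range(1, len(order)):
            item, k = order[j], j - 1
            while k >= 0 and key(order[k]) > key(item):
                order[k + 1] = order[k]
                k -= 1
            order[k + 1] = item
```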
Sensitivity Analysis of Minimum Spanning Trees in Sub-Inverse-Ackermann Time
2015
"... We present a deterministic algorithm for computing the sensitivity of a minimum spanning tree (MST) or shortest path tree in O(m logα(m,n)) time, where α is the inverse-Ackermann function. This improves upon a long standing bound of O(mα(m,n)) established by Tarjan. Our algo-rithms are based on an e ..."
Abstract
-
Cited by 8 (4 self)
- Add to MetaCart
We present a deterministic algorithm for computing the sensitivity of a minimum spanning tree (MST) or shortest path tree in O(m log α(m, n)) time, where α is the inverse-Ackermann function. This improves upon a long-standing bound of O(m α(m, n)) established by Tarjan. Our algorithms are based on an efficient split-findmin data structure, which maintains a collection of sequences of weighted elements that may be split into smaller subsequences. As far as we are aware, our split-findmin algorithm is the first with superlinear but sub-inverse-Ackermann complexity. We also give a reduction from MST sensitivity to the MST problem itself. Together with the randomized linear-time MST algorithm of Karger, Klein, and Tarjan, this gives another randomized linear-time MST sensitivity algorithm.
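To pin down the split-findmin interface the abstract refers to, here is a deliberately naive sketch (not the sub-inverse-Ackermann structure from the paper): a collection of sequences of weighted elements supporting decrease-key, find-min over the sequence containing a given element, and splitting a sequence in two.

```python
class NaiveSplitFindmin:
    """Naive split-findmin: every operation scans the element's sequence.

    The paper's structure supports the same interface far more efficiently;
    this version exists only to make the semantics concrete.
    """

    def __init__(self, weights):
        n = len(weights)
        self.weight = list(weights)
        self.seq_id = [0] * n                   # sequence containing each element
        self.members = {0: list(range(n))}      # sequence id -> elements, in order
        self.next_id = 1

    def decreasekey(self, x, w):
        self.weight[x] = min(self.weight[x], w)

    def findmin(self, x):
        """Minimum weight in the sequence currently containing x."""
        return min(self.weight[y] for y in self.members[self.seq_id[x]])

    def split(self, x):
        """Split x's sequence just after x into two sequences."""
        sid = self.seq_id[x]
        elems = self.members[sid]
        cut = elems.index(x) + 1
        self.members[sid], right = elems[:cut], elems[cut:]
        new_id, self.next_id = self.next_id, self.next_id + 1
        self.members[new_id] = right
        for y in right:
            self.seq_id[y] = new_id
```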
An Inverse-Ackermann Type Lower Bound for Online Minimum Spanning Tree Verification
Combinatorica
"... Given a spanning tree T of some graph G, the problem of minimum spanning tree verication is to decide whether T = MST(G). A celebrated result of Komlos shows that this problem can be solved in linear time. Somewhat unexpectedly, MST verication turns out to be useful in actually computing minimum spa ..."
Abstract
-
Cited by 5 (3 self)
- Add to MetaCart
Given a spanning tree T of some graph G, the problem of minimum spanning tree verification is to decide whether T = MST(G). A celebrated result of Komlós shows that this problem can be solved in linear time. Somewhat unexpectedly, MST verification turns out to be useful in actually computing minimum spanning trees from scratch. It is this application that has led some to wonder whether a more flexible version of MST verification could be used to derive a faster deterministic minimum spanning tree algorithm.
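A brute-force restatement of the verification problem (not Komlós's linear-time method): T is a minimum spanning tree exactly when every non-tree edge weighs at least as much as the heaviest edge on the tree path between its endpoints. A hypothetical sketch:

```python
from collections import defaultdict

def is_mst(n, tree_edges, all_edges):
    """Naive O(m*n) MST verification via the cycle property; illustrative only.

    tree_edges, all_edges: lists of (u, v, w) with vertices 0..n-1; tree_edges
    are assumed to form a spanning tree of the graph.
    """
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def path_max(src, dst):
        """Max edge weight on the unique tree path from src to dst (DFS)."""
        stack = [(src, -1, float("-inf"))]
        while stack:
            node, parent, best = stack.pop()
            if node == dst:
                return best
            for nxt, w in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, max(best, w)))
        raise ValueError("tree is not connected")

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    for u, v, w in all_edges:
        if frozenset((u, v)) in tree_set:
            continue
        if w < path_max(u, v):      # a cheaper replacement exists: not an MST
            return False
    return True
```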
Randomized Minimum Spanning Tree Algorithms Using Exponentially Fewer Random Bits
"... For many fundamental problems there exist randomized algorithms that are asymptotically optimal and are superior to the best known deterministic algorithm. Among these are the minimum spanning tree (MST) problem, the MST sensitivity analysis problem, the parallel connected components and parallel mi ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
For many fundamental problems there exist randomized algorithms that are asymptotically optimal and are superior to the best known deterministic algorithms. Among these are the minimum spanning tree (MST) problem, the MST sensitivity analysis problem, the parallel connected components and parallel minimum spanning tree problems, and the local sorting and set maxima problems. (For the first two problems there are provably optimal deterministic algorithms with unknown, and possibly superlinear, running times.) One downside of the randomized methods for solving these problems is that they use a number of random bits linear in the size of the input. In this paper we develop some general methods for reducing exponentially the consumption of random bits in comparison-based algorithms. In some cases we are able to reduce the number of random bits from linear to nearly constant without affecting the expected running time. Most of our results are obtained by adjusting or reorganizing existing randomized algorithms to work well with a pairwise or O(1)-wise independent sampler. The prominent exception, and the main focus of this paper, is a linear-time randomized minimum spanning tree algorithm that is not derived from the well-known Karger-Klein-Tarjan algorithm. In many ways it more closely resembles the deterministic minimum spanning tree algorithms based on Soft Heaps.
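The "pairwise independent sampler" mentioned above can be illustrated (names hypothetical, not the paper's construction) by the classic trick of hashing with a random degree-1 polynomial over a prime field: two random field elements, i.e. O(log n) random bits, give a sample in which any two items are included independently.

```python
import random

def pairwise_independent_sampler(p):
    """Return h(i) = (a*i + b) mod p for a, b drawn uniformly from Z_p.

    Only two random field elements (about 2*log2(p) random bits) are used, yet
    for any i != j the pair (h(i), h(j)) is uniform on Z_p x Z_p, so events
    defined from individual hash values are pairwise independent. p must be a
    prime at least as large as the number of items. Illustrative sketch only.
    """
    a = random.randrange(p)
    b = random.randrange(p)
    return lambda i: (a * i + b) % p

def sample_items(n, p, k):
    """Include each item i in 0..n-1 with probability roughly k/p,
    pairwise independently."""
    h = pairwise_independent_sampler(p)
    return [i for i in range(n) if h(i) < k]
```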