Results 1 - 3 of 3
Approximating the Minimum Spanning Tree Weight in Sublinear Time
In Proceedings of the 28th Annual International Colloquium on Automata, Languages and Programming (ICALP), 2001
"... We present a probabilistic algorithm that, given a connected graph G (represented by adjacency lists) of average degree d, with edge weights in the set {1,...,w}, and given a parameter 0 < ε < 1/2, estimates in time O(dwε−2 log dw ε) the weight of the minimum spanning tree of G with a relativ ..."
Abstract

Cited by 50 (7 self)
We present a probabilistic algorithm that, given a connected graph G (represented by adjacency lists) of average degree d, with edge weights in the set {1,...,w}, and given a parameter 0 < ε < 1/2, estimates in time O(dw ε^{-2} log(dw/ε)) the weight of the minimum spanning tree of G with a relative error of at most ε. Note that the running time does not depend on the number of vertices in G. We also prove a nearly matching lower bound of Ω(dw ε^{-2}) on the probe and time complexity of any approximation algorithm for MST weight. The essential component of our algorithm is a procedure for estimating in time O(d ε^{-2} log(d/ε)) the number of connected components of an unweighted graph to within an additive error of εn. (This becomes O(ε^{-2} log(1/ε)) for d = O(1).) The time bound is shown to be tight up to within the log(d/ε) factor. Our connected-components algorithm picks O(1/ε^2) vertices in the graph and then grows “local spanning trees” whose sizes are specified by a stochastic process. From the local information collected in this way, the algorithm is able to infer, with high confidence, an estimate of the number of connected components. We then show how estimates on the number of components in various subgraphs of G can be used to estimate the weight of its MST.
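
The component-counting idea described in the abstract translates into a short sampling procedure, and the component estimates combine into an MST-weight estimate through the identity MST(G) = n - w + sum_{i=1}^{w-1} c_i, where c_i is the number of connected components once edges heavier than i are removed. The Python sketch below is only illustrative: the function names (estimate_components, estimate_mst_weight), the neighbour-oracle interface, the constants, and the fixed cap on each local search are assumptions for readability; the paper instead sizes each local search with a stochastic (coin-flipping) process and balances error parameters more carefully.

import random
from collections import deque

def estimate_components(n, neighbors, eps, trials=None):
    """Estimate the number of connected components to within roughly eps*n
    (additive) by sampling vertices and exploring small neighbourhoods.
    `neighbors(u)` yields the neighbours of vertex u; `n` is the vertex count.
    Simplified sketch, not the paper's exact procedure."""
    s = trials if trials is not None else max(1, int(8 / eps ** 2))  # assumed constant factor
    cap = max(1, int(2 / eps))  # fixed cap on local search size (simplification)
    acc = 0.0
    for _ in range(s):
        u = random.randrange(n)
        seen, queue = {u}, deque([u])
        # BFS from u, abandoned once more than `cap` vertices are seen
        while queue and len(seen) <= cap:
            v = queue.popleft()
            for x in neighbors(v):
                if x not in seen:
                    seen.add(x)
                    queue.append(x)
        if len(seen) <= cap:
            # u's component was fully explored and has exactly len(seen)
            # vertices; contributing 1/|C(u)| counts each such component
            # once in expectation when summed over its vertices.
            acc += 1.0 / len(seen)
        # Components larger than the cap are skipped entirely, but there are
        # at most n/cap ~ eps*n/2 of them -- the source of the additive error.
    return n * acc / s

def estimate_mst_weight(n, w, weighted_neighbors, eps):
    """Estimate the MST weight via MST(G) = n - w + sum_{i=1}^{w-1} c_i,
    where c_i is the number of components of the subgraph G^(i) that keeps
    only edges of weight <= i. `weighted_neighbors(u)` yields (v, wt) pairs."""
    total = float(n - w)
    for i in range(1, w):
        # Restrict the neighbour oracle to edges of weight <= i; heavier edges
        # are skipped on the fly, so no subgraph is ever materialised.
        def nbrs(u, thr=i):
            return [v for (v, wt) in weighted_neighbors(u) if wt <= thr]
        # The paper rebalances error parameters across the w-1 thresholds;
        # here we simply reuse eps for each call.
        total += estimate_components(n, nbrs, eps)
    return total

The sketch already shows why the running time is independent of the number of vertices: each call only touches the O(1/ε^2) sampled vertices and their capped neighbourhoods, never the whole graph.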
Approximating the Minimum Spanning Tree
"... 1 Introduction Traditionally, a linear time algorithm has been held as the gold standard of efficiency. In a wide variety of settings, however, large data sets have become increasingly common, and it is often desirable and sometimes necessary to find very fast algorithms which can assert nontrivial ..."
Abstract
1 Introduction

Traditionally, a linear-time algorithm has been held as the gold standard of efficiency. In a wide variety of settings, however, large data sets have become increasingly common, and it is often desirable and sometimes necessary to find very fast algorithms that can assert nontrivial properties of the data in sublinear time.