Results 1–6 of 6
Improved Algorithms for Finding Level Ancestors in Dynamic Trees
Automata, Languages and Programming, 27th International Colloquium, ICALP 2000, number 1853 in LNCS, 2000
Cited by 16 (1 self).
Abstract:
Given a node x at depth d in a rooted tree, LevelAncestor(x, i) returns the ancestor of x at depth d − i. We show how to maintain a tree under addition of new leaves so that updates and level ancestor queries are performed in worst-case constant time. Given a forest of trees with n nodes where edges can be added, m queries and updates take O(m α(m, n)) time. This solves two open problems (P.F.
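The naive way to answer such a query is to walk parent pointers, which costs O(depth). A standard middle ground between that and the paper's constant-time scheme is binary lifting (jump pointers), which supports add-leaf and level-ancestor in O(log n) each. The sketch below is a baseline illustration, not the paper's data structure; class and method names are invented here:

```python
class LevelAncestor:
    """Binary lifting (jump pointers): O(log n) per operation,
    a simple baseline rather than the paper's O(1) scheme."""

    LOG = 20  # supports depths up to 2**20

    def __init__(self):
        self.depth = []
        self.up = []  # up[v][j] = the 2^j-th ancestor of v, or -1

    def add_leaf(self, parent=-1):
        """Add a leaf under `parent` (-1 creates a root); returns its id."""
        v = len(self.depth)
        self.depth.append(0 if parent < 0 else self.depth[parent] + 1)
        row = [-1] * self.LOG
        row[0] = parent
        for j in range(1, self.LOG):
            if row[j - 1] < 0:
                break
            row[j] = self.up[row[j - 1]][j - 1]
        self.up.append(row)
        return v

    def level_ancestor(self, x, i):
        """Ancestor of x at depth depth[x] - i."""
        for j in range(self.LOG):
            if x < 0 or i >> j == 0:
                break
            if (i >> j) & 1:
                x = self.up[x][j]
        return x
```

Each query decomposes the jump of length i into powers of two, which is why both operations are logarithmic rather than the worst-case constant time achieved in the paper.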
A linear-work parallel algorithm for finding . . .
1994
Cited by 14 (1 self).
Abstract:
We give the first linear-work parallel algorithm for finding a minimum spanning tree. It is a randomized algorithm, and requires O(2^{log* n} log n) expected time. It is a modification of the sequential linear-time algorithm of Klein and Tarjan.
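The Klein–Tarjan approach this builds on filters edges by random sampling: compute a minimum spanning forest F of a random sample of the edges, then discard every non-sample edge that is F-heavy, i.e. heavier than the maximum-weight edge on the F-path between its endpoints; such edges cannot belong to the MST. A sequential, naive sketch of the filtering primitive (not the parallel algorithm; function names are illustrative):

```python
def kruskal(n, edges):
    """Minimum spanning forest via Kruskal's algorithm (union-find).
    Edges are (weight, u, v) triples over vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    forest = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((w, u, v))
    return forest


def f_heavy(n, forest, edge):
    """True if `edge` is heavier than every edge on the forest path
    between its endpoints (False if they are disconnected in F)."""
    w, u, v = edge
    adj = [[] for _ in range(n)]
    for fw, fu, fv in forest:
        adj[fu].append((fv, fw))
        adj[fv].append((fu, fw))
    stack = [(u, -1, 0)]  # naive DFS tracking path maximum; O(n) per query
    while stack:
        x, par, mx = stack.pop()
        if x == v:
            return w > mx  # forest path is unique, so mx is its maximum
        for y, fw in adj[x]:
            if y != par:
                stack.append((y, x, max(mx, fw)))
    return False
```

In the sampling scheme, each edge is put into the sample independently with probability 1/2, F = kruskal(n, sample) is computed recursively, and every F-heavy non-sample edge is deleted; in expectation few edges survive, which is what yields linear work.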
A Practical Minimum Spanning Tree Algorithm Using the Cycle Property
In 11th European Symposium on Algorithms (ESA), number 2832 in LNCS, 2003
Cited by 9 (2 self).
Abstract:
We present a simple new (randomized) algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, "difficult" inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered impractical. An additional advantage is...
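The cycle property the abstract relies on says that the heaviest edge on any cycle belongs to no minimum spanning tree (with distinct weights), so it can be discarded without affecting the answer. A small brute-force illustration of that fact, unrelated to the paper's actual algorithm (the helper name is invented here):

```python
from itertools import combinations

def mst_brute(n, edges):
    """Brute-force MST weight: try every subset of n-1 edges and keep
    the cheapest one that forms a spanning tree. Edges are (w, u, v)."""
    best = None
    for cand in combinations(edges, n - 1):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        acyclic = True
        for w, u, v in cand:
            ru, rv = find(u), find(v)
            if ru == rv:        # adding this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:             # n-1 acyclic edges on n vertices span
            weight = sum(w for w, _, _ in cand)
            best = weight if best is None else min(best, weight)
    return best

# Cycle 0-1-2-0 plus a pendant edge 2-3; the heaviest cycle edge
# (weight 9) can be dropped without changing the MST weight.
edges = [(1, 0, 1), (2, 1, 2), (9, 0, 2), (3, 2, 3)]
assert mst_brute(4, edges) == mst_brute(4, [e for e in edges if e[0] != 9]) == 6
```

The point of the paper is that this discarding step can be made practical, whereas earlier uses of the cycle property appeared only inside asymptotically optimal but impractical algorithms.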
Minimizing Randomness in Minimum Spanning Tree, Parallel Connectivity, and Set Maxima Algorithms
In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'02), 2001
Cited by 7 (4 self).
Abstract:
There are several fundamental problems whose deterministic complexity remains unresolved, but for which there exist randomized algorithms whose complexity is equal to known lower bounds. Among such problems are the minimum spanning tree problem, the set maxima problem, the problem of computing connected components and (minimum) spanning trees in parallel, and the problem of performing sensitivity analysis on shortest path trees and minimum spanning trees. However, while each of these problems has a randomized algorithm whose performance meets a known lower bound, all of these randomized algorithms use a number of random bits which is linear in the number of operations they perform. We address the issue of reducing the number of random bits used in these randomized algorithms. For each of the problems listed above, we present randomized algorithms that have optimal performance but use only a polylogarithmic number of random bits; for some of the problems our optimal algorithms use only log n random bits. Our results represent an exponential savings in the amount of randomness used to achieve the same optimal performance as in the earlier algorithms. Our techniques are general and could likely be applied to other problems.
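A classical building block for trading many independent random bits for only O(log n) of them, much simpler than the constructions in this paper but in the same spirit, is a pairwise-independent family: n values that are pairwise independent can be generated from just two random seeds. A sketch (the function name and the choice of prime are illustrative, not from the paper):

```python
import random

# A Mersenne prime; any prime larger than n works.
P = 2_147_483_647

def pairwise_independent_values(n, rng=random):
    """Return n pairwise-independent values in [0, P), generated from
    only two truly random seeds a and b (about 2 log P random bits),
    via the classic affine hash family h(i) = (a*i + b) mod P."""
    a = rng.randrange(1, P)   # a != 0 so h is a bijection on [0, P)
    b = rng.randrange(P)
    return [(a * i + b) % P for i in range(n)]
```

Because a is nonzero modulo the prime P, distinct indices always map to distinct values, and any two outputs are jointly uniform, which is exactly what many Chernoff-free analyses need.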
An Inverse-Ackermann Style Lower Bound for Online Minimum Spanning Tree Verification
 Combinatorica
Cited by 3 (2 self).
Abstract:
1 Introduction. The minimum spanning tree (MST) problem has seen a flurry of activity lately, driven largely by the success of a new approach to the problem. The recent MST algorithms [20, 8, 29, 28], despite their superficial differences, are all based on the idea of progressively improving an approximately minimum solution, until the actual minimum spanning tree is found. It is still likely that this progressive improvement approach will bear fruit. However, the current
CS375011 Randomized Algorithms Winter 2003
Abstract:
For clarity we prove the theorem for m = n. Before we get on to the proof we introduce some notation and provide some intuition.

Notation 7.3. ν_k(t) := # bins with load ≥ k at time t; μ_k(t) := # balls with height ≥ k at time t; B(n; p) := the Binomial distribution of n Bernoulli trials, each with success probability p.

Intuition: Clearly, ν_k(t) ≤ μ_k(t). The following is also obvious: ν_k(1) ≤ ν_k(2) ≤ … ≤ ν_k(n). Suppose for some B_k we have ν_k(n) ≤ B_k. Then we have the following bound: Pr[ball i has height ≥ k + 1] ≤ (B_k/n)², so μ_{k+1}(n) is stochastically dominated by B(n; (B_k/n)²). The following lemma gives a bound on the number of successes of n Bernoulli trials. Roughly speaking, if the success probability of each trial is less than p, the distribution of the total number of successes will be bounded above by B(n; p). Lemma 7.4 (Basic Lemma) Let X_1, …, X_n be arbitrary random variables. Let Y_1, …, Y_n be 0–1 random variables, where Y_i =
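The layered-induction setup above is the standard analysis of the multiple-choice balls-into-bins process. Assuming the classic two-choice variant (d = 2 is an assumption here; the notes may treat general d), a short simulation shows the phenomenon the theorem quantifies:

```python
import random

def max_load(n, d, seed=None):
    """Throw n balls into n bins; each ball inspects d uniformly random
    bins (with replacement) and joins the least-loaded one. Returns the
    maximum final load, i.e. the height of the tallest bin."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(n):
        best = min((rng.randrange(n) for _ in range(d)),
                   key=lambda b: load[b])
        load[best] += 1
    return max(load)
```

Comparing max_load(10_000, 1) with max_load(10_000, 2) typically shows the gap between the Θ(log n / log log n) maximum load of a single random choice and the log log n / log 2 + Θ(1) load of two choices.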