Results 1 – 8 of 8
Breaking through the n³ barrier: Faster object type inference
Theory and Practice of Object Systems; 4th Int’l Workshop on Foundations of Object-Oriented Languages (FOOL), 1999
Abstract

Cited by 5 (0 self)
Abadi and Cardelli [AC96] have presented and investigated object calculi that model most object-oriented features found in actual object-oriented programming languages. The calculi are innate object calculi in that they are not based on the λ-calculus. They present a series of type systems for their calculi, four of which are first-order. Palsberg [Pal95] has shown how typability in each of these systems can be decided in time O(n³), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. He also shows that each of the type inference problems is hard for polynomial time under logspace reductions. In this paper we show how to break through the (dynamic) transitive-closure bottleneck and improve each of the four type inference problems from O(n³) to the following time complexities:

                    no subtyping    subtyping
  w/o rec. types    O(n)            O(n²)
  with rec. types   O(n log² n)     O(n²)

The key ingredient that lets us “beat” the worst-case time complexity induced by using general dynamic transitive closure or similar algorithmic methods is that object subtyping is invariant: an object type is a subtype of a “shorter” type with a subset of the field names if and only if the common fields have equal types.
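The invariance property described in this abstract can be illustrated with a small sketch (my own illustration, not the paper’s algorithm): modeling an object type as a mapping from field names to types, one type is a subtype of a “shorter” type exactly when every field of the shorter type occurs with an *identical* type.

```python
# Sketch of invariant-width object subtyping: longer <: shorter iff every
# field of shorter appears in longer with an equal (not merely related) type.
# Types are represented as plain strings here purely for illustration.

def is_subtype(longer: dict, shorter: dict) -> bool:
    """Width subtyping with invariant fields."""
    return all(longer.get(name) == ty for name, ty in shorter.items())

point3d = {"x": "int", "y": "int", "z": "int"}
point2d = {"x": "int", "y": "int"}

assert is_subtype(point3d, point2d)        # dropping fields is allowed
assert not is_subtype({"x": "float", "y": "int"}, point2d)  # field types must match exactly
```

Because fields must match exactly rather than up to subtyping, checking one candidate pair needs only equality tests on the common fields, which is what lets the paper avoid general dynamic transitive closure.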
Finding Path Minima in Incremental Unrooted Trees
, 2008
Abstract

Cited by 4 (0 self)
Consider a dynamic forest of unrooted trees over a set of n vertices which we update by link operations: each link operation adds a new edge adjacent to vertices in two different trees. Every edge in the forest has a weight associated with it, and at any time we want to be able to answer a path-min query, which returns the edge of minimum weight along the path between two given vertices. For the case where the weights are integers we give an algorithm that performs n − 1 link operations and m path-min queries in O(n + mα(m, n)) time. This extends well-known results of Tarjan [11] and Yao [12] to a more general dynamic setting at the cost of restricting the weights to be integers. Using our data structure we get an optimal data structure for a restricted version of the mergeable trees problem [9]. We also suggest a simpler data structure for the case where trees are rooted and the link operation always adds an edge between the root of one tree and an arbitrary vertex of another tree.
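A naive baseline makes the interface above concrete (this is my illustration, not the paper’s O(n + mα(m, n)) structure): link(u, v, w) joins two trees with a weighted edge, and path_min(u, v) walks the unique tree path by BFS in O(n) per query.

```python
from collections import deque

class NaiveForest:
    """Naive incremental unrooted forest: O(1) link, O(n) path-min query."""

    def __init__(self, n):
        self.adj = [[] for _ in range(n)]  # vertex -> list of (neighbor, weight)

    def link(self, u, v, w):
        # Assumed precondition (as in the abstract): u and v lie in different trees.
        self.adj[u].append((v, w))
        self.adj[v].append((u, w))

    def path_min(self, u, v):
        # BFS from u; best[x] = minimum edge weight on the tree path u..x.
        best = {u: float("inf")}
        q = deque([u])
        while q:
            x = q.popleft()
            if x == v:
                return best[x]
            for y, w in self.adj[x]:
                if y not in best:
                    best[y] = min(best[x], w)
                    q.append(y)
        return None  # u and v are in different trees

f = NaiveForest(4)
f.link(0, 1, 5)
f.link(1, 2, 3)
f.link(2, 3, 7)
assert f.path_min(0, 3) == 3  # path 0-1-2-3 has weights 5, 3, 7
```

The paper’s contribution is replacing the O(n)-per-query walk with a structure answering m queries in O(n + mα(m, n)) total time for integer weights.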
Don’t Rush into a Union: Take Time to Find Your Roots
, 2011
Abstract

Cited by 4 (0 self)
We present a new threshold phenomenon in data structure lower bounds where slightly reduced update times lead to exploding query times. Consider incremental connectivity, letting t_u be the time to insert an edge and t_q be the query time. For t_u = Ω(t_q), the problem is equivalent to the well-understood union–find problem: InsertEdge(s, t) can be implemented by Union(Find(s), Find(t)). This gives worst-case time t_u = t_q = O(lg n / lg lg n) and amortized t_u = t_q = O(α(n)). By contrast, we show that if t_u = o(lg n / lg lg n), the query time explodes to t_q ≥ n^(1−o(1)). In other words, if the data structure doesn’t have time to find the roots of each disjoint set (tree) during edge insertion, there is no effective way to organize the information! For amortized complexity, we demonstrate a new inverse-Ackermann-type trade-off in the regime t_u = o(t_q). A similar lower bound is given for fully dynamic connectivity, where an update time of o(lg n) forces the query time to be n^(1−o(1)). This lower bound allows for amortization and Las Vegas randomization, and comes close to the known O(lg n · (lg lg n)^O(1)) upper bound.
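The reduction quoted in the abstract — InsertEdge(s, t) as Union(Find(s), Find(t)) — is the textbook union–find construction; a standard sketch with path halving and union by rank (which attains the amortized O(α(n)) bound mentioned above) looks like this:

```python
class IncrementalConnectivity:
    """Incremental connectivity via union-find (path halving + union by rank)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node on the path at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert_edge(self, s, t):
        # InsertEdge(s, t) = Union(Find(s), Find(t)).
        rs, rt = self.find(s), self.find(t)
        if rs == rt:
            return
        if self.rank[rs] < self.rank[rt]:
            rs, rt = rt, rs
        self.parent[rt] = rs  # attach the shallower root under the deeper one
        if self.rank[rs] == self.rank[rt]:
            self.rank[rs] += 1

    def connected(self, s, t):
        return self.find(s) == self.find(t)

uf = IncrementalConnectivity(5)
uf.insert_edge(0, 1)
uf.insert_edge(3, 4)
assert uf.connected(0, 1) and not uf.connected(1, 3)
```

The paper’s point is that this coupling of update and query cost is forced: any scheme that spends o(lg n / lg lg n) per insertion, and thus cannot afford the Find calls, pays n^(1−o(1)) per query.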
Worst-case and Amortised Optimality in Union-Find (Extended Abstract)
, 1999
Abstract

Cited by 2 (0 self)
We study the interplay between worst-case and amortised time bounds for the classic Disjoint Set Union problem (Union-Find). We ask whether it is possible to achieve optimal worst-case and amortised bounds simultaneously. Furthermore we would like to allow a trade-off between the worst-case time for a query and for an update. We answer this question by first providing lower bounds for the possible worst-case time trade-offs, as well as lower bounds which show where in this trade-off range optimal amortised time is achievable. We then give an algorithm which tightly matches both lower bounds simultaneously. The lower bounds are provided in the cell-probe model as well as in the algebraic real-number RAM, and the upper bounds hold for a RAM with logarithmic word size and a modest instruction set. Our lower bounds show that for worst-case query and update times t_q and t_u respectively, one must have t_q = Ω(log n / log t_u), and only for t_q = Ω(α(m, n)) can this trade-off be achieved simultaneou ...
Breaking through the n³ barrier: Faster object type inference
Proc. 4th Int’l Workshop on Foundations of Object-Oriented Languages (FOOL), 1997
Abstract

Cited by 1 (0 self)
Abadi and Cardelli [AC96] have presented and investigated object calculi that model most object-oriented features found in actual object-oriented programming languages. The calculi are innate object calculi in that they are not based on the λ-calculus. They present a series of type systems for their calculi, four of which are first-order. Palsberg [Pal95] has shown how typability in each of these systems can be decided in time O(n³), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. He also shows that each of the type inference problems is hard for polynomial time under logspace reductions. In this paper we show how to break through the (dynamic) transitive-closure bottleneck and improve each of the four type inference problems from O(n³) to the following time complexities:

                    no subtyping    subtyping
  w/o rec. types    O(n)            O(n²)
  with rec. types   O(n log² n)     O(n²)

The key ingredient that lets us “beat” the worst-case time complexity induced by using general dynamic transitive closure or similar algorithmic methods is that object subtyping is invariant: an object type is a subtype of a “shorter” type with a subset of the field names if and only if the common fields have equal types.
AT&T Labs
Abstract
We present a new threshold phenomenon in data structure lower bounds where slightly reduced update times lead to exploding query times. Consider incremental connectivity, letting t_u be the time to insert an edge and t_q be the query time. For t_u = Ω(t_q), the problem is equivalent to the well-understood union–find problem: InsertEdge(s, t) can be implemented by Union(Find(s), Find(t)). This gives worst-case time t_u = t_q = O(lg n / lg lg n) and amortized t_u = t_q = O(α(n)). By contrast, we show that if t_u = o(lg n / lg lg n), the query time explodes to t_q ≥ n^(1−o(1)). In other words, if the data structure doesn’t have time to find the roots of each disjoint set (tree) during edge insertion, there is no effective way to organize the information! For amortized complexity, we demonstrate a new inverse-Ackermann-type trade-off in the regime t_u = o(t_q). A similar lower bound is given for fully dynamic connectivity, where an update time of o(lg n) forces the query time to be n^(1−o(1)). This lower bound allows for amortization and Las Vegas randomization, and comes close to the known O(lg n · (lg lg n)^O(1)) upper bound.