Results 1–10 of 15
Small forwarding tables for fast routing lookups
In ACM SIGCOMM, 1997
Abstract

Cited by 172 (0 self)
For some time, the networking community has assumed that it is impossible to do IP routing lookups in software fast enough to support gigabit speeds. IP routing lookups must find the routing entry with the longest matching prefix, a task that has been thought to require hardware support at lookup frequencies of millions per second. We present a forwarding table data structure designed for quick routing lookups. Forwarding tables are small enough to fit in the cache of a conventional general purpose processor. With the table in cache, a 200 MHz Pentium Pro or a 333 MHz Alpha 21164 can perform a few million lookups per second. This means that it is feasible to do a full routing lookup for each IP packet at gigabit speeds without special hardware. The forwarding tables are very small: a large routing table with 40,000 routing entries can be compacted to a forwarding table of 150–160 Kbytes. A lookup typically requires less than 100 instructions on an Alpha, using eight memory references accessing a total of 14 bytes.
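The longest-prefix-match task this abstract describes can be illustrated with a minimal sketch. This is not the paper's compact forwarding table; it is a naive scan over prefix lengths, longest first, using hypothetical route entries, and is meant only to show what an IP routing lookup must compute.

```python
# Naive longest-prefix match over an IPv4 routing table.
# Illustrative only: real forwarding tables use compact trie-like
# structures, not a per-length dictionary probe. Route entries
# and interface names ("if0", "if1") below are hypothetical.

def ip_to_int(addr: str) -> int:
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def longest_prefix_match(table: dict, addr: str):
    """table maps (prefix_as_int, prefix_length) -> next hop."""
    ip = ip_to_int(addr)
    for plen in range(32, -1, -1):            # try longest prefixes first
        prefix = (ip >> (32 - plen)) << (32 - plen)
        key = (prefix, plen)
        if key in table:
            return table[key]
    return None                               # no matching route

routes = {
    (ip_to_int("10.0.0.0"), 8): "if0",
    (ip_to_int("10.1.0.0"), 16): "if1",
}
print(longest_prefix_match(routes, "10.1.2.3"))   # if1 (longer prefix wins)
print(longest_prefix_match(routes, "10.2.2.3"))   # if0
```

The scan tries all 33 possible prefix lengths per lookup; the paper's contribution is precisely avoiding this cost with a cache-resident table.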
Lower bounds for union-split-find related problems on random access machines
1994
Abstract

Cited by 49 (3 self)
We prove Ω(√(log log n)) lower bounds on the random access machine complexity of several dynamic, partially dynamic and static data structure problems, including the union-split-find problem, dynamic prefix problems and one-dimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a restriction on the size of the structure) into partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and Wigderson for proving lower bounds on the monotone circuit depth of connectivity.
Loops in Reeb Graphs of 2-Manifolds
In Proc. of the 19th Annual Symposium on Computational Geometry, ACM Press, 2003
Abstract

Cited by 38 (12 self)
Given a Morse function f over a 2-manifold with or without boundary, the Reeb graph is obtained by contracting the connected components of the level sets to points. We prove tight upper and lower bounds on the number of loops in the Reeb graph that depend on the genus, the number of boundary components, and whether or not the 2-manifold is orientable. We also give an algorithm that constructs the Reeb graph in time O(n log n), where n is the number of edges in the triangulation used to represent the 2-manifold and the Morse function.
Worst case constant time priority queue
In Proc. 12th ACM-SIAM Symposium on Discrete Algorithms, 2001
Abstract

Cited by 11 (4 self)
We present a new data structure of size O(M) for solving the vEB problem. When this data structure is used in combination with a new memory topology it provides an O(1) worst-case time solution.
New Techniques for the Union-Find Problem
Utrecht University, 1989
Abstract

Cited by 6 (0 self)
A well-known result of Tarjan (cf. [10]) states that a program of up to n UNION and m FIND instructions can be executed in O(n + m·α(m, n)) time on a collection of n elements, where α(m, n) denotes the functional inverse of Ackermann's function. In this paper we develop a new approach to the problem and prove that the time for the k-th FIND can be limited to O(α(k, n)) worst case, while the total cost for the program of UNION's and m FIND's remains bounded by O(n + m·α(m, n)). The technique is part of a family of algorithms that can achieve various trade-offs in cost for the individual instructions. The new algorithm is important in all set-manipulation problems that require frequent FIND's. Because α(m, n) is O(1) in all practical cases, the new algorithm guarantees that FIND's are essentially O(1) worst case, within the optimal bound for the UNION-FIND problem as a whole.
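For reference, the classical O(n + m·α(m, n)) amortized bound mentioned in the abstract is achieved by the textbook union-find structure with union by rank and path compression. This sketch shows that baseline only; the paper's contribution, bounding each individual FIND in the worst case, requires a different technique not reproduced here.

```python
# Textbook union-find: union by rank plus path compression gives the
# classic inverse-Ackermann amortized bound. This is the baseline the
# abstract refers to, not the paper's worst-case-per-FIND algorithm.

class UnionFind:
    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x: int) -> int:
        root = x
        while self.parent[root] != root:      # locate the root
            root = self.parent[root]
        while self.parent[x] != root:         # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x: int, y: int) -> None:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:     # union by rank:
            rx, ry = ry, rx                   # attach shorter under taller
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2); uf.union(3, 4)
print(uf.find(0) == uf.find(2))   # True: 0, 1, 2 share a set
print(uf.find(2) == uf.find(4))   # False: {3, 4} is separate
```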
The Temporal Precedence Problem
Algorithmica, 1998
Abstract

Cited by 5 (4 self)
In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a, b) returns true iff element a was inserted before element b temporally. We provide a solution to the problem with worst-case time complexity O(lg lg n) per operation, where n is the number of elements inserted. We also demonstrate that the problem has a lower bound of Ω(lg lg n) on pointer machines. Thus the proposed scheme is optimal on pointer machines.
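On a random access machine the interface above is trivial to support with integer timestamps in O(1) per operation; the paper's Θ(lg lg n) bounds are interesting precisely because pointer machines cannot tag elements with comparable integers. This sketch only illustrates the two operations, not the pointer-machine solution.

```python
# Temporal precedence on a RAM: a global counter timestamps each
# insertion, so precedes() is a single integer comparison. The paper's
# pointer-machine model forbids exactly this trick.

class TemporalPrecedence:
    def __init__(self):
        self._clock = 0
        self._stamp = {}          # element -> insertion time

    def insert(self, a) -> None:
        self._stamp[a] = self._clock
        self._clock += 1

    def precedes(self, a, b) -> bool:
        """True iff a was inserted before b."""
        return self._stamp[a] < self._stamp[b]

tp = TemporalPrecedence()
for x in ("p", "q", "r"):
    tp.insert(x)
print(tp.precedes("p", "r"))   # True
print(tp.precedes("r", "q"))   # False
```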
On the Complexity of Dependent And-Parallelism in Logic Programming
2002
Abstract

Cited by 2 (2 self)
We present results concerning the computational complexity of some of the key execution mechanisms required to handle Dependent And-Parallel executions in logic programming. We develop formal abstractions of the problems in terms of dynamic trees, design data structures for efficient solutions, and present some lower bound results. This work is part of a larger effort to understand, formalize, and study the complexity-theoretic and algorithmic issues in parallel implementations of logic programming languages. These results have already impacted the implementation of novel parallel logic programming systems.
A Note on Predecessor Searching in the Pointer Machine Model
2009
Abstract

Cited by 2 (2 self)
Predecessor searching is a fundamental data structuring problem and at the core of countless algorithms: given a totally ordered universe U with n elements, maintain a subset S ⊆ U such that for each element x ∈ U its predecessor in S can be found efficiently. During the last thirty years the problem has been studied extensively and optimal algorithms in many classical models of computation are known. In 1988, Mehlhorn, Näher, and Alt [1] showed an amortized lower bound of Ω(log log n) in the pointer machine model. We give a different proof for this bound which sheds new light on the question of how much power the adversary actually needs.
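The predecessor problem the note studies can be stated concretely with a simple sorted-array structure answering queries by binary search in O(log n). This is only an illustration of the interface; the Ω(log log n) pointer-machine lower bound discussed above applies to far more sophisticated structures as well.

```python
# Predecessor searching over an ordered universe, illustrated with a
# sorted Python list and binary search. Here "predecessor of x" means
# the largest element of S that is <= x (a common convention; the note
# itself may use strict precedence).

import bisect

class PredecessorSet:
    def __init__(self):
        self._keys = []           # kept sorted

    def insert(self, x: int) -> None:
        i = bisect.bisect_left(self._keys, x)
        if i == len(self._keys) or self._keys[i] != x:
            self._keys.insert(i, x)   # O(n) shift; fine for a sketch

    def predecessor(self, x: int):
        """Largest element in S that is <= x, or None if none exists."""
        i = bisect.bisect_right(self._keys, x)
        return self._keys[i - 1] if i else None

s = PredecessorSet()
for v in (3, 10, 7):
    s.insert(v)
print(s.predecessor(8))    # 7
print(s.predecessor(2))    # None
```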
Computational Issues in Exploiting Dependent And-Parallelism in Logic Programming: Leftness Detection in Dynamic Search Trees
In: LPAR (2005) 79–94
Abstract

Cited by 1 (0 self)
We present efficient Pure Pointer Machine (PPM) algorithms to test for “leftness” in dynamic search trees and related problems. In particular, we show that the problem of testing if a node x is in the leftmost branch of the subtree rooted in node y, in a dynamic tree that grows and shrinks at the leaves, can be solved on PPMs in worst-case O((lg lg n)^2) time per operation in the semi-dynamic case (i.e., all the operations that add leaves to the tree are performed before any other operations), where n is the number of operations that affect the structure of the tree. We also show that the problem can be solved on PPMs in amortized O((lg lg n)^2) time per operation in the fully dynamic case.