Results 1–10 of 21
Geometric Range Searching and Its Relatives
 Contemporary Mathematics
"... ... process a set S of points in so that the points of S lying inside a query R region can be reported or counted quickly. Wesurvey the known techniques and data structures for range searching and describe their application to other related searching problems. ..."
Abstract

Cited by 266 (39 self)
... process a set S of points in R^d so that the points of S lying inside a query region R can be reported or counted quickly. We survey the known techniques and data structures for range searching and describe their application to other related searching problems.
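As a concrete baseline for the range-searching problem described above, here is a minimal sketch (names are illustrative, not from the survey): a naive linear scan that reports the points of S inside an axis-aligned query rectangle. It costs O(|S|) per query with no preprocessing, which is exactly the cost the surveyed structures (kd-trees, range trees, and relatives) are designed to beat.

```python
def report_in_range(points, x_lo, x_hi, y_lo, y_hi):
    """Report the points of S lying inside the axis-aligned query
    rectangle [x_lo, x_hi] x [y_lo, y_hi]. Naive O(|S|) scan."""
    return [(x, y) for (x, y) in points
            if x_lo <= x <= x_hi and y_lo <= y <= y_hi]

S = [(1, 2), (3, 4), (5, 1), (2, 2)]
print(report_in_range(S, 1, 3, 1, 3))  # [(1, 2), (2, 2)]
```

Counting queries are the same scan with `len(...)`; the interest of the surveyed structures is answering both in sublinear time after preprocessing.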
Parallel Execution of Prolog Programs: A Survey
"... Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their highlevel nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic ..."
Abstract

Cited by 80 (25 self)
Since the early days of logic programming, researchers in the field realized the potential for exploitation of the parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation makes the techniques used in the corresponding parallelizing compilers and runtime systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in the parallel execution of logic programming languages, along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared-memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory ...
On the Complexity of Or-Parallelism
, 1999
"... We formalize the implementation mechanisms required to support orparallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operations n performed on the data structure, for the problem of guaranteeing correc ..."
Abstract

Cited by 11 (11 self)
We formalize the implementation mechanisms required to support or-parallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operations n performed on the data structure, for the problem of guaranteeing correct semantics during or-parallel execution. The lower bound Ω(n lg n) formally proves the impossibility of achieving an ideal implementation (i.e., a parallel implementation with constant-time overhead per operation). We also derive an upper bound of Õ(n^{1/3}) per operation for or-parallel execution. This upper bound is far better than what has been achieved in the existing or-parallel systems and indicates that faster implementations may be feasible.
Thin Heaps, Thick Heaps
, 2006
"... The Fibonacci heap was devised to provide an especially efficient implementation of Dijkstra’s shortest path algorithm. Although asyptotically efficient, it is not as fast in practice as other heap implementations. Expanding on ideas of Høyer, we describe three heap implementations (two versions of ..."
Abstract

Cited by 9 (5 self)
The Fibonacci heap was devised to provide an especially efficient implementation of Dijkstra’s shortest path algorithm. Although asymptotically efficient, it is not as fast in practice as other heap implementations. Expanding on ideas of Høyer, we describe three heap implementations (two versions of thin heaps and one of thick heaps) that have the same amortized efficiency as Fibonacci heaps but need less space and promise better practical performance. As part of our development, we fill in a gap in Høyer’s analysis.
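To illustrate where heap performance matters in Dijkstra’s algorithm, here is a sketch (illustrative names, not from the paper) of the common practical workaround when decrease-key is unavailable: a binary heap with lazy deletion of stale entries. Fibonacci-style heaps instead support decrease-key in O(1) amortized time, which is the operation the paper’s thin and thick heaps aim to make fast in practice as well.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted digraph given as
    {node: [(neighbor, weight), ...]}. Uses a binary heap; instead of
    decrease-key, outdated entries are pushed alongside and skipped on pop."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left behind by a later improvement
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The lazy-deletion trick keeps each heap operation O(lg n) but can inflate the heap; a heap with true O(1) decrease-key avoids the duplicate entries entirely.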
The Temporal Precedence Problem
 Algorithmica
, 1998
"... In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a; b) returns true iff element a was inserted ..."
Abstract

Cited by 5 (4 self)
In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a, b) returns true iff element a was inserted before element b temporally. We provide a solution to the problem with worst-case time complexity O(lg lg n) per operation, where n is the number of elements inserted. We also demonstrate that the problem has a lower bound of Ω(lg lg n) on pointer machines. Thus the proposed scheme is optimal on pointer machines.
Keywords: Algorithms, Dynamic Data Structures, Complexity.
1 Introduction
In this paper we study the complexity of what we call the Temporal Precedence (TP) Problem on pointer machines. Informally, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. The problem is related to ...
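For contrast, on a RAM the Temporal Precedence problem is trivial: timestamp each element with a global counter and compare. The sketch below (illustrative names, not from the paper) shows that trivial solution; the point of the result above is that a pure pointer machine has no counters or arrays, which is what makes the Θ(lg lg n) bound nontrivial.

```python
import itertools

class TemporalPrecedence:
    """Trivial RAM solution: insert stamps an element with a counter value,
    precedes compares stamps. Both operations are O(1) on a RAM; this
    shortcut is unavailable on a pure pointer machine."""
    def __init__(self):
        self._clock = itertools.count()
        self._stamp = {}

    def insert(self, a):
        self._stamp[a] = next(self._clock)

    def precedes(self, a, b):
        return self._stamp[a] < self._stamp[b]

tp = TemporalPrecedence()
tp.insert("x")
tp.insert("y")
print(tp.precedes("x", "y"))  # True
```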
Efficient Algorithms for the Temporal Precedence Problem
 Information Processing Letters
, 1998
"... this paper we study the complexity of what we call the Temporal Precedence (T P) Problem on pointer machines. Intuitively, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. We are not aware of any study reg ..."
Abstract

Cited by 2 (1 self)
In this paper we study the complexity of what we call the Temporal Precedence (TP) Problem on pointer machines. Intuitively, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. We are not aware of any study regarding the complexity of this problem on pointer machines.
Computing monadic fixed-points in linear time on doubly-linked data structures
"... Abstract Detlef Seese has shown that firstorder queries on boundeddegree graphs can be computed in lineartime. We extend this result by using connected doublylinked data structures (modeled in logic over a singulary vocabulary one which permits only monadic predicates and functions). The first ..."
Abstract

Cited by 2 (0 self)
Detlef Seese has shown that first-order queries on bounded-degree graphs can be computed in linear time. We extend this result by using connected doubly-linked data structures (modeled in logic over a singulary vocabulary, i.e., one which permits only monadic predicates and functions). The first result is that first-order sentences can then be evaluated by an automaton which works directly in place on these singulary models, without changing their size or shape, and using no external resources whatsoever. In particular, this evaluation algorithm satisfies the finite-visit property: the number of times each datum is read from or written to is a uniformly limited constant. The second result analyzes the complexity of monadic fixed-points in the same vocabulary and shows that they too are in linear time (though we use a RAM model for this).
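A standard example of a monadic fixed-point is reachability: the least set X with X = {start} ∪ succ(X). The worklist sketch below (illustrative names, not from the paper) computes it in time linear in the number of links visited, the kind of bound the abstract claims for monadic fixed-points on doubly-linked structures.

```python
from collections import deque

def reachable(succ, start):
    """Least fixed-point of X = {start} ∪ succ(X), via a worklist.
    Each node enters the worklist at most once, so the running time is
    linear in the number of links examined."""
    seen = {start}
    work = deque([start])
    while work:
        u = work.popleft()
        for v in succ.get(u, []):
            if v not in seen:
                seen.add(v)
                work.append(v)
    return seen

succ = {1: [2], 2: [3, 1], 4: [1]}
print(sorted(reachable(succ, 1)))  # [1, 2, 3]
```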
Computational Issues in Exploiting Dependent And-Parallelism in Logic Programming: Leftness Detection in Dynamic Search Trees
 In: LPAR (2005) 79–94
"... We present efficient Pure Pointer Machine (PPM) algorithms to test for “leftness” in dynamic search trees and related problems. In particular, we show that the problem of testing if a node x is in the leftmost branch of the subtree rooted in node y, in a dynamic tree that grows and shrinks at the le ..."
Abstract

Cited by 1 (0 self)
We present efficient Pure Pointer Machine (PPM) algorithms to test for “leftness” in dynamic search trees and related problems. In particular, we show that the problem of testing if a node x is in the leftmost branch of the subtree rooted in node y, in a dynamic tree that grows and shrinks at the leaves, can be solved on PPMs in worst-case O((lg lg n)²) time per operation in the semi-dynamic case (i.e., all the operations that add leaves to the tree are performed before any other operations), where n is the number of operations that affect the structure of the tree. We also show that the problem can be solved on PPMs in amortized O((lg lg n)²) time per operation in the fully dynamic case.
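To make the query concrete, here is a naive leftness test (node layout and names are illustrative assumptions, not the paper’s representation): walk the leftmost-child chain from y and check whether x appears on it. This costs O(depth) per query; the paper’s contribution is bringing the per-operation cost down to O((lg lg n)²) on pure pointer machines.

```python
class Node:
    """Minimal tree node for illustration: an ordered list of children."""
    def __init__(self, children=None):
        self.children = children or []

def on_leftmost_branch(x, y):
    """Return True iff x lies on the leftmost branch of the subtree
    rooted at y, by walking leftmost children. Worst-case O(depth)."""
    node = y
    while node is not None:
        if node is x:
            return True
        node = node.children[0] if node.children else None
    return False

leaf = Node()
right = Node()
mid = Node([leaf, right])
root = Node([mid])
print(on_leftmost_branch(leaf, root))   # True
print(on_leftmost_branch(right, root))  # False
```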
Worst-Case and Amortised Optimality in Union-Find (Extended Abstract)
, 1999
"... We study the interplay between worstcase and amortised time bounds for the classic Disjoint Set Union problem (UnionFind). We ask whether it is possible to achieve optimal worstcase and amortised bounds simultaneously. Furthermore we would like to allow a tradeoff between the worstcase time for ..."
Abstract

Cited by 1 (0 self)
We study the interplay between worst-case and amortised time bounds for the classic Disjoint Set Union problem (Union-Find). We ask whether it is possible to achieve optimal worst-case and amortised bounds simultaneously. Furthermore, we would like to allow a trade-off between the worst-case time for a query and for an update. We answer this question by first providing lower bounds for the possible worst-case time trade-offs, as well as lower bounds which show where in this trade-off range optimal amortised time is achievable. We then give an algorithm which tightly matches both lower bounds simultaneously. The lower bounds are provided in the cell-probe model as well as in the algebraic real-number RAM, and the upper bounds hold for a RAM with logarithmic word size and a modest instruction set. Our lower bounds show that for worst-case query and update times t_q and t_u respectively, one must have t_q = Ω(lg n / lg t_u), and only for t_q = Ω(α(m, n)) can this trade-off be achieved simultaneously ...
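For reference, the amortised optimum discussed above is achieved by the textbook disjoint-set forest, sketched here with union by rank and path compression, giving amortised inverse-Ackermann O(α(m, n)) time per operation. The paper’s question is how far worst-case per-operation guarantees can be pushed alongside this amortised optimum.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression:
    amortised near-constant (inverse-Ackermann) time per operation."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:  # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:  # union by rank
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2), uf.find(0) == uf.find(4))  # True False
```

Note that both find and union here take Θ(lg n) time in the worst case for a single operation; the trade-offs above concern exactly that worst-case behaviour.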