Results 1 - 5 of 5
Parallel Execution of Prolog Programs: A Survey
Abstract

Cited by 61 (24 self)
Since the early days of logic programming, researchers in the field realized the potential for exploiting the parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation makes the techniques used in the corresponding parallelizing compilers and runtime systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in the parallel execution of logic programming languages, along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared-memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory ...
On the Complexity of Or-Parallelism
, 1999
Abstract

Cited by 9 (9 self)
We formalize the implementation mechanisms required to support or-parallel execution of logic programs in terms of operations on dynamic data structures. Upper and lower bounds are derived, in terms of the number of operations n performed on the data structure, for the problem of guaranteeing correct semantics during or-parallel execution. The lower bound Ω(lg n) formally proves the impossibility of achieving an ideal implementation (i.e., a parallel implementation with constant-time overhead per operation). We also derive an upper bound of Õ(n^(1/3)) per operation for or-parallel execution. This upper bound is far better than what has been achieved in the existing or-parallel systems and indicates that faster implementations may be feasible.
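The execution model being formalized can be illustrated with a small sketch: the alternative clauses for a goal are explored by separate workers, each on its own copy of the current variable bindings, so branches cannot observe each other's bindings (the correctness requirement whose data-structure cost the paper bounds). This is an illustrative Python analogy, not code from any real or-parallel Prolog system; the names `solve`, `try_clause`, and the clause representation are all hypothetical.

```python
# Hedged sketch of or-parallel search: each alternative "clause" for a
# goal runs in its own worker on a private copy of the bindings
# (environment copying). Illustrative only -- not the paper's
# pointer-level data-structure model.
from concurrent.futures import ThreadPoolExecutor

def solve(goal, bindings, clauses):
    """Try every alternative clause for `goal` in parallel and collect
    all solutions, mirroring Prolog's exploration of alternatives."""
    def try_clause(clause):
        env = dict(bindings)       # private copy: branches must not
        return clause(goal, env)   # see each other's bindings
    with ThreadPoolExecutor() as pool:
        results = pool.map(try_clause, clauses)  # preserves clause order
    return [env for env in results if env is not None]

# Two alternative clauses for a goal color(X): X = red ; X = blue.
clauses = [
    lambda g, env: {**env, "X": "red"},
    lambda g, env: {**env, "X": "blue"},
]
solutions = solve("color(X)", {}, clauses)
```

Because each branch gets a fresh copy of the environment, the two workers can bind X independently; the cost of maintaining (or sharing) such per-branch environments is exactly the kind of overhead the paper's bounds quantify.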
The Temporal Precedence Problem
 Algorithmica
, 1998
Abstract

Cited by 5 (4 self)
In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a, b) returns true iff element a was inserted before element b temporally. We provide a solution to the problem with worst-case time complexity O(lg lg n) per operation, where n is the number of elements inserted. We also demonstrate that the problem has a lower bound of Ω(lg lg n) on pointer machines. Thus the proposed scheme is optimal on pointer machines. Keywords: Algorithms, Dynamic Data Structures, Complexity.
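The insert/precedes interface described above is easy to state concretely. The sketch below realizes the semantics with a per-element integer timestamp, which is trivial on a RAM; it is emphatically NOT the paper's pointer-machine solution, since pointer machines forbid exactly this kind of arithmetic on addresses and counters (which is why the problem is nontrivial there). The class name and method names are illustrative, not from the paper.

```python
# Naive RAM sketch of the Temporal Precedence interface: insert(a)
# records a new element, precedes(a, b) answers whether a was inserted
# strictly before b. A counter gives O(1) per operation on a RAM, but
# this technique is unavailable on the pointer machines the paper
# analyzes.
class TemporalPrecedence:
    def __init__(self):
        self._stamp = {}   # element -> insertion timestamp
        self._clock = 0

    def insert(self, a):
        """Record element a; assumes each element is inserted once."""
        self._stamp[a] = self._clock
        self._clock += 1

    def precedes(self, a, b):
        """True iff a was inserted strictly before b."""
        return self._stamp[a] < self._stamp[b]
```

The paper's contribution is achieving O(lg lg n) per operation without timestamps or arithmetic, using only pointer manipulation, and showing that bound optimal in that model.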
On the Complexity of Dependent And-Parallelism in Logic Programming
, 2002
Abstract

Cited by 2 (2 self)
We present results concerning the computational complexity of some of the key execution mechanisms required to handle Dependent And-Parallel executions in logic programming. We develop formal abstractions of the problems in terms of dynamic trees, design data structures for efficient solutions, and present some lower-bound results. This work is part of a larger effort to understand, formalize, and study the complexity-theoretic and algorithmic issues in parallel implementations of logic programming languages. These results have already impacted the implementation of novel parallel logic programming systems.
Efficient Algorithms for the Temporal Precedence Problem
 Information Processing Letters
, 1998
Abstract

Cited by 2 (1 self)
In this paper we study the complexity of what we call the Temporal Precedence (TP) Problem on pointer machines. Intuitively, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. We are not aware of any study regarding the complexity of this problem on pointer machines.