Results 1–10 of 23
A Methodology for Granularity-Based Control of Parallelism in Logic Programs
 Journal of Symbolic Computation, Special Issue on Parallel Symbolic Computation
, 1996
"... ..."
Parallel Execution of Prolog Programs: A Survey
"... Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their highlevel nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic ..."
Abstract. Cited by 61 (24 self).
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation makes the techniques used in the corresponding parallelizing compilers and runtime systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages, along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared-memory implementation of or-parallelism, and-parallelism, and combinations of the two. We also explore some related issues, such as memory ...
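As a hedged illustration of the or-parallelism the survey discusses (a minimal sketch, not any particular system's implementation — the clause table and solver below are invented for this example), alternative clause matches for a goal can be explored concurrently, each branch working on its own copy of the binding environment:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical clause database: color(X) has three alternative facts.
CLAUSES = {"color": [{"X": "red"}, {"X": "green"}, {"X": "blue"}]}

def try_clause(bindings, alternative):
    # Each or-branch gets its own copy of the bindings, so branches
    # never interfere (a real system would manage a shared binding
    # environment with trails or binding arrays instead of copying).
    env = dict(bindings)
    env.update(alternative)
    return env

def or_parallel_solve(goal, bindings):
    # Explore every matching clause of `goal` in parallel and
    # collect every solution (all-solutions or-parallelism).
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(try_clause, bindings, alt)
                   for alt in CLAUSES[goal]]
        return [f.result() for f in futures]

solutions = or_parallel_solve("color", {})
print(sorted(env["X"] for env in solutions))  # ['blue', 'green', 'red']
```

Copying the environment per branch sidesteps the multiple-bindings problem that real or-parallel systems solve with binding arrays or stack copying.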
Automatic Parallelization of Irregular and Pointer-Based Computations: Perspectives from Logic and Constraint Programming
 Parallel Computing
, 1997
"... . Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointer ..."
Abstract. Cited by 17 (12 self).
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for interprocedural pointer-aliasing analysis for independence detection and ...
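The independence detection mentioned in the abstract can be sketched with the strict-independence condition used in and-parallel logic programming: two goals may run in parallel only if they share no variables. This is a toy syntactic check (real compilers approximate sharing at compile time with abstract interpretation; the helper names are invented for this example):

```python
def shared_vars(goal_a_vars, goal_b_vars):
    # Variables occurring in both goals; aliasing analysis would also
    # have to account for variables bound to terms containing shared
    # variables, which this syntactic check ignores.
    return set(goal_a_vars) & set(goal_b_vars)

def independent(goal_a_vars, goal_b_vars):
    # Strict independence: no shared variables means the goals cannot
    # influence each other's bindings, so and-parallel execution is safe.
    return not shared_vars(goal_a_vars, goal_b_vars)

# p(X, Y) and q(Y, Z) share Y, so they must run sequentially;
# p(X, Y) and r(Z) are independent and may run in parallel.
print(independent(["X", "Y"], ["Y", "Z"]))  # False
print(independent(["X", "Y"], ["Z"]))       # True
```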
Reasoning About Concurrent Objects
 In: Proc. Asia-Pacific Software Engineering Conf. (APSEC '95), IEEE, Los Alamitos, Cal.
, 1995
"... Embedded specifications in objectoriented (OO) languages such as Eiffel and Sather are based on a rigorous approach towards validation, compatibility and reusability of sequential programs. The underlying method of "designbycontract" is based on Hoare logic for which concurrency extensions exist. ..."
Abstract. Cited by 16 (7 self).
Embedded specifications in object-oriented (OO) languages such as Eiffel and Sather are based on a rigorous approach towards validation, compatibility, and reusability of sequential programs. The underlying method of "design-by-contract" is based on Hoare logic, for which concurrency extensions exist. However, concurrent OO languages are still in their infancy. They have inherently imperative facets, such as object identity, sharing, and synchronisation, which cannot be ignored in the semantics. Any marriage of objects and concurrency requires a trade-off in a space of intertwined qualities. This paper summarises our work on a type system, calculus, and operational model for concurrent objects in a minimal extension of the Eiffel and Sather languages (cSather). We omit concurrency-control constructs and instead use assertions as synchronisation constraints for asynchronous functions. We show that this provides a framework in which subtyping and concurrency can coexist.
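The idea of using assertions as synchronisation constraints can be sketched as guarded methods: a call blocks until its precondition holds instead of failing. This is a hedged illustration in Python with a condition variable, not cSather's actual mechanism; the `BoundedBuffer` class is invented for the example:

```python
import threading

class BoundedBuffer:
    """Synchronisation expressed through preconditions: a call simply
    waits until its precondition is true, in the spirit of turning
    contract assertions into guards rather than raising on violation."""

    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        self._cond = threading.Condition()

    def put(self, item):
        with self._cond:
            # Precondition as synchronisation constraint:
            # wait until "len(items) < capacity" holds.
            self._cond.wait_for(lambda: len(self._items) < self._capacity)
            self._items.append(item)
            self._cond.notify_all()

    def get(self):
        with self._cond:
            # Precondition: the buffer is non-empty.
            self._cond.wait_for(lambda: len(self._items) > 0)
            item = self._items.pop(0)
            self._cond.notify_all()
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
print(buf.get())  # a
```

The design choice here is that clients never see a violated precondition: the runtime delays the call until the state satisfies it, which is what makes contracts usable as concurrency control.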
Abstract specialization and its application to program parallelization
 VI International Workshop on Logic Program Synthesis and Transformation, number 1207 in LNCS
, 1997
"... Abstract. Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specializatio ..."
Abstract. Cited by 13 (4 self).
Abstract. Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being performed with respect to abstract values (substitutions), and multiple specialization applied to automatic program parallelization in the &Prolog compiler. Abstract executability, the main concept underlying abstract specialization, is formalized, the design of the specialization system is presented, and a non-trivial example of specialization in automatic parallelization is given.
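The notion of abstract executability can be illustrated with a toy specializer (a hedged sketch, not the paper's system; the one-property abstract domain and `make_safe_div` are invented for this example): when analysis proves a test succeeds for every concrete value described by an abstract value, the test can be removed from the specialized version.

```python
# Toy abstract domain for one property: is a value known to be non-zero?
NONZERO, UNKNOWN = "nonzero", "unknown"

def make_safe_div(abs_y):
    # "Abstract executability": if the abstract value of y is NONZERO,
    # the run-time test `y == 0` is known to fail for every concrete y
    # it describes, so the specialized version omits it entirely.
    if abs_y == NONZERO:
        def specialized(x, y):
            return x / y          # guard optimized away
        return specialized

    def generic(x, y):
        if y == 0:                # guard must remain
            raise ZeroDivisionError("y must be non-zero")
        return x / y
    return generic

div_fast = make_safe_div(NONZERO)   # specialized for proven-non-zero y
div_safe = make_safe_div(UNKNOWN)   # generic version keeps the check
print(div_fast(6, 3), div_safe(6, 2))  # 2.0 3.0
```

Multiple specialization, as in the abstract, would keep both versions and route each call site to the variant matching the abstract substitution inferred there.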
Tree Shaped Computations as a Model for Parallel Applications
 In ALV'98 Workshop on Application-Based Load Balancing. SFB 342, TU München
, 1998
"... It is shown how a large class of applications can be parallelized by modeling them as tree shaped computations. In particular this class contains many highly irregular and completely unpredictable computations as they occur in heuristic search. ..."
Abstract. Cited by 11 (3 self).
It is shown how a large class of applications can be parallelized by modeling them as tree-shaped computations. In particular, this class contains many highly irregular and completely unpredictable computations as they occur in heuristic search.
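A tree-shaped computation can be sketched as follows (a minimal illustration, not the paper's load-balancing scheme; the functions are invented for this example): subtrees are independent tasks of unpredictable size, so the root is split across workers, and a real system would keep re-splitting large subtrees dynamically.

```python
from concurrent.futures import ThreadPoolExecutor

def count_leaves(depth, branching):
    # Sequential exploration of one subtree. Its size is not known in
    # advance, which models the irregularity of heuristic search trees
    # (here the tree is regular only to keep the example checkable).
    if depth == 0:
        return 1
    return sum(count_leaves(depth - 1, branching) for _ in range(branching))

def parallel_tree_search(depth, branching, workers=4):
    # Split at the root: each child subtree is an independent task.
    # Dynamic load balancing would re-split subtrees that turn out
    # large; this sketch splits only once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_leaves, depth - 1, branching)
                   for _ in range(branching)]
        return sum(f.result() for f in futures)

print(parallel_tree_search(3, 2))  # 8
```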
The Design And Implementation Of Massively Parallel Knowledge Representation And Reasoning Systems: A Connectionist Approach
, 1996
"... Efficient knowledge representation and reasoning is an important component of intelligent activity, and is a crucial aspect in the design of largescale intelligent systems. This dissertation explores the design, analysis, and implementation of massively parallel knowledge representation and reasoni ..."
Abstract. Cited by 8 (1 self).
Efficient knowledge representation and reasoning is an important component of intelligent activity, and is a crucial aspect in the design of large-scale intelligent systems. This dissertation explores the design, analysis, and implementation of massively parallel knowledge representation and reasoning systems which can encode very large knowledge bases and respond to a class of queries in real time, with reasoning episodes expected to span a fraction of a second. The dissertation attempts to design efficient, large-scale knowledge base systems by: (i) exploiting massive parallelism; and (ii) constraining representational and inferential capabilities to achieve tractability, while still retaining sufficient expressive power to capture a broad class of reasoning in intelligent systems. To this end, shruti, a connectionist reasoning system which models reflexive (i.e., effortless and spontaneous) reasoning, serves as the knowledge representation and reasoning framework. Shruti-based mas...
Platypus: A platform for distributed answer set solving
 in Proc. of the Eighth International Conference on Logic Programming and Nonmonotonic Reasoning
, 2005
"... Abstract. We propose a model to manage the distributed computation of answer sets within a general framework. This design incorporates a variety of software and hardware architectures and allows its easy use with a diverse cadre of computational elements. Starting from a generic algorithmic scheme, ..."
Abstract. Cited by 8 (2 self).
Abstract. We propose a model to manage the distributed computation of answer sets within a general framework. This design incorporates a variety of software and hardware architectures and allows its easy use with a diverse cadre of computational elements. Starting from a generic algorithmic scheme, we develop a platform for distributed answer set computation, describe its current state of implementation, and give some experimental results.
On the Complexity of Parallel Implementation of Logic Programs (Extended Abstract)
, 1997
"... We study several datastructures and operations that commonly arise in parallel implementations of logic programming languages. The main problems that arise in implementing such parallel systems are abstracted out and precisely stated. Upper and lower bounds are derived for several of these problems ..."
Abstract. Cited by 7 (5 self).
We study several data structures and operations that commonly arise in parallel implementations of logic programming languages. The main problems that arise in implementing such parallel systems are abstracted out and precisely stated. Upper and lower bounds are derived for several of these problems. We prove a lower bound of Ω(log n) on the overhead incurred in implementing even a simplified version of or-parallelism. We prove that the aliasing problem in parallel logic programming is at least as hard as the union-find problem. We prove that an and-parallel implementation can be realized on an extended pointer machine with an O(1) overhead.
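The union-find structure the abstract reduces aliasing to can be sketched directly (a standard textbook implementation with path halving and union by rank, not the paper's pointer-machine construction): unifying two unbound logic variables is a union, and dereferencing a variable is a find.

```python
class UnionFind:
    """Union-find with path halving and union by rank; the aliasing
    problem in the abstract is at least this hard, since each
    variable-variable unification merges two equivalence classes."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: every other node on the path is pointed
        # at its grandparent, flattening the tree over time.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach shorter tree under taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Aliasing X0 with X1 and X1 with X2 puts all three in one class.
uf = UnionFind(4)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2), uf.find(0) == uf.find(3))  # True False
```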
Some Techniques for Automated, Resource-Aware Distributed and Mobile Computing in a Multi-Paradigm Programming System
 In Proc. of EURO-PAR 2004, number 3149 in LNCS
, 2004
"... Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to di#erent receiving nodes in a highbandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks such that each one involves su ..."
Abstract. Cited by 6 (3 self).
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared to the task-creation and communication costs and other such practical overheads.