Results 1 - 7 of 7
Monitors, Messages, and Clusters: the p4 Parallel Programming System
Abstract

Cited by 120 (14 self)
p4 is a portable library of C and Fortran subroutines for programming parallel computers. It is the current version of a system that has been in use since 1984. It includes features for explicit parallel programming of shared-memory machines, distributed-memory machines (including heterogeneous networks of workstations), and clusters, by which we mean shared-memory multiprocessors communicating via message passing. We discuss here the design goals, history, and system architecture of p4 and describe briefly a diverse collection of applications that have demonstrated the utility of p4.
1 Introduction
p4 is a library of routines designed to express a wide variety of parallel algorithms portably, efficiently, and simply. The goal of portability requires it to use widely accepted models of computation rather than specific vendor implementations of those models. The goal of efficiency requires it to use models of computation relatively close to those provided by the machines themselves and t...
Experiments with Discrimination-Tree Indexing and Path Indexing for Term Retrieval
 JOURNAL OF AUTOMATED REASONING
, 1990
Abstract

Cited by 50 (0 self)
This article addresses the problem of indexing and retrieving first-order predicate calculus terms in the context of automated deduction programs. The four retrieval operations of concern are to find variants, generalizations, instances, and terms that unify with a given term. Discrimination-tree indexing is reviewed, and several variations are presented. The path-indexing method is also reviewed. Experiments were conducted on large sets of terms to determine how the properties of the terms affect the performance of the two indexing methods. Results of the experiments are presented.
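The core idea of discrimination-tree indexing described in this abstract can be sketched briefly: terms are indexed by their preorder symbol sequence, with variables collapsed to a wildcard, so retrieval follows a single path and returns a candidate set that is then checked exactly. This is a minimal illustrative sketch, not the paper's implementation; the term encoding (nested tuples, variables as strings starting with "?") and all names are assumptions.

```python
# Minimal discrimination-tree sketch (illustrative; term encoding is an
# assumption: compound terms are tuples ("f", arg1, ...), variables are
# strings starting with "?", constants are plain strings).

WILD = "*"

def flatten(term):
    """Preorder symbol sequence of a term; every variable collapses to WILD."""
    if isinstance(term, str):
        return [WILD] if term.startswith("?") else [term]
    head, *args = term
    out = [head]
    for a in args:
        out.extend(flatten(a))
    return out

class DiscTree:
    def __init__(self):
        self.root = {}

    def insert(self, term):
        """Walk/extend the path of the term's symbols; store it at the leaf."""
        node = self.root
        for sym in flatten(term):
            node = node.setdefault(sym, {})
        node.setdefault(None, []).append(term)   # None key = leaf bucket

    def variants(self, term):
        """Candidate variants: terms with the same flattened symbol path.
        (An exact variant check on the candidates would follow in practice.)"""
        node = self.root
        for sym in flatten(term):
            if sym not in node:
                return []
            node = node[sym]
        return node.get(None, [])
```

Because variables are collapsed to one wildcard, the tree returns candidates rather than exact answers (e.g. f(?x, ?x) and f(?x, ?y) share a path); the post-retrieval check resolves this, which is the usual division of labor in term indexing.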
Distributing Equational Theorem Proving
, 1993
Abstract

Cited by 22 (6 self)
In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and thereby prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic time speedups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
A Parallel Completion Procedure for Term Rewriting Systems
 In Conference on Automated Deduction
, 1992
Abstract

Cited by 12 (7 self)
We present a parallel completion procedure for term rewriting systems. Despite an extensive literature concerning the well-known sequential Knuth-Bendix completion procedure, little attention has been devoted to designing parallel completion procedures. Because naive parallelizations of sequential procedures lead to oversynchronization and poor performance, we employ a transition-based approach that enables more effective parallelizations. The approach begins with a formulation of the completion procedure as a set of transitions (in the style of Bachmair, Dershowitz, and Hsiang) and proceeds to a highly tuned parallel implementation that runs on a shared-memory multiprocessor. The implementation performs well on a number of standard examples.
Random Competition: A Simple, but Efficient Method for Parallelizing Inference Systems
 INSTITUT FÜR INFORMATIK, TECHNISCHE UNIVERSITÄT MÜNCHEN
, 1990
Abstract

Cited by 8 (2 self)
We present a very simple parallel execution model suitable for inference systems with nondeterministic choices (OR-branching points). All the parallel processors solve the same task without any communication. Their programs only differ in the initialization of the random number generator used for branch selection in depth-first backtracking search. This model, called random competition, permits us to calculate analytically the parallel performance for arbitrary numbers of processors. This can be done exactly and without any experiments on a parallel machine. Finally, due to their simplicity, competition architectures are easy (and therefore low-priced) to build. As an application of this systematic approach we compute speedup expressions for specific problem classes defined by their runtime distributions. The results vary from a speedup of 1 for linearly degenerate search trees up to clearly "superlinear" speedup for strongly imbalanced search trees. Moreover, we are able to give esti...
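The speedup behavior the abstract describes, 1 for degenerate trees and superlinear for imbalanced ones, follows from a simple observation: with n non-communicating randomized searchers, the parallel runtime is the minimum of n independent single-run runtimes. This is a minimal Monte-Carlo sketch of that model, not the paper's analytical method; the function names and the example runtime distributions are assumptions for illustration.

```python
# Monte-Carlo sketch of the random-competition model (illustrative only).
# n processors run the same randomized depth-first search with different
# random seeds and no communication; the first to finish wins, so the
# parallel runtime is the minimum of n i.i.d. single-run runtimes.
import random

def speedup(runtime_dist, n, trials=20000, seed=0):
    """Estimated E[single-run time] / E[min of n independent runs]."""
    rng = random.Random(seed)
    seq = sum(runtime_dist(rng) for _ in range(trials)) / trials
    par = sum(min(runtime_dist(rng) for _ in range(n))
              for _ in range(trials)) / trials
    return seq / par

# A degenerate search tree (every branch order costs the same) gains nothing:
constant = lambda r: 10.0
# A strongly imbalanced tree (half the branch orders finish almost at once,
# the rest are very expensive) lets some competitor get lucky:
imbalanced = lambda r: 1.0 if r.random() < 0.5 else 1000.0
```

With these parameters, speedup(constant, 4) is exactly 1, while speedup(imbalanced, 4) exceeds n = 4, the "superlinear" regime the abstract refers to, because the minimum of four draws is almost always a cheap run.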
On the Correctness of a Distributed Memory Gröbner Basis Algorithm
 In Rewriting Techniques and Applications
, 1992
Abstract

Cited by 7 (4 self)
We present an asynchronous MIMD algorithm for Gröbner basis computation. The algorithm is based on the well-known sequential algorithm of Buchberger. Two factors make the correctness of our algorithm non-trivial: the nondeterminism inherent in asynchronous parallelism, and the distribution of data structures, which leads to inconsistent views of the global state of the system. We demonstrate that by describing the algorithm as a nondeterministic sequential algorithm, and presenting the optimized parallel algorithm through a series of refinements to that algorithm, the algorithm is easier to understand and the correctness proof becomes manageable. The proof does, however, rely on algebraic properties of the polynomials in the computation, and does not follow directly from the proof of Buchberger's algorithm.
MaGIC, Matrix Generator for Implication Connectives: Release 2.1 . . .
, 1995
Abstract

Cited by 1 (0 self)
This is the documentation for release 2.1 of the program MaGIC (Matrix Generator for Implication Connectives). It is also included with the source of the program, which is available from arp.anu.edu.au by ftp. MaGIC allows the user to specify a system of logic as a Hilbert system with axioms and rules of inference, and it then searches for small algebraic models of that logic. Many axioms involving standard connectives are hard-coded, and a number of logics defined in terms of them are made available. Further connectives and further axioms and rules may be defined by the user. It is easy to customise MaGIC by adding code to implement more axioms and more logics. The documentation explains how to do this. Release 2.1 is a considerable advance on release 2.0, which was made three years ago. All current MaGIC users are urged to upgrade to 2.1 now, as 2.0 will not be actively supported in future.
Introduction
The program MaGIC (Matrix Generator for Implication Connectives) is int...