Results 1 - 7 of 7
Distributing Equational Theorem Proving
, 1993
Abstract

Cited by 22 (6 self)
In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and thus prove the distributed system to be both complete and correct. We have implemented our system and show by nontrivial examples that drastic time speedups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
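The compete-then-cooperate cycle the abstract describes can be sketched roughly as follows. This is a minimal sketch with a hypothetical expert interface (each expert returns scored results), not the actual implementation: after each competition phase only the best results survive into the shared state.

```python
def team_work(experts, initial_facts, phases, keep=5):
    """Compete-then-cooperate sketch. Each expert is a hypothetical
    function: expert(shared_facts) -> list of (fact, score) pairs."""
    shared = list(initial_facts)
    for _ in range(phases):
        # Competition phase: experts derive results independently.
        candidates = []
        for expert in experts:
            candidates.extend(expert(shared))
        # Cooperation phase: keep the best results, forget the rest.
        candidates.sort(key=lambda pair: pair[1], reverse=True)
        best = [fact for fact, _ in candidates[:keep]]
        shared = list(dict.fromkeys(best + shared))  # deduplicate
    return shared
```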
A Heterogeneous Parallel Deduction System
 In Proc. FGCS'92 Workshop W3
, 1992
Abstract

Cited by 12 (2 self)
This paper describes the architecture, implementation and performance of a heterogeneous parallel deduction system (HPDS). The HPDS uses multiple deduction components, each of which attempts to find a refutation of the same input set, but using different deduction formats. The components cooperate by distributing clauses they generate to other components. The HPDS has been implemented in Prolog-D-Linda, which provides appropriate data transfer and synchronisation facilities for implementing parallel deduction systems. The performance of the HPDS has been investigated.

Parallel Deduction Systems

A parallel deduction system is one in which multiple deduction components run in parallel on separate processors. This is distinct from those deduction systems which run multiple deduction components alternately, such as the unit preference system [Wos, Carlson & Robinson, 1964], and those which are only conceptually parallel systems. Parallel deduction systems can be categorised ...
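The clause-sharing cooperation described above can be sketched roughly as follows, with parallelism simulated by round-robin scheduling and a plain set standing in for the Linda tuple space. The component interface is hypothetical, not the HPDS API:

```python
def hpds(components, initial_clauses, max_rounds=100):
    """Heterogeneous parallel deduction sketch: components with
    different deduction formats all work on the same clause set and
    cooperate by sharing the clauses they generate. Each component is
    a hypothetical function:
        step(clauses) -> (new_clauses, refutation_found)."""
    shared = set(initial_clauses)
    for _ in range(max_rounds):
        for step in components:
            new, refuted = step(frozenset(shared))
            if refuted:
                return True, shared
            shared |= set(new)  # distribute new clauses to all components
    return False, shared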
The GSAT/SA Family: Relating greedy satisfiability testing to simulated annealing
, 1994
Abstract

Cited by 6 (6 self)
In this paper, we investigate and relate various variants of the greedy satisfiability tester GSAT. We present these algorithms as members of a whole family of algorithms for finding a model for satisfiable propositional logic formulas. In particular, all algorithms can be formulated as instances of the same generic framework GenSAT. Comparing the algorithms, we concentrate not only on their overall performance, but are also interested in how properties like locality or different kinds of randomness influence the performance. To this end we define a new, theoretically complete instance of GenSAT. This variant can be viewed as a reformulation of simulated annealing (SA) within the GSAT family and thus defines a link between GSAT and SA. For most of the algorithms, experiments have been performed using very hard, randomly generated propositional logic formulas. The results of these experiments are also reported.
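For reference, the basic greedy flip loop that the GenSAT variants modify can be sketched as follows. This is a minimal, unoptimized GSAT; the paper's variants differ mainly in how the variable to flip is chosen:

```python
import random

def gsat(clauses, n_vars, max_flips=100, max_tries=10, seed=0):
    """Minimal GSAT sketch: greedy local search for a model of a CNF
    formula. Clauses are lists of signed integers (DIMACS-style)."""
    rng = random.Random(seed)

    def unsat_count(assign):
        return sum(1 for clause in clauses
                   if not any((lit > 0) == assign[abs(lit)] for lit in clause))

    for _ in range(max_tries):
        # Start from a random truth assignment.
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if unsat_count(assign) == 0:
                return assign
            # Greedily flip the variable that minimizes the number of
            # unsatisfied clauses (first best wins on ties).
            best_v, best_score = None, None
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                score = unsat_count(assign)
                assign[v] = not assign[v]
                if best_score is None or score < best_score:
                    best_v, best_score = v, score
            assign[best_v] = not assign[best_v]
    return None
```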
Parallel Path Planning with Multiple Evasion Strategies
, 2002
Abstract

Cited by 5 (2 self)
Probabilistic path planning driven by a potential field is a well-established technique and has been successfully exploited to solve complex problems arising in a variety of domains. However, planners implementing this approach are rather inefficient in dealing with certain types of local minima occurring in the potential field, especially those characterized by deep or large attraction basins. In this paper, we present a potential field planner combining "smart" escape motions from local minima with parallel computation to improve overall performance. The results obtained show significant improvement in planning time, along with a remarkable reduction in standard deviation. A performance comparison on a benchmark problem between the potential field planner and an existing, state-of-the-art planner is also included. Our investigation confirms the effectiveness of potential fields as a heuristic for solving difficult path planning problems.
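The escape behaviour described above can be illustrated with a minimal gradient-descent sketch. The configuration space, potential and evasion strategy here are illustrative assumptions, not the paper's planner:

```python
import random

def potential_field_plan(start, goal, potential, neighbors,
                         max_steps=500, seed=0):
    """Potential-field planning sketch: greedy descent on the
    potential, with a short random 'evasion' walk whenever a local
    minimum blocks progress. The evasion strategy is illustrative."""
    rng = random.Random(seed)
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            return True
        best = min(neighbors(pos), key=potential)
        if potential(best) < potential(pos):
            pos = best  # greedy descent step
        else:
            # Local minimum: random evasion motion of a few steps.
            for _ in range(3):
                pos = rng.choice(neighbors(pos))
    return False
```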
Performance Analysis of Competitive OR-Parallel Theorem Proving
, 1991
Abstract

Cited by 5 (0 self)
With random competition we propose a method for parallelizing arbitrary theorem provers. We can prove high efficiency (compared with other parallel theorem provers) of random competition on highly parallel architectures with thousands of processors. This method is suited for all kinds of distributed memory architectures, particularly for large networks of high-performance workstations, since no communication between the processors is necessary during runtime. On a set of examples we show the performance of random competition applied to the model elimination theorem prover SETHEO. Besides the speedup results for random competition, our theoretical analysis gives fruitful insight into the interrelation between search-tree structure, runtime distribution and the parallel performance of OR-parallel search in general.
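The key property of random competition (independent randomized runs with no runtime communication, the fastest run winning) can be sketched as follows; the prover interface is hypothetical:

```python
import random

def random_competition(prover, problem, n_processors, seed=42):
    """Random competition sketch: n_processors independent copies of a
    randomized prover run with different random seeds and no runtime
    communication; the fastest run determines the answer time. The
    interface prover(problem, r) -> runtime is hypothetical."""
    rng = random.Random(seed)
    runtimes = [prover(problem, rng.random()) for _ in range(n_processors)]
    return min(runtimes)
```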
Efficiency of Parallel Programs in Multi-Tasking Environments
, 1993
Abstract

Cited by 2 (1 self)
The conventional definition of efficiency of a parallel program is based on the assumption that nodes are homogeneous and exclusively available for the tasks of the program. In this paper we present a more general definition of efficiency, omitting the assumptions of homogeneity and exclusivity. This definition can be applied to parallel programs in, e.g., multi-user environments, or to isolated parts of complex parallel applications. We propose a definition of the dynamic efficiency relating to a single program run. Derived from the dynamic efficiency, we present the definition of the stochastic efficiency, relating to the average of several runs with a stochastic model of the load on the nodes. The stochastic efficiency can be used to define standard performance measures of parallel programs in distributed multi-user environments. Furthermore, we present some analytical and simulation results for two examples, using Markov processes to model the system load.

1 Introduction

We consider ...
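One way to read the generalization: divide the useful work of a single run by the compute capacity actually available to the program, rather than by p times the parallel runtime. A hedged sketch follows; the exact formulation in the paper may differ:

```python
def dynamic_efficiency(work_done, capacities):
    """Generalized efficiency sketch: useful sequential work of one
    run divided by the total compute capacity actually available to
    the program on each (possibly heterogeneous or shared) node."""
    return work_done / sum(capacities)
```

With p identical, exclusively available nodes each contributing capacity T_par, this reduces to the classical efficiency T_seq / (p * T_par).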
On the Definition of Speedup
, 1993
Abstract

Cited by 2 (0 self)
We propose an alternative definition for the speedup of parallel algorithms. Let A be a sequential algorithm and B a parallel algorithm for solving the same problem. If A and/or B are randomized, or if we are interested in their performance on a probability distribution of problem instances, the running times are described by random variables T_A and T_B. The speedup is usually defined as E[T_A]/E[T_B], where E is the arithmetic mean. This notion of speedup delivers just a number, i.e. much information about the distribution is lost. For example, there is no variance of the speedup. To define a measure for possible fluctuations of the speedup, a new notion of speedup is required. The basic idea is to define speedup as M(T_A/T_B), where the functional form of M has to be determined. Also, we argue that in many cases M(T_A/T_B) is more informative than E[T_A]/E[T_B] for a typical user of A and B. We present a set of intuitive axioms that any speedup function M(T...
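The contrast between E[T_A]/E[T_B] and a mean of the ratio M(T_A/T_B) can be made concrete. Here M is illustrated with the geometric mean, which is only one candidate; the paper derives the admissible forms of M from its axioms:

```python
import math

def speedups(times_a, times_b):
    """Contrast the usual speedup E[T_A]/E[T_B] with a mean of the
    per-run ratio M(T_A/T_B), taking M to be the geometric mean as an
    illustration. times_a[i] and times_b[i] are paired run times."""
    classic = (sum(times_a) / len(times_a)) / (sum(times_b) / len(times_b))
    ratios = [a / b for a, b in zip(times_a, times_b)]
    geometric = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return classic, geometric
```

Unlike the ratio of means, the per-run ratios also admit a variance, which is exactly the fluctuation measure the abstract asks for.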