Results 1 – 6 of 6
Anomalies in Parallel Branch-and-Bound Algorithms
, 1984
Abstract

Cited by 50 (3 self)
We consider the effects of parallelizing branch-and-bound algorithms by expanding several live nodes simultaneously. It is shown that it is quite possible for a parallel branch-and-bound algorithm using n2 processors to take more time than one using n1 processors even though n1 < n2. Furthermore, it is also possible to achieve speedups that are in excess of the ratio n2/n1. Experimental results with the 0/1-Knapsack and Traveling Salesperson problems are also presented.
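The setting the abstract analyzes can be sketched as follows: a best-first branch-and-bound search in which each synchronous round expands up to p best live nodes, simulating p processors. This is a minimal illustration, not the paper's implementation; the function name, the batching scheme, and the 0/1-knapsack bound are assumptions, and tie-breaking among equal bounds is exactly where detrimental or super-linear anomalies can arise.

```python
from heapq import heappush, heappop

def knapsack_bnb(values, weights, capacity, p=1):
    # Best-first branch and bound for 0/1 knapsack; each "round"
    # expands up to p best live nodes, simulating p synchronous
    # processors.  Returns (best_value, rounds).
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, room):
        # Fractional-knapsack upper bound over the remaining items.
        b = value
        for i in order[level:]:
            if weights[i] <= room:
                room -= weights[i]
                b += values[i]
            else:
                b += values[i] * room / weights[i]
                break
        return b

    best = 0
    live = [(-bound(0, 0, capacity), 0, 0, capacity)]  # (-bound, level, value, room)
    rounds = 0
    while live:
        rounds += 1
        batch = [heappop(live) for _ in range(min(p, len(live)))]
        for neg_b, level, value, room in batch:
            if -neg_b <= best or level == n:
                continue  # pruned by the incumbent, or fully expanded
            i = order[level]
            if weights[i] <= room:  # branch: include item i
                v, r = value + values[i], room - weights[i]
                best = max(best, v)
                heappush(live, (-bound(level + 1, v, r), level + 1, v, r))
            # branch: exclude item i
            heappush(live, (-bound(level + 1, value, room), level + 1, value, room))
    return best, rounds
```

Comparing the round counts returned for different p on the same instance is the experiment the abstract's anomaly results are about: more processors per round need not mean fewer rounds.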
Initialization of Parallel Branch-and-Bound Algorithms
, 1994
Abstract

Cited by 14 (1 self)
Four different initialization methods for parallel branch-and-bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three of the methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best overall efficiency when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
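For intuition, one commonly described style of initialization with a centralized OPEN set can be sketched like this. To be clear, this is an illustration I am supplying, not necessarily any of the paper's four methods; `initialize_open` and the round-robin deal are assumptions.

```python
def initialize_open(root, children, p):
    # Generic root-expansion initialization (illustrative sketch, not
    # necessarily one of the paper's four methods): expand subproblems
    # from a centralized OPEN set until it holds at least p of them,
    # then deal them out round-robin as per-processor starting work.
    open_set = [root]
    while 0 < len(open_set) < p:
        node = open_set.pop(0)
        open_set.extend(children(node))
    return [open_set[i::p] for i in range(p)]
```

Note that how evenly the final OPEN set divides among the p processors depends on the branching factor of `children` — the same kind of dependence the abstract's efficiency analysis points to.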
Scheduling Problems in a Practical Allocation Model
, 1998
Abstract

Cited by 13 (1 self)
A parallel computational model is defined which addresses I/O contention, latency, and pipelined message passing between tasks allocated to different processors. The model can be used for parallel task allocation on either a network of workstations or on a multistage interconnected parallel machine. To study performance bounds more closely, basic properties are developed for when the precedence constraints form a directed tree. It is shown that the problem of optimally scheduling a directed one-level precedence tree on an unlimited number of identical processors in this model is NP-hard. The problem of scheduling a directed two-level precedence tree is also shown to be NP-hard even when the system latency is zero. An approximation algorithm is then presented for scheduling directed one-level task trees on an unlimited number of processors with an approximation ratio of 3. Simulation results show that this algorithm is, in fact, much faster than its worst-case performance bound. Better...
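A toy cost evaluator shows why even one-level precedence trees are awkward to schedule once latency enters the model. This is a deliberate simplification I am supplying, not the paper's exact model: the root runs on processor 0, its result reaches any other processor one latency delay after the root finishes, and children placed on the same processor run sequentially.

```python
def one_level_makespan(root_time, child_times, assignment, latency):
    # Toy cost model (a simplification, not the paper's exact model):
    # the root runs on processor 0; its result reaches any other
    # processor `latency` time units after the root finishes; children
    # assigned to the same processor execute one after another.
    busy = {q: root_time + (0 if q == 0 else latency)
            for q in set(assignment) | {0}}
    for t, q in zip(child_times, assignment):
        busy[q] += t
    return max(busy.values())
```

With root time 1, three children of length 2, and latency 5, keeping everything on the root's processor gives makespan 7, while spreading the children across three processors gives 8: under high latency, maximal parallelism is not optimal, which is the tension the hardness results formalize.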
Scalability of Massively Parallel Depth-First Search
 In DIMACS Workshop
, 1994
Abstract

Cited by 8 (0 self)
We analyze and compare the scalability of two generic schemes for heuristic depth-first search on highly parallel MIMD systems. The first one employs a task attraction mechanism where the work packets are generated on demand by splitting the donor's stack. Analytical and empirical analyses show that this stack-splitting scheme works efficiently on parallel systems with a small communication diameter and a moderate number of processing elements. The second scheme, search-frontier splitting, also employs a task attraction mechanism, but uses precomputed work packets taken from a search-frontier level of the tree. At the beginning, a search frontier is generated and stored in the local memories. Then, the processors expand the subtrees of their frontier nodes, communicating only when they run out of work or a solution has been found. Empirical results obtained on a 32 × 32 = 1024 node MIMD system indicate that the search-frontier splitting scheme incurs fewer overheads and scale...
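The two splitting schemes can each be sketched in a few lines. Both functions are illustrative assumptions of mine: the paper's stack-splitting heuristic may donate a different subset than every other node, and its frontier construction is over real search trees rather than the uniform tree encoded here.

```python
from itertools import product

def split_stack(stack):
    # Stack splitting (sketch): the donor gives away every other
    # unexplored node; interleaving hedges against the uneven subtree
    # sizes typical of a depth-first stack, where entries near the
    # bottom tend to root larger subtrees.
    donated, kept = stack[0::2], stack[1::2]
    return kept, donated

def frontier_packets(branching, depth, p):
    # Search-frontier splitting (sketch): precompute every node at the
    # given depth of a uniform tree (each node encoded as its path of
    # branch choices from the root) and deal the resulting work
    # packets round-robin to p processors before the search starts.
    frontier = list(product(range(branching), repeat=depth))
    return [frontier[i::p] for i in range(p)]
```

The contrast the abstract measures falls out of the shapes: `split_stack` is invoked repeatedly at steal time (communication on demand), whereas `frontier_packets` is computed once up front and processors then run independently until they exhaust their packets.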
Using CSP Languages to Program Parallel Workstation Systems
 Future Gener. Comput. Syst
, 1992
Abstract

Cited by 1 (0 self)
During the last decade one of the most relevant events in the computer market has been the wide diffusion of workstations. In both industrial and research environments a huge amount of computing is done on personal workstations. Despite the rapid growth in networking technologies, however, a network of workstations cannot easily be seen as a global computational resource, although it represents a large amount of computing power. Moreover, its inherent parallelism is not accessible without a heavy effort to modify existing software and/or to develop new code. It is our belief that the CSP model is suitable for developing distributed applications for a particular class of such systems that can be defined as Parallel Workstation Systems. This thesis has been tested in the course of the DISC project. In DISC, the language implementation of the CSP model tries to minimize the programming effort toward the development of parallel applications, and a friendly programming environment, integrated in ...
(to be published by Elsevier in 1994) Initialization of Parallel Branch-and-Bound Algorithms
Abstract
Four different initialization methods for parallel branch-and-bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three of the methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best overall efficiency when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.