Results 1 – 8 of 8
A Distributed Processing Algorithm for Solving Integer Programs Using a Cluster of Workstations
Parallel Computing, 1994
Abstract

Cited by 13 (0 self)
The sequential Branch-and-Bound algorithm is the most established method for solving mixed integer and discrete programming problems. It is based on a tree search over the possible subproblems of the original problem. A tree search has two goals: (i) finding a good, and ultimately the best, integer solution, and (ii) proving that the best solution has been found or that no integer feasible solution exists. We call these stages 1 and 2 of the tree search. In general it is extremely difficult to choose the ideal search strategy for stage 1 or stage 2 of a given integer programming (IP) problem. On the other hand, by investigating a number of different strategies (and hence different search trees), a good solution can be reached quickly for many practical IP problems. Starting from this observation, a parallel branch and bound algorithm has been designed which exploits this two-stage approach. In the first stage we investigate a number of alternative se...
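The two search stages the abstract describes can be illustrated with a minimal sequential sketch. This is not the paper's parallel algorithm; it is a plain depth-first Branch-and-Bound on a 0/1 knapsack instance, with a fractional (LP) bound, and all names and the instance are illustrative.

```python
# Minimal sequential branch-and-bound sketch for a 0/1 knapsack IP.
# Stage 1 (finding good incumbents) and stage 2 (proving optimality via
# bounds) are interleaved here; the paper parallelizes stage 1 by trying
# several search strategies concurrently. Names are illustrative.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(k, cur_val, cur_wt):
        # Fractional relaxation of the remaining items: an upper bound.
        b, wt = cur_val, cur_wt
        for i in range(k, n):
            if wt + w[i] <= capacity:
                wt += w[i]
                b += v[i]
            else:
                b += v[i] * (capacity - wt) / w[i]
                break
        return b

    best = 0
    def search(k, cur_val, cur_wt):
        nonlocal best
        if cur_wt > capacity:
            return                      # infeasible subproblem: prune
        best = max(best, cur_val)       # stage 1: record incumbent
        if k == n or bound(k, cur_val, cur_wt) <= best:
            return                      # stage 2: bound proves no improvement
        search(k + 1, cur_val + v[k], cur_wt + w[k])  # branch: take item k
        search(k + 1, cur_val, cur_wt)                # branch: skip item k

    search(0, 0, 0)
    return best
```

In the parallel setting sketched by the abstract, each worker would run such a search with a different branching or node-selection strategy during stage 1.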
Initialization of Parallel Branch-and-Bound Algorithms
, 1994
Abstract

Cited by 10 (1 self)
Four different initialization methods for parallel Branch-and-Bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows; it indicates that the efficiency of three of the methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency for the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
Dynamic Programming Parallel Implementations for the Knapsack Problem
, 1993
Abstract

Cited by 7 (5 self)
A systolic algorithm for the dynamic programming approach to the knapsack problem is presented. The algorithm can run on any number of processors and has optimal time speedup and processor efficiency. The running time of the algorithm is Θ(mc/q + m) on a ring of q processors, where c is the knapsack size and m is the number of object types. A new procedure for the backtracking phase of the algorithm with a time complexity of Θ(m) is also proposed, which is an improvement on the usual backtracking strategies with a time complexity of Θ(m + c). Given a practical implementation, our analysis shows which of the two backtracking algorithms (the classical or the modified) is more efficient with respect to the total running time. Experiments have been performed on an iWarp machine for randomly generated instances. They support the theoretical results and show that the proposed algorithm performs well for a wide range of problem sizes.
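The recurrence underlying the abstract can be shown with a small sequential sketch of the unbounded knapsack DP (m object types, capacity c). The paper's systolic ring distribution and its Θ(m) backtracking procedure are not reproduced here; this sketch only records, for each capacity, the last item chosen so one optimal multiset can be recovered. All names are illustrative.

```python
# Sequential O(mc) dynamic program for the unbounded knapsack problem.
# f[y] is the best value achievable with capacity y; choice[y] is the
# item type used in the final update of f[y], enabling solution recovery.

def knapsack_dp(values, weights, c):
    m = len(values)
    f = [0] * (c + 1)
    choice = [-1] * (c + 1)
    for y in range(1, c + 1):           # capacities in increasing order
        for i in range(m):
            if weights[i] <= y and f[y - weights[i]] + values[i] > f[y]:
                f[y] = f[y - weights[i]] + values[i]
                choice[y] = i
    # Classical-style backtracking: walk back through `choice`.
    counts = [0] * m
    y = c
    while y > 0 and choice[y] != -1:
        counts[choice[y]] += 1
        y -= weights[choice[y]]
    return f[c], counts
```

The systolic version would pipeline the capacity dimension around the q-processor ring; here the whole table sits on one processor.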
Parallel and Distributed Branch-and-Bound/A* Algorithms
, 1994
Abstract

Cited by 2 (0 self)
In this report, we propose new concurrent data structures and load balancing strategies for Branch-and-Bound (B&B)/A* algorithms in two models of parallel programming: shared and distributed memory. For the shared memory model (SMM), we present a general methodology which allows concurrent manipulation of most tree data structures, and show its usefulness for implementation on multiprocessors with global shared memory. Some priority queues suited to the basic operations performed by B&B algorithms are described: skew heaps, funnels, and splay trees. We also detail a specific data structure, called a treap, designed for the A* algorithm. These data structures are implemented on a parallel machine with shared memory, the KSR1. For the distributed memory model (DMM), we show that the use of partial cost in B&B algorithms is not enough to balance nodes between the local queues. Thus, we introduce another notion of priority, called potentiality, between nodes that take...
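Of the priority queues the report names, the skew heap is the simplest to sketch: all operations reduce to a merge that unconditionally swaps children. The sketch below is the standard sequential min-heap version, not the report's concurrent shared-memory variant, and the names are illustrative.

```python
# Minimal sequential skew heap: a merge-based priority queue of the
# kind evaluated for B&B OPEN sets. push and pop are both one merge.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def merge(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                     # keep the smaller root on top
    # Merge into the right child, then swap children: the "skew" step
    # that gives amortized O(log n) without any balance bookkeeping.
    a.left, a.right = merge(a.right, b), a.left
    return a

def push(root, key):
    return merge(root, Node(key))

def pop(root):
    return root.key, merge(root.left, root.right)
```

A B&B worker would push newly branched subproblems keyed by their bound and pop the most promising one; the concurrent version adds per-node synchronization, which is omitted here.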
Changing the Distribution Depth During a Parallel Tree Search
Abstract

Cited by 1 (0 self)
We present a new parallel tree search method for finding one solution to a constraint satisfaction problem that aims at guaranteeing a speedup greater than a fixed bound. It consists in dynamically choosing the depth of the subtrees to distribute. This choice is based on a criterion which takes into account a heuristic valuation of the number of nodes of the subtree to parallelize, in order to estimate the maximum size of the search space. We show this method is likely to increase the speedup.
1 Introduction
Since Constraint Satisfaction Problems (CSPs) fall into the class of NP-complete problems, they often quickly become intractable as their size grows. Parallel search for one solution is a way to divide the resolution time by a significant factor and thus to find a solution more quickly. This paper deals with the depth-first search (DFS) managed by the Backtrack algorithm when only one solution is needed. When all the solutions of a problem have to be found by ex...
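The distribution idea can be sketched in a few lines: each assignment of the first d variables roots an independent subtree, so choosing d fixes the grain of the parallel work. The paper's heuristic valuation of subtree sizes is replaced here by a naive branching-factor count, and the `factor` parameter and all names are assumptions for illustration only.

```python
# Sketch of variable-depth work distribution for a parallel CSP search.
# Nodes at depth d become independent subproblems handed to workers.

from itertools import product

def choose_depth(domain_sizes, workers, factor=4):
    # Smallest d giving at least factor*workers depth-d prefixes
    # (a crude stand-in for the paper's subtree-size heuristic).
    count, d = 1, 0
    for size in domain_sizes:
        if count >= factor * workers:
            break
        count *= size
        d += 1
    return d

def subproblems(domains, d):
    # Each assignment of the first d variables roots one subtree.
    return list(product(*domains[:d]))
```

A deeper d yields more, smaller subtrees (better load balance, more overhead); choosing it dynamically per instance is the method's contribution.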
Parallelizing the Tree Search of CSPs with Variable-Depth Distribution
Abstract
Introduction
Since constraint satisfaction problems (CSPs) are NP-complete, they often quickly become intractable as their size grows. Parallelizing the search for a solution is a way to divide the resolution time by a significant factor, and thus to obtain a solution within a much more reasonable delay than before. We are interested here in parallelizing the depth-first search performed by the Backtrack algorithm when only one solution is desired. When all the solutions of a problem must be found, which implies developing the entire search tree, quite satisfactory performance is obtained: the speedup is close to linear, i.e., the resolution time is practically divided by the number of processes engaged, both in theory [KGR94, Prc96] and in practice [FM87, FK88, RKR87]. By contrast, when one does not wish to ...
A Distributed Algorithm Solving CSPs with a Low Communication Cost
In Proceedings of the 8th International Conference on Tools with Artificial Intelligence
, 1996
Abstract
We present a new distributed algorithm which finds all solutions of Constraint Satisfaction Problems. Based on the Backtrack algorithm, it spreads subtrees of the search tree over processes running in parallel. The work is optimally shared among the processes while the communication cost remains low. We show that the speedup of the resolution is asymptotically linear as the number of variables increases. Furthermore, we study the addition of Look-ahead pruning techniques and Nogood Recording. Experimental results confirm the efficiency of the algorithm, even if the search tree is very unbalanced.
Keywords: Constraint Satisfaction Problem, Distributed and Parallel Algorithm, Nogood Recording.
1 Introduction
The NP-completeness of the Constraint Satisfaction Problem (CSP) results in the intractability of a wide variety of important problems that can be expressed in the framework of Constraint Satisfaction. Backtracking-based algorithms are often used to find solutions and many pruning te...
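The unit of work being distributed is an ordinary backtrack search over a subtree. A minimal sequential sketch of that building block, finding all solutions of a toy CSP, is shown below; in the distributed algorithm each process would run it on a different assignment of the leading variables. Look-ahead and nogood recording are omitted, and the `consistent` predicate and all names are illustrative assumptions.

```python
# Minimal backtrack search enumerating all solutions of a CSP given as
# a list of variable domains and a consistency check on partial tuples.
# In the distributed setting, each process receives a fixed prefix
# (a subtree root) and runs this same search below it.

def backtrack_all(domains, consistent, partial=()):
    if len(partial) == len(domains):
        return [partial]                # complete consistent assignment
    sols = []
    for v in domains[len(partial)]:     # extend the next variable
        if consistent(partial + (v,)):  # prune inconsistent branches
            sols += backtrack_all(domains, consistent, partial + (v,))
    return sols
```

Because every subtree is searched independently, enumeration of all solutions needs no communication between processes beyond the initial split and final collection, which is consistent with the low communication cost the abstract claims.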
(to be published by Elsevier in 1994) Initialization of Parallel Branch-and-Bound Algorithms
Abstract
Four different initialization methods for parallel Branch-and-Bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows; it indicates that the efficiency of three of the methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency for the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.