Results 1–10 of 42
Parallel Programming using Functional Languages
, 1991
"... I am greatly indebted to Simon Peyton Jones, my supervisor, for his encouragement and technical assistance. His overwhelming enthusiasm was of great support to me. I particularly want to thank Simon and Geoff Burn for commenting on earlier drafts of this thesis. Through his excellent lecturing Cohn ..."
Abstract

Cited by 48 (3 self)
I am greatly indebted to Simon Peyton Jones, my supervisor, for his encouragement and technical assistance. His overwhelming enthusiasm was of great support to me. I particularly want to thank Simon and Geoff Burn for commenting on earlier drafts of this thesis. Through his excellent lecturing, Colin Runciman initiated my interest in functional programming. I am grateful to Phil Trinder for his simulator, on which mine is based, and to Will Partain for his help with LaTeX and graphs. I would like to thank the Science and Engineering Research Council of Great Britain for their financial support. Finally, I would like to thank Michelle, whose culinary skills supported me whilst I was writing up. The Imagination the only nation worth defending a nation without alienation a nation whose flag is invisible and whose borders are forever beyond the horizon a nation whose motto is why have one or the other when you can have one the other and both
Programming a Hypercube Multicomputer
, 1988
"... We describe those features of distributed memory MIMD hypercube multicomputers that are necessary to obtain efficient programs. Several examples are developed. These illustrate the effectiveness of different programming strategies. ..."
Abstract

Cited by 33 (4 self)
We describe those features of distributed memory MIMD hypercube multicomputers that are necessary to obtain efficient programs. Several examples are developed. These illustrate the effectiveness of different programming strategies.
The Characterization of Data-Accumulating Algorithms
 Proceedings of the International Parallel Processing Symposium
, 1998
"... A dataaccumulating algorithm (dalgorithm for short) works on an input considered as a virtually endless stream. The computation terminates when all the currently arrived data have been processed before another datum arrives. In this paper, the class of dalgorithms is characterized. It is shown th ..."
Abstract

Cited by 21 (19 self)
A data-accumulating algorithm (d-algorithm for short) works on an input considered as a virtually endless stream. The computation terminates when all the currently arrived data have been processed before another datum arrives. In this paper, the class of d-algorithms is characterized. It is shown that this class is identical to the class of online algorithms. The parallel implementation of d-algorithms is then investigated. It is found that, in general, the speedup achieved through parallelism can be made arbitrarily large for almost any such algorithm. On the other hand, we prove that for d-algorithms whose static counterparts manifest only unitary speedup, no improvement is possible through parallel implementation.
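The termination rule described in the abstract (halt at the first moment every datum that has arrived so far has been processed before the next one arrives) can be sketched in a toy cost model. The uniform per-datum processing cost and the function name below are assumptions for illustration, not the paper's model:

```python
def termination_time(arrivals, cost):
    """Time at which a d-algorithm halts, under a hypothetical model
    where each datum takes `cost` time units to process sequentially."""
    arrivals = sorted(arrivals)
    t = arrivals[0]  # computation starts when the first datum arrives
    for a in arrivals:
        if a > t:
            return t  # backlog cleared before this datum arrived: halt
        t += cost     # otherwise, process this datum
    return t          # stream exhausted with the backlog cleared
```

For arrivals at times 0, 1, 2, 10 with cost 1.5, the backlog of the first three data is cleared at time 4.5, before the datum at time 10 arrives, so the run halts there; with cost 0.5 it already halts at 0.5, having seen only one datum.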
On the Best Search Strategy in Parallel Branch-and-Bound: Best-First-Search vs. Lazy Depth-First-Search.
 Annals of Operations Research
, 1996
"... or because pruning and evaluation tests are more effective in DFS due to the presence of better incumbents. 1 Introduction. One of the key issues of searchbased algorithms in general and B&Balgorithms in particular is the search strategy employed: In which order should the unexplored parts of t ..."
Abstract

Cited by 19 (4 self)
or because pruning and evaluation tests are more effective in DFS due to the presence of better incumbents. 1 Introduction. One of the key issues of search-based algorithms in general and B&B algorithms in particular is the search strategy employed: in which order should the unexplored parts of the solution space be searched? Different search strategies have different properties regarding time efficiency and memory consumption, both when considered in a sequential and in a parallel setting. [Footnote: supported by the EU HCM project SCOOP and the Danish NSF project EPOS. Running header: M. Perregaard and J. Clausen, Search Strategies in Parallel Branch and Bound.] In parallel B&B one often regards the Best-First-Search strategy (BeFS) and the Depth-First-Search strategy (DFS) as two of the prime candidates: BeFS due to expectations of efficiency and theoretical properties regarding anomalies, and DFS for reasons of space efficiency. However, BeFS requires that the bou ...
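To make the BeFS/DFS contrast concrete, here is a minimal sequential B&B sketch on a toy 0/1 knapsack in which the only difference between the two strategies is the frontier data structure: a bound-ordered priority queue for BeFS versus a stack for DFS. The problem, the fractional bound, and all names are illustrative assumptions, not the paper's setup:

```python
import heapq

def knapsack_bb(values, weights, capacity, strategy="befs"):
    """Toy 0/1 knapsack via branch-and-bound. strategy='befs' orders the
    frontier by an optimistic bound (priority queue); strategy='dfs'
    uses a stack. Both find the same optimum; they differ in which
    subproblems get explored and how large the frontier grows."""
    n = len(values)

    def bound(i, value, room):
        # optimistic bound: fill the remaining room with fractional items
        b, r = value, room
        for j in range(i, n):
            if weights[j] <= r:
                r -= weights[j]
                b += values[j]
            else:
                b += values[j] * r / weights[j]
                break
        return b

    best = 0
    root = (0, 0, capacity)  # (next item index, value so far, remaining room)
    if strategy == "befs":
        frontier = [(-bound(*root), root)]
        pop = lambda: heapq.heappop(frontier)[1]
        push = lambda node: heapq.heappush(frontier, (-bound(*node), node))
    else:
        frontier = [root]
        pop = frontier.pop
        push = frontier.append
    while frontier:
        i, value, room = pop()
        best = max(best, value)  # any partial selection is itself feasible
        if i == n or bound(i, value, room) <= best:
            continue  # prune: this subproblem cannot beat the incumbent
        if weights[i] <= room:
            push((i + 1, value + values[i], room - weights[i]))  # include item i
        push((i + 1, value, room))  # exclude item i
    return best
```

With values (60, 100, 120), weights (10, 20, 30), and capacity 50, both strategies return 220; BeFS tends to pop fewer nodes, while DFS keeps the frontier small, mirroring the trade-off discussed in the abstract.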
Strategies for the parallel implementation of metaheuristics
 Essays and Surveys in Metaheuristics
, 2002
"... Abstract. Parallel implementationsof metaheuristicsappear quite naturally asan effective alternative to speed up the search for approximate solutions of combinatorial optimization problems. They not only allow solving larger problems or finding improved solutions with respect to their sequential cou ..."
Abstract

Cited by 15 (5 self)
Parallel implementations of metaheuristics appear quite naturally as an effective alternative to speed up the search for approximate solutions of combinatorial optimization problems. They not only allow solving larger problems or finding improved solutions with respect to their sequential counterparts, but they also lead to more robust algorithms. We review some trends in parallel computing and report recent results about linear speedups that can be obtained with parallel implementations using multiple independent processors. Parallel implementations of tabu search, GRASP, genetic algorithms, simulated annealing, and ant colonies are reviewed and discussed to illustrate the main strategies used in the parallelization of different metaheuristics and their hybrids. 1. Introduction. Although
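The "multiple independent processors" scheme mentioned above is the simplest parallelization: run the same stochastic search several times with different seeds and keep the best result. A hypothetical sketch follows; the toy objective and all names are assumptions, and a real metaheuristic would replace the random sampling with tabu search, GRASP, or similar:

```python
import random
from multiprocessing import Pool

def independent_run(seed, iters=1000):
    """One independent run of a toy stochastic search maximizing the
    hypothetical objective f(x) = -(x - 3)^2 by pure random sampling."""
    rng = random.Random(seed)
    best = (float("-inf"), None)
    for _ in range(iters):
        x = rng.uniform(-10.0, 10.0)
        best = max(best, (-(x - 3.0) ** 2, x))
    return best

def parallel_multistart(seeds):
    # the runs share nothing and never communicate, which is why this
    # scheme achieves near-linear speedup on multiple processors
    with Pool(processes=len(seeds)) as pool:
        return max(pool.map(independent_run, seeds))

if __name__ == "__main__":
    f, x = parallel_multistart([1, 2, 3, 4])
    print(f"best objective {f:.4f} at x = {x:.3f}")
```

Because the runs are independent, the result is the same as running them sequentially and taking the maximum; parallelism only divides the wall-clock time.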
Regular Versus Irregular Problems and Algorithms.
 In Proc. of IRREGULAR'95
, 1995
"... . Viewing a parallel execution as a set of tasks that execute on a set of processors, a main problem is to find a schedule of the tasks that provides an efficient execution. This usually leads to divide algorithms into two classes: static and dynamic algorithms, depending on whether the schedule dep ..."
Abstract

Cited by 14 (5 self)
Viewing a parallel execution as a set of tasks that execute on a set of processors, a main problem is to find a schedule of the tasks that provides an efficient execution. This usually leads to dividing algorithms into two classes, static and dynamic algorithms, depending on whether or not the schedule depends on the input data. To improve this rough classification we study, on some key applications of the Stratagème project [21, 22], the different ways schedules can be obtained and the associated overheads. This leads us to propose a classification based on regularity criteria, i.e. measures of how regular (or irregular) an algorithm is. For a given algorithm, this expresses more the quality of the schedules that can be found (irregular versus regular) as opposed to the way the schedules are obtained (dynamic versus static). These studies reveal some paradigms of parallel programming for irregular algorithms. Thus, in a second part we study a parallel programming model that takes i...
Initialization of Parallel Branch-and-Bound Algorithms
, 1994
"... Four different initialization methods for parallel Branchandbound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three methods depends on the branching factor of the search ..."
Abstract

Cited by 14 (1 self)
Four different initialization methods for parallel branch-and-bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency of the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
Distributed Combinatorial Optimization
 PROC. OF SOFSEM'93, CZECH REPUBLIC
, 1993
"... This paper reports about research projects of the University of Paderborn in the field of distributed combinatorial optimization. We give an introduction into combinatorial optimization and a brief definition of some important applications. As a first exact solution method we describe branch & boun ..."
Abstract

Cited by 10 (6 self)
This paper reports on research projects of the University of Paderborn in the field of distributed combinatorial optimization. We give an introduction to combinatorial optimization and a brief definition of some important applications. As a first exact solution method we describe branch & bound and present the results of our work on its distributed implementation. Results of our distributed implementation of iterative deepening conclude the first part about exact methods. In the second part we give an introduction to simulated annealing as a heuristic method and present results of its parallel implementation. This part is concluded with a brief description of genetic algorithms and some other heuristic methods, together with some results of their distributed implementation.
Parallelization of the scatter search for the p-median problem
 Parallel Computing
, 2003
"... q ..."
Parallel Best-First Branch-and-Bound in Discrete Optimization: a Framework
 IN SOLVING COMBINATORIAL OPTIMIZATION PROBLEMS IN PARALLEL
, 1995
"... In discrete optimization problems, we search for an optimal solution among all vectors in a discrete solution space that satisfy a set of constraints, and the search efficiency is of considerable importance since exhaustive search is often impracticable. The method called branchandbound (noted B&B ..."
Abstract

Cited by 8 (1 self)
In discrete optimization problems, we search for an optimal solution among all vectors in a discrete solution space that satisfy a set of constraints, and search efficiency is of considerable importance since exhaustive search is often impracticable. The method called branch-and-bound (denoted B&B) is a heuristic tree search algorithm used in this context. Its principle lies in successive decompositions of the original problem into smaller disjoint subproblems until an optimal solution is found, while the search avoids visiting subproblems known not to contain an optimal solution. Given that disjoint subproblems can be decomposed simultaneously and independently, parallel processing has been widely considered as an additional source of improvement in search efficiency, using the set of processors to concurrently decompose several subproblems at each iteration. Parallel B&B is traditionally considered an irregular parallel algorithm due to the fact that the structure o...
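The idea of concurrently decomposing several disjoint subproblems at each iteration can be sketched as a synchronous "wave" parallel B&B. The toy problem here (largest subset sum not exceeding a target), the thread pool, and all names are illustrative assumptions, not the framework described in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_bb_subset_sum(nums, target, workers=4):
    """Synchronous parallel B&B sketch: each wave, up to `workers` open
    subproblems are branched concurrently. Objective: the largest subset
    sum of `nums` that does not exceed `target`."""
    n = len(nums)
    suffix = [0] * (n + 1)  # suffix[i] = sum(nums[i:]), the optimistic bound
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]

    def branch(node):
        # decompose one subproblem into two disjoint children
        i, s = node
        children = [(i + 1, s)]  # exclude nums[i]
        if s + nums[i] <= target:
            children.append((i + 1, s + nums[i]))  # include nums[i]
        return children

    best = 0
    frontier = [(0, 0)]  # (next index, partial sum): disjoint subproblems
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            wave, frontier = frontier[:workers], frontier[workers:]
            for _, s in wave:
                best = max(best, s)  # every partial sum is itself feasible
            # keep only subproblems whose optimistic bound beats the incumbent
            wave = [(i, s) for i, s in wave if i < n and s + suffix[i] > best]
            for children in pool.map(branch, wave):
                frontier.extend(children)
    return best
```

Because the subproblems in a wave are disjoint, branching them concurrently is safe; a real framework would use a distributed OPEN set and asynchronous incumbent updates rather than this lockstep wave.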