Results 1–10 of 59
Parallel Programming using Functional Languages
, 1991
Abstract
Cited by 50 (3 self)
I am greatly indebted to Simon Peyton Jones, my supervisor, for his encouragement and technical assistance. His overwhelming enthusiasm was of great support to me. I particularly want to thank Simon and Geoff Burn for commenting on earlier drafts of this thesis. Through his excellent lecturing, Colin Runciman initiated my interest in functional programming. I am grateful to Phil Trinder for his simulator, on which mine is based, and to Will Partain for his help with LaTeX and graphs. I would like to thank the Science and Engineering Research Council of Great Britain for their financial support. Finally, I would like to thank Michelle, whose culinary skills supported me whilst I was writing up.

The Imagination the only nation worth defending a nation without alienation a nation whose flag is invisible and whose borders are forever beyond the horizon a nation whose motto is why have one or the other when you can have one the other and both
Programming a Hypercube Multicomputer
, 1988
Abstract
Cited by 34 (4 self)
We describe those features of distributed memory MIMD hypercube multicomputers that are necessary to obtain efficient programs. Several examples are developed. These illustrate the effectiveness of different programming strategies.
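The hypercube topology these machines are built on has a compact binary description: in a d-dimensional hypercube, node i is linked to exactly the d nodes whose labels differ from i in one bit. A minimal sketch (illustrative, not taken from the paper) of computing a node's neighbours:

```python
def hypercube_neighbors(node, dim):
    """Neighbours of `node` in a dim-dimensional hypercube: the labels
    that differ from `node` in exactly one bit."""
    return [node ^ (1 << k) for k in range(dim)]

# In the 3-dimensional cube (8 nodes), node 0 links to nodes 1, 2 and 4.
print(hypercube_neighbors(0, 3))   # [1, 2, 4]
```

This bit-flip structure is what makes common communication patterns (broadcasts, reductions) take only d = log2(P) steps on P processors.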
Strategies for the parallel implementation of metaheuristics
 Essays and Surveys in Metaheuristics
, 2002
Abstract
Cited by 23 (7 self)
Parallel implementations of metaheuristics appear quite naturally as an effective alternative to speed up the search for approximate solutions of combinatorial optimization problems. They not only allow solving larger problems or finding improved solutions with respect to their sequential counterparts, but they also lead to more robust algorithms. We review some trends in parallel computing and report recent results about linear speedups that can be obtained with parallel implementations using multiple independent processors. Parallel implementations of tabu search, GRASP, genetic algorithms, simulated annealing, and ant colonies are reviewed and discussed to illustrate the main strategies used in the parallelization of different metaheuristics and their hybrids.
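The simplest of the strategies the survey covers, multiple independent processors each running its own seeded search with the best result kept, can be sketched in a few lines. The toy objective and hill-climbing rule below are illustrative assumptions, not taken from the survey; in a real implementation the `map` would become a process-pool map so the seeded runs execute in parallel:

```python
import random

def local_search(seed, iters=200):
    """One independent hill-climbing run on the toy objective f(x) = -(x - 3)^2."""
    rng = random.Random(seed)          # per-run seed: runs are fully independent
    x = rng.uniform(-10.0, 10.0)
    best = -(x - 3.0) ** 2
    for _ in range(iters):
        cand = x + rng.gauss(0.0, 0.5)
        val = -(cand - 3.0) ** 2
        if val > best:                 # keep improving moves only
            x, best = cand, val
    return best

# Independent-processor strategy: swap `map` for multiprocessing.Pool.map
# to run the seeded searches in parallel; the results are unchanged.
results = list(map(local_search, range(8)))
best_overall = max(results)
```

Because the runs share nothing, the wall-clock time divides by the number of processors while the quality of `best_overall` is identical to the sequential multistart, which is the source of the linear speedups the survey reports.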
On the Best Search Strategy in Parallel Branch-and-Bound: Best-First Search vs. Lazy Depth-First Search.
 Annals of Operations Research
, 1996
Abstract
Cited by 21 (4 self)
... or because pruning and evaluation tests are more effective in DFS due to the presence of better incumbents. One of the key issues of search-based algorithms in general, and B&B algorithms in particular, is the search strategy employed: in which order should the unexplored parts of the solution space be searched? Different search strategies have different properties regarding time efficiency and memory consumption, both in a sequential and in a parallel setting. In parallel B&B one often regards the Best-First Search strategy (BeFS) and the Depth-First Search strategy (DFS) as two of the prime candidates: BeFS due to expectations of efficiency and theoretical properties regarding anomalies, and DFS for reasons of space efficiency. However, BeFS requires that the bou... (Supported by the EU HCM project SCOOP and the Danish NSF project EPOS.)
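The two strategies differ only in the data structure holding the unexplored subproblems: a priority queue ordered by bound for BeFS, a stack for DFS. A minimal sketch on a toy 0/1 knapsack instance makes the shared branching/pruning machinery explicit (the instance and the fractional bound are illustrative assumptions, not from the paper):

```python
import heapq

# Toy 0/1 knapsack instance: maximise value subject to the weight cap.
VALUES, WEIGHTS, CAP = [60, 100, 120], [10, 20, 30], 50

def bound(i, val, wt):
    """Optimistic bound: fill the remaining capacity fractionally (LP relaxation)."""
    b = val
    for j in range(i, len(VALUES)):
        if wt + WEIGHTS[j] <= CAP:
            wt += WEIGHTS[j]
            b += VALUES[j]
        else:
            b += VALUES[j] * (CAP - wt) / WEIGHTS[j]
            break
    return b

def branch_and_bound(best_first=True):
    """Same branching and pruning; only the frontier discipline differs."""
    incumbent = 0
    frontier = [(-bound(0, 0, 0), 0, 0, 0)] if best_first else [(0, 0, 0)]
    while frontier:
        if best_first:
            neg_b, i, val, wt = heapq.heappop(frontier)   # best bound first
            if -neg_b <= incumbent:
                continue                                  # cannot beat incumbent
        else:
            i, val, wt = frontier.pop()                   # deepest node first
            if bound(i, val, wt) <= incumbent:
                continue
        if i == len(VALUES):
            incumbent = max(incumbent, val)
            continue
        for take in (False, True):                        # branch on item i
            v = val + (VALUES[i] if take else 0)
            w = wt + (WEIGHTS[i] if take else 0)
            if w <= CAP:
                if best_first:
                    heapq.heappush(frontier, (-bound(i + 1, v, w), i + 1, v, w))
                else:
                    frontier.append((i + 1, v, w))
    return incumbent
```

Both disciplines return the optimum (220 here); the trade-off the paper analyses is that BeFS tends to expand fewer nodes while DFS keeps the frontier, and hence the memory footprint, small and finds incumbents early.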
The Characterization of Data-Accumulating Algorithms
 Proceedings of the International Parallel Processing Symposium
, 1998
Abstract
Cited by 20 (18 self)
A data-accumulating algorithm (d-algorithm for short) works on an input considered as a virtually endless stream. The computation terminates when all the currently arrived data have been processed before another datum arrives. In this paper, the class of d-algorithms is characterized. It is shown that this class is identical to the class of online algorithms. The parallel implementation of d-algorithms is then investigated. It is found that, in general, the speedup achieved through parallelism can be made arbitrarily large for almost any such algorithm. On the other hand, we prove that for d-algorithms whose static counterparts manifest only unitary speedup, no improvement is possible through parallel implementation.
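The termination rule (the computation stops at the first instant all arrived data have been processed before the next datum arrives) can be made concrete with a small virtual-time simulation. This is an illustrative sketch under assumed arrival times and per-item costs, not the authors' formal model:

```python
from collections import deque

def run_d_algorithm(arrival_times, work_per_item):
    """Virtual-time simulation of a data-accumulating computation.

    Data arrive at the given sorted times and each datum costs
    `work_per_item` time units to process.  The run terminates at the
    first instant the buffer is empty, i.e. all arrived data have been
    processed before the next datum arrives."""
    pending = deque(arrival_times)
    clock, processed, buffered = 0.0, 0, 0
    while True:
        while pending and pending[0] <= clock:   # absorb arrivals up to now
            pending.popleft()
            buffered += 1
        if buffered == 0:
            return clock, processed              # caught up: terminate
        buffered -= 1                            # process one datum
        clock += work_per_item
        processed += 1

# Fast processing catches up after the first datum; slower processing
# keeps the run alive until three data are done.
print(run_d_algorithm([0, 1, 2, 10], 0.6))   # (0.6, 1)
print(run_d_algorithm([0, 1, 2, 10], 1.5))   # (4.5, 3)
```

The sketch shows why the analysis couples running time to arrival rate: how much input a d-algorithm ends up processing depends on how fast it processes it, which is exactly the feedback that parallel implementations exploit.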
Regular Versus Irregular Problems and Algorithms.
 In Proc. of IRREGULAR'95
, 1995
Abstract
Cited by 15 (5 self)
Viewing a parallel execution as a set of tasks that execute on a set of processors, a main problem is to find a schedule of the tasks that provides an efficient execution. This usually leads to dividing algorithms into two classes, static and dynamic, depending on whether or not the schedule depends on the input data. To improve this rough classification we study, on some key applications of the Stratagème project [21, 22], the different ways schedules can be obtained and the associated overheads. This leads us to propose a classification based on regularity criteria, i.e. measures of how regular (or irregular) an algorithm is. For a given algorithm, this expresses the quality of the schedules that can be found (irregular versus regular) rather than the way the schedules are obtained (dynamic versus static). These studies reveal some paradigms of parallel programming for irregular algorithms. Thus, in a second part we study a parallel programming model that takes i...
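The static/dynamic distinction the authors start from can be illustrated with greedy list scheduling: under irregular task costs, a schedule fixed in advance loads the processors unevenly, while a dynamic one that hands the next task to whichever processor goes idle stays balanced. A sketch under assumed costs (not taken from the paper):

```python
import heapq

def static_makespan(costs, p):
    """Static schedule: contiguous blocks of tasks fixed before execution."""
    chunk = -(-len(costs) // p)                    # ceil(n / p)
    return max(sum(costs[i * chunk:(i + 1) * chunk]) for i in range(p))

def dynamic_makespan(costs, p):
    """Dynamic schedule: the next task goes to whichever processor is idle
    first (greedy list scheduling, simulated with a min-heap of loads)."""
    loads = [0] * p
    heapq.heapify(loads)
    for c in costs:
        heapq.heappush(loads, heapq.heappop(loads) + c)
    return max(loads)

# Irregular costs: one heavy task ruins the static block partition.
costs = [8, 1, 1, 1, 1, 1, 1, 1]
print(static_makespan(costs, 2), dynamic_makespan(costs, 2))   # 11 8
```

On a regular instance (all costs equal) the two makespans coincide, which is the intuition behind grading algorithms by how irregular they are rather than by how the schedule was obtained.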
Parallelization of the scatter search for the p-median problem
 Parallel Computing
, 2003
Distributed Combinatorial Optimization
 Proc. of SOFSEM'93, Czech Republic
, 1993
Abstract
Cited by 10 (6 self)
This paper reports on research projects of the University of Paderborn in the field of distributed combinatorial optimization. We give an introduction to combinatorial optimization and a brief definition of some important applications. As a first exact solution method we describe branch & bound and present the results of our work on its distributed implementation. Results of our distributed implementation of iterative deepening conclude the first part, on exact methods. In the second part we give an introduction to simulated annealing as a heuristic method and present results of its parallel implementation. This part is concluded with a brief description of genetic algorithms and some other heuristic methods, together with some results of their distributed implementation.
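The simulated annealing kernel that such parallel implementations distribute fits in a dozen lines. The 1-D objective below is an illustrative assumption; a distributed version would, for instance, run many such chains with different seeds and exchange incumbents:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Plain simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta / T), cool T geometrically."""
    rng = random.Random(seed)
    x, best, t = x0, x0, t0
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# Toy objective: a bumpy 1-D landscape whose global minimum lies near x = 2.
f = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
best = simulated_annealing(f, lambda x, r: x + r.gauss(0.0, 0.3), x0=-5.0)
```

The uphill-acceptance rule is what lets the chain escape the local minima created by the sine term, and it is also why naive parallelization is delicate: the accept/reject decisions form a sequential Markov chain.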
Global Optimization of Nonconvex Nonlinear Programs Using Parallel Branch and Bound
, 1995
Abstract
Cited by 10 (0 self)
A branch and bound algorithm for computing globally optimal solutions to nonconvex nonlinear programs in continuous variables is presented. The algorithm is directly suitable for a wide class of problems arising in chemical engineering design. It can solve problems defined using algebraic functions and twice differentiable transcendental functions, in which finite upper and lower bounds can be placed on each variable. The algorithm uses rectangular partitions of the variable domain and a new bounding program based on convex/concave envelopes and positive definite combinations of quadratic terms. The algorithm is deterministic and obtains convergence with final regions of finite size. The partitioning strategy uses a sensitivity analysis of the bounding program to predict the best variable to split and the split location. Two versions of the algorithm are considered, the first using a local NLP algorithm (MINOS) and the second using a sequence of lower bounding programs in the search fo...
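The rectangular-partition idea can be sketched in one dimension, with a deliberately naive termwise interval bound standing in for the paper's convex/concave envelopes. The polynomial objective and the bound are illustrative assumptions, not the authors' bounding program:

```python
import heapq

def f(x):
    """Nonconvex polynomial with two local minima on [-2, 2]."""
    return x**4 - 3.0 * x**2 + x

def lower_bound(l, u):
    """Termwise interval lower bound for f on [l, u]: a crude stand-in
    for the convex/concave envelopes used in the paper."""
    x4 = 0.0 if l <= 0.0 <= u else min(l**4, u**4)
    return x4 - 3.0 * max(l * l, u * u) + l

def global_min(l, u, tol=1e-6):
    """Branch and bound over interval (1-D rectangular) partitions."""
    incumbent = f((l + u) / 2.0)
    boxes = [(lower_bound(l, u), l, u)]
    while boxes:
        lb, a, b = heapq.heappop(boxes)
        if lb >= incumbent - tol:
            break                           # every remaining box is within tol
        m = (a + b) / 2.0
        incumbent = min(incumbent, f(m))    # midpoint gives an upper bound
        for lo, hi in ((a, m), (m, b)):
            child_lb = lower_bound(lo, hi)
            if child_lb < incumbent - tol:  # keep only boxes that may improve
                heapq.heappush(boxes, (child_lb, lo, hi))
    return incumbent

print(global_min(-2.0, 2.0))   # close to the global minimum, about -3.514
```

As the boxes shrink, the gap between the interval bound and the true minimum on each box closes, so the incumbent provably converges to within the tolerance of the global optimum, which is the convergence mechanism the abstract describes with finite final regions.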
Initialization of Parallel Branch-and-Bound Algorithms
, 1994
Abstract
Cited by 10 (1 self)
Four different initialization methods for parallel Branch-and-bound algorithms are described and compared with respect to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three of the methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency of the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
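One common scheme of the kind compared here, expanding the root sequentially until at least p subproblems exist and then dealing them out, can be sketched as follows. The binary branching rule and the round-robin deal are illustrative assumptions, not one of the paper's four methods verbatim; note how the sequential expansion phase, and hence the initial idle time, grows when the branching factor is small:

```python
def initialize_frontier(root, branch, p):
    """Expand breadth-first from the root until at least p subproblems
    exist, then deal them round-robin to the p processors.
    Assumes nodes keep branching until p subproblems are reached."""
    frontier = [root]
    while len(frontier) < p:
        node = frontier.pop(0)          # expand the oldest node
        frontier.extend(branch(node))
    return [frontier[i::p] for i in range(p)]

# Binary search tree: node (depth, label) spawns two children.
branch2 = lambda n: [(n[0] + 1, 2 * n[1]), (n[0] + 1, 2 * n[1] + 1)]
work = initialize_frontier((0, 1), branch2, 4)
print(work)   # [[(2, 4)], [(2, 5)], [(2, 6)], [(2, 7)]]
```

With branching factor b, roughly log_b(p) sequential levels must be expanded before all p processors have work, which is the dependence on the branching factor that the formal analysis quantifies.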