Results 1 - 8 of 8
Simulated Annealing Algorithms For Continuous Global Optimization
, 2000
Abstract

Cited by 30 (1 self)
INTRODUCTION In this paper we consider Simulated Annealing algorithms (SA in what follows) applied to continuous global optimization problems, i.e. problems of the following form: f* = min_{x ∈ X} f(x), (1.1) where X ⊆ ℝ^n is a continuous domain, often assumed to be compact, which, combined with the continuity or lower semicontinuity of f, guarantees the existence of the minimum value f*. SA algorithms are based on an analogy with a physical phenomenon: while at high temperatures the molecules in a liquid move freely, if the temperature is slowly decreased the thermal mobility of the molecules is lost and they form a pure crystal, which also corresponds to a state of minimum energy. If the temperature is decreased too quickly (so-called quenching), a liquid metal instead ends up in a polycrystalline or amorphous state with
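The general SA scheme this abstract describes can be sketched as a minimal continuous minimizer. This is an illustration only: the box domain, Gaussian proposal, and geometric cooling schedule are assumptions for the sketch, not the paper's algorithm.

```python
import math
import random

def simulated_annealing(f, lower, upper, t0=1.0, cooling=0.995, steps=5000):
    """Minimize f over the box [lower, upper]^n (a compact X) by SA.

    Sketch only: geometric cooling and Gaussian proposals are
    illustrative choices, not taken from the paper.
    """
    n = len(lower)
    x = [random.uniform(lower[i], upper[i]) for i in range(n)]
    fx = f(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(steps):
        # Propose a neighbour, clipped back into the compact domain X
        y = [min(upper[i], max(lower[i],
                 x[i] + random.gauss(0, 0.1 * (upper[i] - lower[i]))))
             for i in range(n)]
        fy = f(y)
        # Metropolis rule: always accept improvements, accept uphill
        # moves with probability exp(-(fy - fx) / t)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling  # slow cooling; fast cooling ("quenching") risks poor local minima
    return best, fbest
```

The slow geometric decay of `t` plays the role of the physical annealing schedule: early high temperatures let the chain escape local minima, late low temperatures make it nearly greedy.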
A New Evolutionary Approach to the Degree Constrained Minimum Spanning Tree Problem
 IEEE Transactions on Evolutionary Computation
, 2000
Abstract

Cited by 11 (0 self)
Finding the degree-constrained minimum spanning tree (d-MST) of a graph is a well-studied NP-hard problem which is important in network design. We introduce a new method which improves on the best technique previously published for solving the d-MST, whether using heuristic or evolutionary approaches. The basis of this encoding is a spanning-tree construction algorithm which we call the Randomised Primal Method (RPM), based on the well-known Prim's algorithm [6], and an extension [4] which we call 'd-Prim's'. We describe a novel encoding for spanning trees, which involves using the RPM to interpret lists of potential edges to include in the growing tree. We also describe a random graph generator which produces particularly challenging d-MST problems. On these and other problems, we find that an evolutionary algorithm (EA) using the RPM encoding outperforms the previous best published technique from the operations research literature, and also outperforms simulated...
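The 'd-Prim's' idea the abstract names can be sketched as plain Prim's algorithm that refuses to saturate any vertex beyond degree d. The function below is a hypothetical illustration of that heuristic, not the paper's exact construction (and, like any greedy d-MST heuristic, it is not guaranteed optimal):

```python
import heapq

def d_prim_mst(n, weights, d):
    """Greedy degree-constrained spanning tree on a complete graph.

    Hedged sketch of the d-Prim heuristic: grow a tree from vertex 0,
    always taking the cheapest edge whose tree-side endpoint still has
    spare degree. weights is an n-by-n symmetric cost matrix.
    """
    in_tree = [False] * n
    degree = [0] * n
    in_tree[0] = True
    edges = []
    heap = [(weights[0][v], 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while len(edges) < n - 1 and heap:
        w, u, v = heapq.heappop(heap)
        if in_tree[v] or degree[u] >= d:
            continue  # v already connected, or u is saturated
        in_tree[v] = True
        degree[u] += 1
        degree[v] += 1
        edges.append((u, v, w))
        # Offer edges from the newly added vertex to all outside vertices
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (weights[v][x], v, x))
    return edges  # may be partial if the constraint blocks completion
```

In an evolutionary setting such as the RPM encoding, a construction procedure like this would interpret a genome's edge preferences rather than raw edge weights.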
Dobrovodsky, D. Distributed Static Mapping and Dynamic Load Balancing Tools under PVM
 In: 1st Austrian-Hungarian Workshop on Distributed and Parallel Systems
, 1996
Abstract

Cited by 2 (0 self)
Abstract: This paper describes the static and dynamic task allocation tools in the PVM environment for distributed-memory parallel systems. For static mapping, an objective function is used to evaluate the optimality of allocating a task graph onto a processor graph. Together with our optimization method, augmented simulated annealing and heuristic move-exchange methods are also implemented in distributed form. For dynamic task allocation, a semi-distributed approach was designed, based on the division of the processor network topology into independent and symmetric spheres. The distributed static mapping (DSM) and dynamic load balancing (DLB) tools are controlled by a user window interface. The DSM and DLB tools are integrated together with a software monitor (PG_PVM) in the GRAPNEL environment. Optimal planning of parallel program execution in a distributed-memory parallel computer (DMPC) determines the response speed. Optimal allocation rests on the assumption that program execution time depends on uniform load of the processors and on minimization of interprocessor communication. In this paper, our attention is concentrated on the diffusion method for static mapping and on the semi-distributed approach for dynamic task allocation. Our distributed static mapping tool also implements augmented simulated annealing and heuristic move-exchange methods [1], [2]. To specify an appropriate optimization goal it is necessary to create a cost function which provides a realistic evaluation of the communication and computation overhead. For the given graphs S (task
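The kind of cost function the abstract calls for (penalizing both load imbalance and interprocessor traffic) can be sketched as follows. The weighting `alpha`/`beta` and the imbalance measure are illustrative assumptions, not the paper's formula:

```python
def mapping_cost(task_graph, proc_dist, assign, alpha=1.0, beta=1.0):
    """Score a mapping of tasks onto processors.

    Hypothetical sketch of a mapping objective: computation overhead is
    measured as the load spread across used processors, communication
    overhead as traffic weighted by processor hop distance.

    task_graph: dict task -> (work, {neighbour: traffic}), symmetric
    proc_dist:  dict (p, q) -> hop distance between processors p and q
    assign:     dict task -> processor
    """
    # Computation term: load imbalance across the processors actually used
    load = {}
    for t, (work, _) in task_graph.items():
        load[assign[t]] = load.get(assign[t], 0.0) + work
    imbalance = max(load.values()) - min(load.values())
    # Communication term: traffic weighted by distance between hosts
    comm = 0.0
    for t, (_, nbrs) in task_graph.items():
        for u, traffic in nbrs.items():
            if assign[t] != assign[u]:
                comm += traffic * proc_dist[(assign[t], assign[u])]
    # The symmetric adjacency counts each edge twice, hence the halving
    return alpha * imbalance + beta * comm / 2.0
```

A function of this shape is what the simulated-annealing and move-exchange optimizers mentioned above would minimize when searching over assignments.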
Static Mapping Methods for Processor Networks
Abstract

Cited by 1 (0 self)
Development of high-performance parallel computer systems suffers today from a lack of software that would allow simple programming with consequently optimal and safe program execution. The main goal of research in this area is to provide a programming environment that allows application programmers to develop parallel programs without worrying about the physical machine they are programming. This paper presents a mapping algorithm for distributed-memory, parallel message-passing systems, using an objective function to evaluate the optimality of mapping a task graph onto a processor graph. Our optimization method is compared with others, some originating in artificial intelligence or operations research. In our experiments, randomly generated task and processor graphs are used. The different approaches were compared with regard to these criteria: mapping success, relative difference of solution, and relative quality.
On the Design of an Adaptive Simulated Annealing Algorithm
Abstract

Cited by 1 (1 self)
Abstract. In this paper, we demonstrate the ease with which an adaptive simulated annealing algorithm can be designed. Specifically, we use the adaptive annealing schedule known as the modified Lam schedule to apply simulated annealing to the weighted tardiness scheduling problem with sequence-dependent setups. The modified Lam annealing schedule adjusts the temperature to track the theoretical optimal rate of accepted moves. Employing the modified Lam schedule allows us to avoid the often tedious tuning of the annealing schedule, as the algorithm tunes itself for each instance during problem solving. Our results show that an adaptive simulated annealer can be competitive when compared to highly tuned, hand-crafted algorithms. Specifically, we compare our results to a state-of-the-art genetic algorithm for weighted tardiness scheduling with sequence-dependent setups. Our study serves as an illustration of the ease with which a parameter-free simulated annealer can be designed and implemented.
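The self-tuning idea can be sketched as follows: track an estimate of the acceptance rate and nudge the temperature toward a phase-dependent target (high during warm-up, 0.44 on the plateau, decaying at the end). The specific constants and update factors below are one commonly published formulation of the modified Lam schedule, given here as an assumption rather than as this paper's exact parameters:

```python
import math
import random

def modified_lam_anneal(f, neighbour, x0, steps):
    """Simulated annealing with a modified-Lam-style adaptive schedule.

    Hedged sketch: the 560/440 constants and the 0.999 temperature
    nudge are one published formulation, not necessarily the paper's.
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = 0.5
    accept_rate = 0.5
    for i in range(steps):
        y = neighbour(x)
        fy = f(y)
        accepted = fy <= fx or random.random() < math.exp(-(fy - fx) / t)
        if accepted:
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        # Exponentially weighted estimate of the observed acceptance rate
        accept_rate = 0.998 * accept_rate + 0.002 * (1.0 if accepted else 0.0)
        # Phase-dependent target rate: warm-up, 0.44 plateau, cool-down
        frac = i / steps
        if frac < 0.15:
            target = 0.44 + 0.56 * 560.0 ** (-frac / 0.15)
        elif frac < 0.65:
            target = 0.44
        else:
            target = 0.44 * 440.0 ** (-(frac - 0.65) / 0.35)
        # Nudge the temperature toward the target acceptance rate
        t = t / 0.999 if accept_rate < target else t * 0.999
    return best, fbest
```

Because the temperature is steered by feedback rather than by a fixed cooling formula, the only parameter the user supplies is the run length, which is what makes the annealer effectively parameter-free.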
Automatic Configuration of Parallel Programs for Processor Networks
Abstract
: This paper describes a mapping algorithm for distributed-memory, parallel message-passing systems, using an objective function to evaluate the optimality of mapping a task graph onto a processor graph. Our optimization method is compared with others, some originating in artificial intelligence or operations research. In our experiments, randomly generated task and processor graphs are used. Keywords: static and dynamic mapping, multicomputer, scheduling, load balancing, smoothing. 1 Introduction Optimal planning of parallel program execution in the distributed environment of a message-passing multicomputer determines the response speed. Static and dynamic mapping offer an effective way of achieving it. The theory of optimal mapping rests on the assumption that program execution time depends on uniform load of the processors and on minimization of interprocessor communication. In this paper, our attention is concentrated on the diffusio...
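The abstract names a "diffusion method" without detail, so the textbook first-order diffusion scheme serves as a sketch of the idea: each processor repeatedly exchanges a fixed fraction of the load difference with each neighbour, which drives the network toward uniform load. The function name and parameters are hypothetical:

```python
def diffuse(load, adjacency, alpha=0.25, rounds=50):
    """First-order diffusion load balancing on a processor network.

    Sketch of the generic diffusion method (not necessarily the
    paper's variant): in each round, every node u moves a fraction
    alpha of (load[v] - load[u]) from/to each neighbour v.
    """
    load = list(load)
    n = len(load)
    for _ in range(rounds):
        new = load[:]
        for u in range(n):
            for v in adjacency[u]:
                new[u] += alpha * (load[v] - load[u])
        load = new
    return load
```

Because the exchange is symmetric, total load is conserved, and for a suitable `alpha` (below the inverse of the maximum degree) the loads converge geometrically to the uniform distribution.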
Distributed Mapping Tool under PVM
Abstract
: This paper describes a mapping algorithm for distributed-memory, parallel message-passing systems, using an objective function to evaluate the optimality of mapping a task graph onto a processor graph. Our optimisation method is compared with others, some originating in artificial intelligence or operations research. In our experiments, randomly generated task and processor graphs are used. Keywords: static and dynamic mapping, multicomputer, scheduling, load balancing, smoothing. 1 Introduction Optimal planning of parallel program execution in the distributed environment of a message-passing multicomputer determines the response speed. The theory of optimal mapping rests on the assumption that program execution time depends on uniform load of the processors and on minimisation of interprocessor communication. In this paper, our attention is concentrated on the diffusion method for static mapping. For static mapping a parallel program can be re...
SD Task Allocation Tool under PVM
Abstract
: This paper describes the static and dynamic task allocation tool under the PVM environment for distributed-memory parallel systems. For static allocation, an objective function is used to evaluate the optimality of allocating a task graph onto a processor graph. Together with our optimisation method, augmented simulated annealing and heuristic move-exchange methods are also implemented. For dynamic task allocation, a semi-distributed approach was designed, based on the division of the processor network topology into independent and symmetric spheres. The S&D task allocation tool is controlled by a user window interface, the allocation toolbox. Keywords: distributed-memory parallel systems, static and dynamic allocation, multicomputer, load balancing. 1 Introduction Optimal planning of parallel program execution in the distributed environment of a message-passing multicomputer determines the response speed. The theory of optimal allocation rests on the assumption that the pr...