Results 1 – 8 of 8
A comparison of annealing techniques for academic course scheduling
 Lecture Notes in Computer Science
, 1998
"... Abstract. In this study we have tackled the NPhard problem of academic class scheduling (or timetabling) at the university level. We have investigated a variety of approaches based on simulated annealing, including meanfield annealing, simulated annealing with three different cooling schedules, an ..."
Abstract

Cited by 39 (0 self)
Abstract. In this study we have tackled the NP-hard problem of academic class scheduling (or timetabling) at the university level. We have investigated a variety of approaches based on simulated annealing, including mean-field annealing, simulated annealing with three different cooling schedules, and the use of a rule-based preprocessor to provide a good initial solution for annealing. The best results were obtained using simulated annealing with adaptive cooling and reheating as a function of cost, and a rule-based preprocessor. This approach enabled us to obtain valid schedules for the timetabling problem for a large university, using a complex cost function that includes student preferences. None of the other methods was able to provide a complete valid schedule.
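The annealing loop this abstract builds on can be sketched as follows. This is a minimal, generic simulated-annealing skeleton with a simple geometric cooling schedule — not the paper's adaptive cooling/reheating scheme or its rule-based preprocessor; the toy cost function and all parameter values are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, alpha=0.95, steps=2000, seed=0):
    """Generic simulated annealing with a simple geometric cooling schedule."""
    rng = random.Random(seed)
    cur = best = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        # Downhill moves are always accepted; uphill moves with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= alpha  # geometric cooling: T_{k+1} = alpha * T_k
    return best

# Toy problem: minimize (x - 7)^2 over the integers, starting from 0.
result = simulated_annealing(
    cost=lambda x: (x - 7) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    state=0,
)
```

As the temperature falls, uphill acceptances die out and the search degenerates into hill-climbing; schemes like the paper's reheat precisely to escape that freeze-in.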
Simulated annealing for graph bisection
 in Proceedings of the 34th Annual IEEE Symposium on Foundations of Computer Science
, 1993
"... We resolve in the affirmative a question of Boppana and Bui: whether simulated annealing can, with high probability and in polynomial time, find the optimal bisection of a random graph in Gnpr when p r = O(n*’) for A 5 2. (The random graph model Gnpr specifies a “planted ” bisection of density r, ..."
Abstract

Cited by 34 (1 self)
We resolve in the affirmative a question of Boppana and Bui: whether simulated annealing can, with high probability and in polynomial time, find the optimal bisection of a random graph in G(n, p, r) when p − r = O(n^(Δ−2)) for Δ ≤ 2. (The random graph model G(n, p, r) specifies a "planted" bisection of density r, separating two (n/2)-vertex subsets of slightly higher density p.) We show that simulated "annealing" at an appropriate fixed temperature (i.e., the Metropolis algorithm) finds the unique smallest bisection in O(n^(2+ε)) steps with very high probability, provided Δ > 11/6. (By using a slightly modified neighborhood structure, the number of steps can be reduced to O(n^(1+ε)).) We leave open the question of whether annealing is effective for Δ in the range 3/2 < Δ ≤ 11/6, whose lower limit represents the threshold at which the planted bisection becomes lost amongst other random small bisections. It also remains open whether hill-climbing (i.e., annealing at temperature 0) solves the same problem.
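The fixed-temperature Metropolis process the abstract analyzes can be sketched on a small planted-bisection instance. This is only an illustrative toy — graph size, densities, temperature, and step counts are assumptions, and nothing here reproduces the paper's thresholds:

```python
import math
import random

def cut_size(adj, side):
    """Number of edges crossing the current bisection."""
    n = len(adj)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if adj[i][j] and side[i] != side[j])

def metropolis_bisection(adj, temp=0.7, steps=5000, seed=1):
    """Annealing at a fixed temperature (the Metropolis algorithm) for balanced
    graph bisection: swap one vertex from each side so the partition stays
    balanced; accept uphill moves with probability exp(-delta / temp)."""
    rng = random.Random(seed)
    n = len(adj)
    side = [0] * (n // 2) + [1] * (n // 2)
    rng.shuffle(side)
    cur = cut_size(adj, side)
    for _ in range(steps):
        i = rng.choice([v for v in range(n) if side[v] == 0])
        j = rng.choice([v for v in range(n) if side[v] == 1])
        side[i], side[j] = side[j], side[i]      # swap keeps the bisection balanced
        new = cut_size(adj, side)
        if new - cur <= 0 or rng.random() < math.exp(-(new - cur) / temp):
            cur = new
        else:
            side[i], side[j] = side[j], side[i]  # reject: undo the swap
    return side, cur

# Planted instance: in-half density p exceeds cross density r, hiding a small cut.
rng = random.Random(0)
n, p, r = 20, 0.6, 0.1
half = [0] * (n // 2) + [1] * (n // 2)
adj = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < (p if half[i] == half[j] else r):
            adj[i][j] = adj[j][i] = True

side, cut = metropolis_bisection(adj)
```

The swap move preserves balance by construction, which is why the paper's fixed-temperature chain can be analyzed over balanced bisections only.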
A survey of computational approaches to three-dimensional layout problems
 COMPUTER AIDED DESIGN
, 2002
"... ..."
Simulated Annealing with Inaccurate Cost Functions
 in Proceedings of the IMACS International Congress of Mathematics and Computer Science
, 1994
"... . Simulated annealing is an algorithm which generates nearoptimal outcomes to combinatorial optimization problems. It is commonly thought to be slow. Costfunction approximation and parallel processing increase simulated annealing speed, but they can cause inaccuracies that degrade the outcome. Pri ..."
Abstract

Cited by 4 (1 self)
Simulated annealing is an algorithm which generates near-optimal outcomes to combinatorial optimization problems. It is commonly thought to be slow. Cost-function approximation and parallel processing increase simulated annealing speed, but they can cause inaccuracies that degrade the outcome. Prior theoretical work has not adequately related cost-function inaccuracy to the runtime or quality of the outcome. We prove these results about annealing with inaccurate cost functions: 1) Expected cost at equilibrium is exponentially affected by γ/T, where γ limits cost-function range-errors and T gives the temperature. 2) Expected cost at equilibrium is exponentially affected by (σ̂² − σ²)/2T², when the errors have a Gaussian distribution. 3) Constraining γ to a constant factor of T guarantees convergence under a 1/log t temperature schedule. 4) A similar constraint guarantees convergence for a fractal space with a geometric temperature schedule. 5) Inaccuracies worse...
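A toy version of this setting — Metropolis acceptance computed from noisy cost evaluations, with the noise scale tied to the current temperature in the spirit of result 3 — can be sketched as follows. The quadratic objective, the Gaussian noise model, and the geometric schedule are illustrative assumptions; the paper's convergence guarantees concern 1/log t schedules, not this one:

```python
import math
import random

def noisy_annealing(cost, neighbor, state, t0=5.0, alpha=0.97, steps=4000,
                    noise_factor=0.5, seed=2):
    """Simulated annealing where every cost evaluation carries Gaussian noise
    whose standard deviation is a constant factor of the temperature, so the
    error scale shrinks along with T."""
    rng = random.Random(seed)
    cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        # Both evaluations are perturbed: errors scale with the temperature.
        noisy_cur = cost(cur) + rng.gauss(0.0, noise_factor * t)
        noisy_cand = cost(cand) + rng.gauss(0.0, noise_factor * t)
        delta = noisy_cand - noisy_cur
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand
        t *= alpha
    return cur

# Toy problem: minimize (x - 12)^2 over the integers despite the noisy costs.
result = noisy_annealing(
    cost=lambda x: (x - 12) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    state=0,
)
```

Because the noise is constrained to a constant factor of T, late-stage decisions become effectively exact and the chain still settles on the optimum.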
Enhancing the performance of memetic algorithms by using a matching-based recombination algorithm: Results on the number partitioning problem
 METAHEURISTICS: COMPUTER DECISION-MAKING
, 2003
"... The Number Partitioning Problem (MNP) remains as one of the simplesttodescribe yet hardesttosolve combinatorial optimization problems. In this work we use the MNP as a surrogate for several related realworld problems, in order to test new heuristics ideas. To be precise, we study the use of we ..."
Abstract

Cited by 4 (1 self)
The Number Partitioning Problem (MNP) remains one of the simplest-to-describe yet hardest-to-solve combinatorial optimization problems. In this work we use the MNP as a surrogate for several related real-world problems, in order to test new heuristic ideas. To be precise, we study the use of weight-matching techniques in order to devise smart memetic operators. Several options are considered and evaluated for that purpose. The positive computational results indicate that, although the MNP may not be the best scenario for exploiting these ideas, the proposed operators can be really promising tools for dealing with more complex problems of the same family.
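For readers unfamiliar with the problem itself, a classic baseline for number partitioning is the Karmarkar–Karp differencing heuristic. This is standard background, not the paper's matching-based memetic operators:

```python
import heapq

def karmarkar_karp(nums):
    """Karmarkar-Karp differencing heuristic for number partitioning:
    repeatedly commit the two largest numbers to opposite subsets by
    replacing them with their difference.  Returns the final subset-sum
    difference achieved."""
    heap = [-x for x in nums]  # max-heap simulated by negating values
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest remaining
        b = -heapq.heappop(heap)  # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

diff = karmarkar_karp([8, 7, 6, 5, 4])
```

On this instance the heuristic returns a difference of 2, while the optimum is 0 ({8, 7} vs. {6, 5, 4}) — a small illustration of why stronger search methods such as the paper's memetic operators are of interest.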
Optimal Parallelization of Simulated Annealing by State Mixing
, 2001
"... This thesis describes a new, efficient, and general purpose parallel simulated annealing algorithm. The algorithm is based on periodic mixing steps, in which favorable states reproduce and unfavorable ones are destroyed. It runs on a distributed memory Multiple Instructions Multiple Data archite ..."
Abstract

Cited by 1 (0 self)
This thesis describes a new, efficient, and general-purpose parallel simulated annealing algorithm. The algorithm is based on periodic mixing steps, in which favorable states reproduce and unfavorable ones are destroyed. It runs on a distributed-memory Multiple Instruction Multiple Data (MIMD) parallel computer. Parallel efficiency is controlled by the interval between mixing steps. In this thesis, it is shown that for certain values of this interval found by exhaustive search, the algorithm can give up to 100% parallel efficiency on up to 50 processors and 80% parallel efficiency on 100 processors. Moreover, for a given number of processors, there is a range of mixing intervals which gives high parallel efficiency. Two statistical estimators, namely the cross-correlation and the variance among processors, are constructed for finding efficient mixing intervals; these give a parallel efficiency of 75% without exhaustive search. This is done by tracking the two statistical estimators right after communication, so as to obtain lower and upper bounds for the optimal mixing interval.
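The mixing step described above — favorable states reproduce, unfavorable ones are destroyed — can be imitated sequentially. This sketch simulates the processors in a loop; the objective, the parameters, and the "copy the better half over the worse half" rule are illustrative assumptions, not the thesis's algorithm or its statistical estimators:

```python
import math
import random

def mix(states, cost):
    """Mixing step: the better half of the population reproduces and
    replaces the worse half."""
    ranked = sorted(states, key=cost)
    half = ranked[: len(ranked) // 2]
    return half + half

def parallel_annealing(cost, neighbor, init, n_proc=8, mix_interval=50,
                       rounds=20, t0=5.0, alpha=0.8, seed=3):
    """Sequential simulation of parallel simulated annealing with periodic
    state mixing: each 'processor' anneals independently for mix_interval
    steps, then the whole population is mixed and the temperature lowered."""
    rng = random.Random(seed)
    states = [init] * n_proc
    t = t0
    for _ in range(rounds):
        for k in range(n_proc):
            s = states[k]
            for _ in range(mix_interval):
                cand = neighbor(s, rng)
                d = cost(cand) - cost(s)
                if d <= 0 or rng.random() < math.exp(-d / t):
                    s = cand
            states[k] = s
        states = mix(states, cost)  # reproduce good states, destroy bad ones
        t *= alpha
    return min(states, key=cost)

# Toy problem: minimize (x + 4)^2 over the integers.
result = parallel_annealing(
    cost=lambda x: (x + 4) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    init=0,
)
```

The mixing interval trades communication cost against how far the replicas are allowed to drift apart — the quantity the thesis's estimators are designed to tune.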
August 24, 1993
 Phys. Lett. A
, 1994
"... The representation of floating point numbers and the method of generating random floating point numbers can have a significant effect upon the performance of the simulated annealing algorithm. Finite precision may limit the minimum obtainable cost, may result in an optimal minimum temperature, and m ..."
Abstract
The representation of floating point numbers and the method of generating random floating point numbers can have a significant effect upon the performance of the simulated annealing algorithm. Finite precision may limit the minimum obtainable cost, may result in an optimal minimum temperature, and may imply an optimal chain length.

I. Introduction. Simulated annealing is a stochastic approximation technique which has been used for a wide range of discrete and continuous problems over the past decade [e.g., 1-4]. Theoretical work [e.g., 3, 4] indicates that annealing should approach the global optimum monotonically as the run time increases. Our work, however, suggests that practical implementations may be limited in their performance by the fact that such implementations must work with numbers of limited precision. Experiments on a very small problem show that limited precision may limit the minimum obtainable cost and may result in the existence of an optimal minimum ...
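The precision effect this abstract describes can be seen directly in IEEE double precision: below a cost-dependent temperature the Metropolis acceptance probability underflows to exactly zero, so uphill moves of a given size become impossible no matter how long the chain runs. The specific numbers below are illustrative:

```python
import math

def accept_prob(delta, temp):
    """Metropolis acceptance probability for an uphill move of size delta,
    as actually computed in double precision."""
    return math.exp(-delta / temp)

# At T = 0.01 the probability exp(-100) is tiny but still representable ...
p_hot = accept_prob(1.0, 0.01)
# ... while at T = 0.001, exp(-1000) underflows to exactly 0.0: an effective
# minimum temperature imposed by finite precision.
p_cold = accept_prob(1.0, 0.001)
```

Since the smallest positive double is about 4.9e-324, exp(-delta/T) vanishes once delta/T exceeds roughly 745 — below that temperature, further cooling changes nothing, consistent with the "optimal minimum temperature" the abstract reports.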