Results 1–7 of 7
VLSI cell placement techniques
ACM Computing Surveys, 1991
"... VLSI cell placement problem is known to be NP complete. A wide repertoire of heuristic algorithms exists in the literature for efficiently arranging the logic cells on a VLSI chip. The objective of this paper is to present a comprehensive survey of the various cell placement techniques, with emphasi ..."
Abstract

Cited by 75 (0 self)
The VLSI cell placement problem is known to be NP-complete. A wide repertoire of heuristic algorithms exists in the literature for efficiently arranging the logic cells on a VLSI chip. The objective of this paper is to present a comprehensive survey of the various cell placement techniques, with emphasis on standard cell and macro ...
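The setting the survey covers can be made concrete with a minimal annealing-based placement sketch: cells are assigned to grid slots, random pair swaps are proposed, and moves are accepted by the Metropolis rule against a half-perimeter wirelength (HPWL) cost. All names, nets, and parameters below are illustrative, not taken from the survey.

```python
import math
import random

def hpwl(placement, nets):
    """Half-perimeter wirelength: per net, width + height of its bounding box."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(cells, nets, slots, t0=5.0, alpha=0.95, steps=2000, seed=0):
    """Toy standard-cell placement by simulated annealing with pair swaps."""
    rng = random.Random(seed)
    order = list(slots)
    rng.shuffle(order)
    placement = dict(zip(cells, order))
    cost = hpwl(placement, nets)
    t = t0
    for _ in range(steps):
        a, b = rng.sample(cells, 2)                    # propose a cell swap
        placement[a], placement[b] = placement[b], placement[a]
        new = hpwl(placement, nets)
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new                                 # accept the swap
        else:
            placement[a], placement[b] = placement[b], placement[a]  # revert
        t *= alpha                                     # geometric cooling
    return placement, cost

cells = ["c0", "c1", "c2", "c3"]
nets = [["c0", "c1"], ["c1", "c2", "c3"]]
slots = [(x, y) for x in range(2) for y in range(2)]
placement, cost = anneal_placement(cells, nets, slots)
```

Production placers differ mainly in the move set, the cost model, and the cooling schedule; the acceptance rule above is the common core.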
Adapting temperature for some randomized local search algorithms, in preparation
Proceedings of the First International Conference on Autonomous Agents, 1998
"... Abstract: A heuristic learning algorithm is introduced to adapt the temperature for a randomized local search procedure based on a Boltzmann machine. Experimental results are given to confirm that the method improves both efficiency and more especially reliability. Further experiments are described ..."
Abstract

Cited by 4 (1 self)
A heuristic learning algorithm is introduced to adapt the temperature for a randomized local search procedure based on a Boltzmann machine. Experimental results are given to confirm that the method improves both efficiency and, especially, reliability. Further experiments are described which investigate how the choice of temperature affects the algorithm's performance. Keywords: optimisation.
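The excerpt does not spell out the paper's Boltzmann-machine learning rule; the sketch below shows a simpler, generic way to adapt temperature online, steering it toward a target acceptance rate by multiplicative feedback. All parameter names and constants are illustrative assumptions.

```python
import math
import random

def adaptive_anneal(cost, neighbor, x0, target_accept=0.4,
                    t0=1.0, gain=0.01, steps=5000, seed=0):
    """Local search whose temperature tracks a target acceptance rate:
    cool when accepting more often than the target, heat otherwise."""
    rng = random.Random(seed)
    x, T = x0, t0
    cx = cost(x)
    best, best_c = x, cx
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        accepted = cy <= cx or rng.random() < math.exp((cx - cy) / T)
        if accepted:
            x, cx = y, cy
            if cx < best_c:
                best, best_c = x, cx
        # multiplicative feedback on the acceptance indicator
        T *= math.exp(gain * (target_accept - (1.0 if accepted else 0.0)))
        T = max(T, 1e-9)
    return best, best_c

# Toy run: minimize (x - 3)^2 over the integers with +/-1 moves.
best, best_c = adaptive_anneal(lambda x: (x - 3) ** 2,
                               lambda x, rng: x + rng.choice((-1, 1)),
                               20)
```

The feedback keeps the chain near a chosen exploration level instead of following a fixed schedule, which is one reason adaptive temperatures tend to improve reliability across problem instances.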
Simulated Annealing with Time-dependent Energy Function via Sobolev Inequalities
Preprint 94069, SFB 343, 1994
"... We analyze the Simulated Annealing Algorithm with an energy function U t that depends on time. Assuming some regularity conditions on U t (especially that U t does not change too quickly in time), and choosing a logarithmic cooling schedule for the algorithm, we derive bounds on the RadonNikodym de ..."
Abstract

Cited by 1 (1 self)
We analyze the Simulated Annealing algorithm with an energy function U_t that depends on time. Assuming some regularity conditions on U_t (especially that U_t does not change too quickly in time), and choosing a logarithmic cooling schedule for the algorithm, we derive bounds on the Radon-Nikodym density of the distribution of the annealing algorithm at time t with respect to the invariant measure at time t. Moreover, we estimate the entrance time of the algorithm into typical subsets V of the state space in terms of π_t(V^c). Keywords: Simulated Annealing, Sobolev inequalities, spectral gap, Markov processes.

1 Introduction

Let X be a finite set. The well-known Simulated Annealing (SA) algorithm is an inhomogeneous Markov process Y_t on X with the aim of minimizing a given function U : X → R. The idea behind SA is to think of U as an energy function and to choose the Markov process in such a way that the transition kernel at time t has as its invariant measure π_t, the Gibbs distrib...
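The logarithmic cooling schedule that such convergence results typically require can be made concrete in a few lines. The energy U, state space, and constant c below are toy choices for illustration, not those of the paper.

```python
import math
import random

def anneal(U, states, neighbors, c=2.0, steps=5000, seed=0):
    """Metropolis chain with the logarithmic schedule T_t = c / log(t + 2),
    the slowly decaying schedule class used in SA convergence proofs."""
    rng = random.Random(seed)
    x = rng.choice(states)
    for t in range(steps):
        T = c / math.log(t + 2)             # decays like 1/log t
        y = rng.choice(neighbors(x))
        if U(y) <= U(x) or rng.random() < math.exp((U(x) - U(y)) / T):
            x = y
    return x

# Toy energy on X = {0, ..., 20} with its global minimum at 13.
states = list(range(21))
U = lambda x: (x - 13) ** 2
neighbors = lambda x: [max(x - 1, 0), min(x + 1, 20)]
result = anneal(U, states, neighbors)
```

The schedule decays so slowly that halving the temperature requires roughly squaring the elapsed time; that slowness is the price for the theoretical guarantees.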
Learning as Applied to Simulated Annealing
"... Stochastic combinatorial optimization techniques, such as simulated annealing and genetic algorithms, have become increasingly important in design automation as the size of design problems have grown and the design objectives have become increasingly complex. However, stochastic algorithms are often ..."
Abstract

Cited by 1 (0 self)
Stochastic combinatorial optimization techniques, such as simulated annealing and genetic algorithms, have become increasingly important in design automation as the size of design problems has grown and the design objectives have become increasingly complex. However, stochastic algorithms are often slow, since a large number of random design perturbations are required to achieve an acceptable result: they have no built-in "intelligence". In this paper, we show that incremental, statistical learning techniques can improve the quality of results and reduce the number of expensive cost-function evaluations for stochastic optimization for a particular solution quality. In particular, simulated annealing was selected as a representative stochastic optimization approach, and the cell-based layout placement problem was used to evaluate the utility of such a learning-based approach. In this work, we used regression to learn the properties of the solution space and have tested the trained algori...
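The excerpt does not specify the paper's regression model, but the general idea can be sketched: maintain an incrementally trained linear model of the cost change and use it to screen moves before paying for an exact cost evaluation. Everything here (class name, feature, learning rate) is a hypothetical illustration.

```python
import random

class MoveScreen:
    """Incremental linear model  dcost ~= a + b * feature,  fitted online by
    stochastic gradient descent; illustrative only, not the paper's model."""
    def __init__(self, lr=0.01):
        self.a = 0.0   # intercept
        self.b = 0.0   # slope
        self.lr = lr   # SGD learning rate
    def predict(self, feature):
        return self.a + self.b * feature
    def update(self, feature, dcost):
        err = self.predict(feature) - dcost
        self.a -= self.lr * err
        self.b -= self.lr * err * feature

# Train on synthetic move outcomes where the true relation is dcost = 2 + 3f.
rng = random.Random(0)
screen = MoveScreen()
for _ in range(20000):
    f = rng.random()
    screen.update(f, 2.0 + 3.0 * f)
```

Inside an annealing loop, `screen.predict` would gate whether the expensive cost function is called at all, with occasional exact evaluations retained to keep the surrogate honest.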
Improved lower bound on the Shannon capacity of . . .
2000
"... An independent set with 108 vertices in the strong product of four 7cycles (C7 2 \Theta C7 2 \Theta C7 2 \Theta C7 ) is given. This improves the best known lower bound for the Shannon capacity of the graph C7 which is the zeroerror capacity of the corresponding noisy channel. The search was done b ..."
Abstract

Cited by 1 (1 self)
An independent set with 108 vertices in the strong product of four 7-cycles (C7 ⊠ C7 ⊠ C7 ⊠ C7) is given. This improves the best known lower bound for the Shannon capacity of the graph C7, which is the zero-error capacity of the corresponding noisy channel. The search was done by a computer program using the "simulated annealing" algorithm with a temperature schedule constant in time.
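A sketch of this kind of search: build the strong product of cycles, run fixed-temperature Metropolis on vertex in/out flips with a penalty on edges inside the candidate set, then greedily repair any remaining conflicts. For tractability this uses two copies of C7 rather than the paper's four, and the temperature, penalty weight, and step count are illustrative guesses.

```python
import itertools
import math
import random

def strong_product_cycles(n, k):
    """Vertices of the strong product of k copies of the cycle C_n; two
    distinct tuples are adjacent iff every coordinate is equal or adjacent."""
    verts = list(itertools.product(range(n), repeat=k))
    def adj(u, v):
        return u != v and all(a == b or (a - b) % n in (1, n - 1)
                              for a, b in zip(u, v))
    return verts, adj

def anneal_independent_set(verts, adj, T=0.35, steps=20000, seed=1):
    """Fixed-temperature Metropolis on E(S) = 2*(edges inside S) - |S|."""
    rng = random.Random(seed)
    nbrs = {v: [u for u in verts if adj(u, v)] for v in verts}
    inset = {v: False for v in verts}
    for _ in range(steps):
        v = rng.choice(verts)
        conflicts = sum(inset[u] for u in nbrs[v])
        dE = (-2 * conflicts + 1) if inset[v] else (2 * conflicts - 1)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            inset[v] = not inset[v]
    for v in verts:                 # greedy repair: drop conflicted picks
        if inset[v] and any(inset[u] for u in nbrs[v]):
            inset[v] = False
    return [v for v in verts if inset[v]]

verts, adj = strong_product_cycles(7, 2)   # C7 ⊠ C7, 49 vertices
S = anneal_independent_set(verts, adj)
```

The constant temperature keeps the chain mobile enough to escape local optima indefinitely, which suits a search for a record-size set better than a cooling run that freezes.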
A Framework for modelling stochastic optimisation algorithms with Markov chains
"... While various aspects of nature: evolution, clonal selection and annealing, have been the source of inspiration for optimisation algorithms, it is not always clear how and why the algorithms work well on some problems and poorly on other problems. In this thesis we consider properties of exact model ..."
Abstract
While various aspects of nature (evolution, clonal selection, annealing) have been the source of inspiration for optimisation algorithms, it is not always clear how and why the algorithms work well on some problems and poorly on others. In this thesis we consider properties of exact models of optimisation algorithms and relate these properties to specific operators. To facilitate this lower level of understanding of how optimisation algorithms work, we perform a case study of modular modelling on an example of an Artificial Immune System (AIS) algorithm, the B-Cell Algorithm (BCA). From this case study we derive a framework for modelling stochastic optimisation algorithms with Markov chains. Based on a Markov chain model of the BCA we obtain a proof of convergence of the algorithm, bounds for its rate of convergence, and a brief numerical analysis of the Markov chain model. The framework demonstrates that optimisation algorithms can conceptually be split into two parts: search operators and state operators. Search operators are represented by a "sample matrix"; state operators are represented by a "possible transit matrix". These matrices can be combined by two equations to form the transition matrix of a Markov chain model of the algorithm. Equations (5.11) and (5.12) are the key to the framework; they allow the construction of the transition matrix from the sample matrix and possible transit matrix. We apply the framework to create a Markov chain model of a Hill Climber, rewrite an existing model of Simulated Annealing in terms of the framework, and produce a novel Markov chain model of the (1+1) Evolutionary Strategy with fitness-proportional mutation. These models, along with the model of the BCA, demonstrate that the framework is not restricted to a particular field of optimisation.
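The decomposition the thesis describes, a proposal ("sample") part and an acceptance part combined into one transition matrix, can be sketched for fixed-temperature Metropolis annealing on a path of states. The construction below is a generic textbook illustration, not Equations (5.11) and (5.12) themselves.

```python
import numpy as np

def metropolis_matrix(U, n, T):
    """Transition matrix P built from a symmetric proposal ('sample') matrix Q
    and an acceptance matrix A; rejected proposals return mass to the diagonal."""
    Q = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                Q[i, j] = 0.5              # propose each neighbour w.p. 1/2
    A = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            if U(j) > U(i):                # uphill moves accepted with
                A[i, j] = np.exp(-(U(j) - U(i)) / T)  # Boltzmann probability
    P = Q * A                              # accepted off-diagonal moves
    P[np.arange(n), np.arange(n)] = 1.0 - P.sum(axis=1)
    return P

U = lambda x: (x - 2) ** 2                 # toy energy on states 0..4
P = metropolis_matrix(U, 5, 1.0)
```

Rows sum to one by construction, and since Q is symmetric the Gibbs measure π_i ∝ exp(-U(i)/T) satisfies detailed balance, so π P = π; this is exactly the kind of property an exact Markov chain model lets one verify numerically.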
Learning as Applied to Stochastic Optimization for Standard Cell Placement
"... Stochastic combinatorial optimization techniques, such as simulated annealing and genetic algorithms, have become increasingly important in design automation as the size of design problems have grown and the design objectives have become increasingly complex. However, stochastic algorithms are often ..."
Abstract
Stochastic combinatorial optimization techniques, such as simulated annealing and genetic algorithms, have become increasingly important in design automation as the size of design problems has grown and the design objectives have become increasingly complex. However, stochastic algorithms are often slow, since a large number of random design perturbations are required to achieve an acceptable result: they have no built-in intelligence. In this paper, we show that incremental, statistical learning techniques can improve the quality of results and reduce the number of expensive cost-function evaluations for stochastic optimization for a particular solution quality. In particular, simulated annealing was selected as a representative stochastic optimization approach, and the cell-based layout placement problem was used to evaluate the utility of such a learning-based approach. In this work, we used regression to learn the properties of the solution space and have tested the trained algorithm on...