Results 1-10 of 44
Global Optimization of Statistical Functions with Simulated Annealing
Journal of Econometrics, 1994
Abstract

Cited by 126 (1 self)
Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.
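The accept/reject loop at the heart of the algorithm is compact. Below is a minimal sketch of simulated annealing for minimization (not the specific variant tested in the paper; the test function, cooling schedule, and parameter values are illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.999, steps=20_000):
    """Minimize f: always accept downhill moves, and accept uphill moves
    with probability exp(-delta/T), so the search can escape local minima."""
    x, fx = x0, f(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = f(y)
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= cooling  # geometric cooling schedule
    return best, f_best

# A one-dimensional test function with many local minima.
random.seed(0)
f = lambda x: x * x + 10 * math.sin(3 * x)
x_min, f_min = simulated_annealing(f, 5.0, lambda x: x + random.gauss(0, 0.5))
```

A plain hill-climber started at 5.0 gets trapped in the nearest basin; the annealed search, because of its early high-temperature phase, can cross the intervening barriers.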
Learning Evaluation Functions for Global Optimization and Boolean Satisfiability
In Proc. of 15th National Conf. on Artificial Intelligence (AAAI), 1998
Abstract

Cited by 59 (3 self)
This paper describes STAGE, a learning approach to automatically improving search performance on optimization problems. STAGE learns an evaluation function which predicts the outcome of a local search algorithm, such as hill-climbing or WALKSAT, as a function of state features along its search trajectories. The learned evaluation function is used to bias future search trajectories toward better optima. We present positive results on six large-scale optimization domains.
Learning Evaluation Functions to Improve Optimization by Local Search
Journal of Machine Learning Research, 2000
Abstract

Cited by 56 (0 self)
This paper describes algorithms that learn to improve search performance on large-scale optimization tasks. The main algorithm, Stage, works by learning an evaluation function that predicts the outcome of a local search algorithm, such as hill-climbing or Walksat, from features of states visited during search. The learned evaluation function is then used to bias future search trajectories toward better optima on the same problem. Another algorithm, X-Stage, transfers previously learned evaluation functions to new, similar optimization problems. Empirical results are provided on seven large-scale optimization domains: bin-packing, channel routing, Bayesian network structure-finding, radiotherapy treatment planning, cartogram design, Boolean satisfiability, and Boggle board setup.
Distributed Genetic Algorithms for the Floorplan Design Problem
IEEE Transactions on Computer-Aided Design, 1991
Abstract

Cited by 55 (1 self)
Floorplan design is an important stage in the VLSI design cycle. Designing a floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wirelength measures. This paper presents a method to solve the floorplan design problem using distributed genetic algorithms. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of our method and point out its advantages over other methods, such as simulated annealing. Our method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
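A generic island-model (distributed) GA can be sketched as follows. This is not the authors' encoding, operators, or migration policy; the toy fitness, parameter values, and ring topology are illustrative assumptions:

```python
import random

def island_ga(fitness, length=24, islands=4, pop=20, gens=60,
              migrate_every=10, seed=5):
    """Island-model GA: independent sub-populations evolve in isolation
    and periodically exchange their best individuals over a ring, loosely
    mirroring the punctuated-equilibria idea."""
    rng = random.Random(seed)
    def new_ind():
        return [rng.randint(0, 1) for _ in range(length)]
    demes = [[new_ind() for _ in range(pop)] for _ in range(islands)]

    def evolve(deme):
        """One generation: tournament selection, one-point crossover, mutation."""
        def pick():
            return max(rng.sample(deme, 3), key=fitness)
        nxt = []
        while len(nxt) < pop:
            a, b = pick(), pick()
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(length)] ^= 1
            nxt.append(child)
        return nxt

    for g in range(1, gens + 1):
        demes = [evolve(d) for d in demes]
        if g % migrate_every == 0:                 # ring migration of elites
            elites = [max(d, key=fitness) for d in demes]
            for i, d in enumerate(demes):
                d[rng.randrange(pop)] = elites[(i - 1) % islands][:]
    best = max((max(d, key=fitness) for d in demes), key=fitness)
    return best, fitness(best)

# Maximize the number of ones (a stand-in for a floorplan cost function).
best, score = island_ga(sum)
```

Isolation between migrations lets each island converge on a different region of the search space; periodic elite exchange then propagates the best building blocks, which is the claimed advantage over a single panmictic population.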
Optimal design of a CMOS op-amp via geometric programming
IEEE Transactions on Computer-Aided Design, 2001
Abstract

Cited by 51 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result, the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...
An Effective Congestion Driven Placement Framework
ISPD, 2002
Abstract

Cited by 42 (0 self)
We present a fast but reliable way to detect routing criticalities in VLSI chips. In addition, we show how this congestion estimation can be incorporated into a partitioning-based placement algorithm. In contrast to previous approaches, we do not rerun parts of the placement algorithm or apply a post-placement optimization; instead, we use our congestion estimator for dynamic avoidance of routability problems in a single run of the placement algorithm. Computational experiments on chips with up to 1,300,000 cells are presented: the framework reduces the usage of the most critical routing edges by 9.0% on average, while the running time of the placement increases by only about 8.7%. However, due to the smaller congestion, the running time of routing tools can be decreased drastically, so the total time for placement and (global) routing is decreased by 47% on average.
Evolutionary Monte Carlo: Applications to C_p Model Sampling and Change Point Problem
Statistica Sinica, 2000
Abstract

Cited by 25 (5 self)
Motivated by the success of genetic algorithms and simulated annealing on hard optimization problems, the authors propose a new Markov chain Monte Carlo (MCMC) algorithm, called evolutionary Monte Carlo. This algorithm incorporates several attractive features of genetic algorithms and simulated annealing into the framework of MCMC. It works by simulating a population of Markov chains in parallel, where each chain is attached to a different temperature. The population is updated by mutation (Metropolis update), crossover (partial state swapping), and exchange (full state swapping) operators. The algorithm is illustrated through examples of Cp-based model selection and change-point identification. The numerical results and extensive comparisons show that evolutionary Monte Carlo is a promising approach for simulation and optimization.
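The three update operators can be sketched on a toy bit-string energy. This is an illustrative reduction, not the authors' algorithm: the temperature ladder, the prefix-swap crossover, and the choice of acceptance temperature for crossover are assumptions; the exchange move uses the standard parallel-tempering acceptance rule:

```python
import math
import random

def evolutionary_mc(f, length=20, pop=5, t_hot=4.0, sweeps=1500, seed=3):
    """Toy evolutionary Monte Carlo: a population of chains on a
    temperature ladder, updated by mutation (Metropolis bit flip),
    crossover (segment swap between two chains), and exchange
    (full state swap between adjacent temperatures)."""
    rng = random.Random(seed)
    temps = [t_hot * 0.5 ** k for k in range(pop)]           # hot -> cold
    chains = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop)]
    best = min(chains, key=f)[:]
    f_best = f(best)

    def metropolis(delta, t):
        return delta <= 0 or rng.random() < math.exp(-delta / t)

    for _ in range(sweeps):
        # Mutation: Metropolis single-bit flip in every chain.
        for k in range(pop):
            cand = chains[k][:]
            cand[rng.randrange(length)] ^= 1
            if metropolis(f(cand) - f(chains[k]), temps[k]):
                chains[k] = cand
        # Crossover: swap a prefix between two random chains, accepted by
        # a Metropolis rule on the total energy change (hotter temperature).
        a, b = rng.sample(range(pop), 2)
        cut = rng.randrange(1, length)
        ca = chains[a][:cut] + chains[b][cut:]
        cb = chains[b][:cut] + chains[a][cut:]
        d = f(ca) + f(cb) - f(chains[a]) - f(chains[b])
        if metropolis(d, max(temps[a], temps[b])):
            chains[a], chains[b] = ca, cb
        # Exchange: parallel-tempering swap between adjacent temperatures.
        k = rng.randrange(pop - 1)
        g = (1 / temps[k] - 1 / temps[k + 1]) * (f(chains[k]) - f(chains[k + 1]))
        if g >= 0 or rng.random() < math.exp(g):
            chains[k], chains[k + 1] = chains[k + 1], chains[k]
        cold = min(chains, key=f)
        if f(cold) < f_best:
            best, f_best = cold[:], f(cold)
    return best, f_best

# Energy: number of mismatches against a hidden target pattern.
target = [i % 2 for i in range(20)]
mismatches = lambda s: sum(b != t for b, t in zip(s, target))
best, f_best = evolutionary_mc(mismatches)
```

The hot chains explore freely while the cold chains refine; crossover and exchange let good partial solutions migrate down the ladder.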
Interactive PhysicallyBased Manipulation of Discrete/Continuous Models
1995
Abstract

Cited by 24 (1 self)
Physically-based modeling has been used in the past to support a variety of interactive modeling tasks, including free-form surface design, mechanism design, constrained drawing, and interactive camera control. In these systems, the user interacts with the model by exerting virtual forces, to which the system responds subject to the active constraints. In the past, this kind of interaction has been applicable only to models that are governed by continuous parameters. In this paper we present an extension to mixed continuous/discrete models, emphasizing constrained layout problems that arise in architecture and other domains. When the object being dragged is blocked from further motion by geometric constraints, a local discrete search is triggered, during which transformations such as swapping of adjacent objects may be performed. The result of the search is a "nearby" state in which the target object has been moved in the indicated direction and in which all constraints are satisfied. ...
Simulated Annealing with Extended Neighbourhood
1991
Abstract

Cited by 21 (14 self)
Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems for which little prior knowledge is available. It can produce very high quality solutions for hard combinatorial optimization problems. However, the computation time required by SA is very large. Various methods have been proposed to reduce the computation time, but they mainly deal with the careful tuning of SA's control parameters. This paper first analyzes the impact of SA's neighbourhood on SA's performance and shows that SA with a larger neighbourhood is better than SA with a smaller one. The paper also gives a general model of SA, which has both dynamic generation probability and acceptance probability, and proves its convergence. All variants of SA can be unified under such a generalization. Finally, a method of extending SA's neighbourhood is proposed, which uses a discrete approximation to some continuous probability function as the generation function in SA, and several impo...
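One concrete way to widen a generation function in the spirit of the abstract is to use a heavy-tailed step distribution. The sketch below (an illustrative assumption, not the paper's construction) contrasts narrow Gaussian steps with Cauchy-distributed steps in an otherwise identical SA loop; the occasional long Cauchy jump lets the search move between distant basins in one step:

```python
import math
import random

def sa_best(f, x0, gen, steps=20_000, t0=5.0, cooling=0.9995):
    """Plain SA for minimization; only the generation function differs."""
    x, fx = x0, f(x0)
    best = fx
    t = t0
    for _ in range(steps):
        y = gen(x)
        fy = f(y)
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            best = min(best, fx)
        t *= cooling
    return best

random.seed(7)
f = lambda x: x * x + 10 - 10 * math.cos(2 * math.pi * x)  # many local minima
narrow = sa_best(f, 8.0, lambda x: x + random.gauss(0, 0.1))
# Cauchy steps via the inverse-CDF transform: scale * tan(pi * (u - 1/2)).
wide = sa_best(f, 8.0,
               lambda x: x + 0.5 * math.tan(math.pi * (random.random() - 0.5)))
```

With the narrow generator, escaping a basin requires many consecutive uphill acceptances; the wide generator trades some local refinement for much better basin-to-basin mobility.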
Using Prediction to Improve Combinatorial Optimization Search
In Proc. of 6th Int'l Workshop on Artificial Intelligence and Statistics, 1997
Abstract

Cited by 20 (1 self)
To appear in AISTATS-97. This paper describes a statistical approach to improving the performance of stochastic search algorithms for optimization. Given a search algorithm A, we learn to predict the outcome of A as a function of state features along a search trajectory. Predictions are made by a function approximator such as global or locally-weighted polynomial regression; training data is collected by Monte Carlo simulation. Extrapolating from this data produces a new evaluation function which can bias future search trajectories toward better optima. Our implementation of this idea, STAGE, has produced very promising results on two large-scale domains.
1 Introduction
The problem of combinatorial optimization is simply stated: given a finite state space X and an objective function f : X → ℝ, find an optimal state x* = argmin_{x ∈ X} f(x). Typically, X is huge, and finding an optimal x* is intractable. However, there are many heuristic algorithms that attempt to exploit f's structur...