Results 1-10 of 34
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
SIAM Journal on Optimization, 2004
Cited by 35 (7 self)
Abstract:
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
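As a rough illustration of the pattern-search family this paper builds on, here is a minimal compass-search sketch (the simplest GPS instance, continuous variables only, with no filter or mixed-variable handling); the objective and starting point are arbitrary examples, not taken from the paper:

```python
def compass_search(f, x, delta=1.0, tol=1e-6, max_iter=10_000):
    """Minimal compass search: poll along +/- coordinate directions;
    move on an improving poll, halve the mesh size on an unsuccessful one."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                y = list(x)
                y[i] += sign * delta
                fy = f(y)
                if fy < fx:          # successful poll: accept, keep mesh size
                    x, fx = y, fy
                    improved = True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5             # unsuccessful poll: refine the mesh
    return x, fx

# Illustrative smooth objective with minimizer (1, -2).
x, fx = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                       x=[0.0, 0.0])
```

The full GPS framework adds a flexible search step, general positive-spanning poll directions, and (in the paper above) filter handling of nonlinear constraints and discrete variables; none of that is reflected in this sketch.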
Convergence of Simulated Annealing using Foster-Lyapunov Criteria, 1999
Cited by 14 (7 self)
Abstract:
Simulated annealing is a popular and much-studied method for maximizing functions on finite or compact spaces. For non-compact state spaces, the method is still sound, but convergence results are scarce. We show here how to prove convergence in such cases, for Markov chains satisfying suitable drift and minorization conditions.
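For readers new to the method, a minimal simulated annealing loop on a continuous domain might look as follows; the objective, Gaussian proposal, and cooling schedule are illustrative choices only, not the chains or conditions analyzed in the paper above:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, schedule, n_iters=10_000):
    """Minimize f by simulated annealing.

    f        : objective to minimize
    x0       : initial point
    neighbor : function proposing a candidate from the current point
    schedule : function mapping iteration k to temperature T_k > 0
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(n_iters):
        t = schedule(k)
        y = neighbor(x)
        fy = f(y)
        # Accept downhill moves always; uphill moves with Metropolis probability.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Example: minimize (x - 2)^2 on the real line.
random.seed(0)
xmin, fmin = simulated_annealing(
    lambda x: (x - 2.0) ** 2,
    x0=10.0,
    neighbor=lambda x: x + random.gauss(0.0, 0.5),
    schedule=lambda k: 1.0 / (1.0 + k),  # a simple, arbitrary cooling schedule
)
```

The convergence theory surveyed in these papers concerns exactly which choices of proposal and schedule make chains like this one provably converge, especially on non-compact or continuous domains.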
Adaptive Search with Stochastic Acceptance Probabilities for Global Optimization
Cited by 7 (1 self)
Abstract:
We present an extension of continuous domain Simulated Annealing. Our algorithm employs a globally reaching candidate generator and adaptive stochastic acceptance probabilities, and converges in probability to the optimal value. An application to simulation-optimization problems with asymptotically diminishing errors is presented. Numerical results on a noisy protein-folding problem are included.
Simulated annealing: Rigorous finite-time guarantees for optimization on continuous domains
Advances in Neural Information Processing Systems 20, 2008
Cited by 4 (3 self)
Abstract:
Simulated annealing is a popular method for approaching the solution of a global optimization problem. Existing results on its performance apply to discrete combinatorial optimization where the optimization variables can assume only a finite set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any optimization problem on a bounded domain and establish a connection between simulated annealing and up-to-date theory of convergence of Markov chain Monte Carlo methods on continuous domains. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory. Optimization is the general problem of finding a value of a vector of variables θ that maximizes (or minimizes) some scalar criterion U(θ). The set of all possible values of the vector θ is called the optimization domain. The elements of θ can be discrete or continuous variables. In the first case ...
Convergence and First Hitting Time of Simulated Annealing Algorithms for Continuous Global Optimization
Cited by 4 (1 self)
Abstract:
In this paper simulated annealing algorithms for continuous global optimization are considered. Under the simplifying assumption of a known optimal value, the convergence of the algorithms and an upper bound for the expected first hitting time, i.e. the expected number of iterations before reaching the global optimum value within accuracy ε, are established. The obtained results are compared with those for the ideal algorithm PAS (Pure Adaptive Search) and for the simple PRS (Pure Random Search) algorithm.
KEYWORDS: global optimization, simulated annealing, convergence, first hitting time
1 Introduction
The simulated annealing approach was inspired by a physical phenomenon. If we reduce the temperature of a liquid, the thermal mobility of the molecules is lost. If the decrease is slow enough, a pure crystal is formed, corresponding to a state of minimum energy. If the decrease is too fast, a polycrystalline or an amorphous state with higher energy is reached. In [21] a Monte Carlo meth...
AUTOMATED DESIGN OF APPLICATION-SPECIFIC SUPERSCALAR PROCESSORS, 2006
Cited by 3 (0 self)
Abstract:
Automated design of superscalar processors can provide future system-on-chip (SOC) designers with a turnkey method of generating superscalar processors that are Pareto-optimal in terms of performance, energy consumption, and area for the target application program(s). Unfortunately, current optimization methods are based on time-consuming cycle-accurate simulation, unsuitable for the analysis of hundreds of thousands of design options that is required to arrive at Pareto-optimal designs. This dissertation bridges the gap between the large design space of superscalar processors and the inability of cycle-accurate simulation to analyze a large design space, by providing a computationally and conceptually simple analytical method for generating Pareto-optimal superscalar processor designs. The proposed and evaluated analytical method consists of three parts: (1) a method for analytically estimating the performance in terms of cycles per instruction (CPI) using the application program statistics and the superscalar processor parameters, (2) a method of analytically estimating various energy-consuming activities using the application program statistics and the superscalar processor parameters, and (3) a method of finding the Pareto ...
Random search algorithms, 2009
Cited by 2 (0 self)
Abstract:
Random search algorithms are useful for many ill-structured global optimization problems with continuous and/or discrete variables. Typically random search algorithms sacrifice a guarantee of optimality for finding a good solution quickly, with convergence results in probability. Random search algorithms include simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, ant colony optimization, cross-entropy, stochastic approximation, multistart and clustering algorithms, to name a few. They may be categorized as global (exploration) versus local (exploitation) search, or instance-based versus model-based. However, one feature these methods share is the use of probability in determining their iterative procedures. This article provides an overview of these random search algorithms, with a probabilistic view that ties them together. A random search algorithm refers to an algorithm that uses some kind of randomness or probability (typically in the form of a pseudorandom number generator) in the definition of the method, and in the literature may be called a Monte Carlo method or a stochastic algorithm. The term metaheuristic is also commonly associated with random search algorithms. Simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, ant colony optimization, cross-entropy, stochastic approximation, multistart, clustering algorithms, and other random search methods are being widely applied to continuous and discrete global optimization problems; see, for example, ...
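The simplest member of this family, Pure Random Search (PRS, the baseline that the first-hitting-time paper above compares against), can be sketched in a few lines; the objective and sampling domain here are illustrative assumptions:

```python
import random

def pure_random_search(f, sampler, n_iters=10_000):
    """Pure Random Search: sample the domain independently at random,
    keep the best point seen so far."""
    best = sampler()
    fbest = f(best)
    for _ in range(n_iters):
        x = sampler()
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Example: minimize (x - 0.3)^2 over the unit interval.
random.seed(1)
x, fx = pure_random_search(
    lambda x: (x - 0.3) ** 2,
    sampler=lambda: random.uniform(0.0, 1.0),
)
```

PRS converges in probability on bounded domains but makes no use of past information; the methods surveyed in the article (annealing, adaptive search, model-based methods) differ precisely in how they bias future samples using what has been seen.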
Homotopy Optimization Methods for Global Optimization, 2005
Cited by 2 (0 self)
Abstract:
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating the performance of HOM and HOPE.
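Based on the description above, the core HOM loop (without the HOPE ensemble) can be sketched as follows; the linear blend, the crude finite-difference local minimizer, and the double-well test objective are all illustrative assumptions, not taken from the paper:

```python
def gradient_descent(h, x, lr=0.05, iters=200, eps=1e-6):
    """Crude 1-D local minimizer: finite-difference gradient descent."""
    for _ in range(iters):
        g = (h(x + eps) - h(x - eps)) / (2 * eps)
        x -= lr * g
    return x

def hom(f_easy, f_target, x0, local_min, n_steps=10):
    """HOM sketch: minimize the blend
    h(x, t) = (1 - t) * f_easy(x) + t * f_target(x)
    for t = 0, 1/n, ..., 1, warm-starting each solve at the previous minimizer."""
    x = x0
    for i in range(n_steps + 1):
        t = i / n_steps
        h = lambda z, t=t: (1.0 - t) * f_easy(z) + t * f_target(z)
        x = local_min(h, x)
    return x

# Illustrative double-well target with its global minimum near x = -1
# and a shallower local minimum near x = +1.
f_target = lambda x: (x * x - 1.0) ** 2 + 0.2 * x
f_easy = lambda x: x * x          # easy convex surrogate to start the homotopy
x_star = hom(f_easy, f_target, x0=2.0, local_min=gradient_descent)
```

Starting plain gradient descent at x0 = 2.0 would be captured by the local minimum near +1; warm-starting through the sequence of blended problems lets the minimizer slide into the global basin instead, which is the behavior HOM is designed to exploit.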
Stochastic optimization on continuous domains with finite-time guarantees by Markov chain Monte Carlo methods
Cited by 1 (1 self)
Abstract:
We introduce bounds on the finite-time performance of Markov chain Monte Carlo (MCMC) algorithms in solving global stochastic optimization problems defined over continuous domains. It is shown that MCMC algorithms with finite-time guarantees can be developed with a proper choice of the target distribution and by studying their convergence in total variation norm. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory.