Results 1–10 of 85
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
 SIAM Review, Vol. 45, No. 3, pp. 385–482
, 2003
"... Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked ..."
Abstract

Cited by 222 (15 self)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
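To make the kind of method surveyed here concrete, the following is a minimal compass-search sketch in Python; it polls the 2n coordinate directions and halves the step when no poll point improves the incumbent. All names and parameter choices are illustrative, not any specific algorithm from the review.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    # poll the 2n coordinate directions; halve the step when no poll
    # point improves on the incumbent (simple decrease)
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # refine the mesh
    return x, fx

# minimize a smooth quadratic without using derivatives
xmin, fmin = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```

No derivative is ever evaluated; the step-halving rule is exactly the mesh-refinement mechanism whose convergence the modern analysis addresses.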
Noisy optimization with evolution strategies
 SIAM Journal on Optimization
"... Evolution strategies are general, natureinspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice if neither deriv ..."
Abstract

Cited by 36 (6 self)
Evolution strategies are general, nature-inspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice if neither derivatives of the objective function are at hand nor differentiability and numerical accuracy can be assumed. However, despite their widespread use, there is little exchange between members of the “classical” optimization community and people working in the field of evolutionary computation. It is our belief that both sides would benefit from such an exchange. In this paper, we present a brief outline of evolution strategies and discuss some of their properties in the presence of noise. We then empirically demonstrate that for a simple but nonetheless nontrivial noisy objective function, an evolution strategy outperforms other optimization algorithms designed to be able to cope with noise. The environment in which the algorithms are tested is deliberately chosen to afford a transparency of the results that reveals the strengths and shortcomings of the strategies, making it possible to draw conclusions with regard to the design of better optimization algorithms for noisy environments.
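A minimal (mu, lambda) evolution strategy on a noisy sphere function illustrates the class of methods discussed; the deterministic sigma-decay schedule and all constants below are illustrative assumptions, not the authors' experimental setup.

```python
import random

def evolution_strategy(f, x0, sigma=1.0, lam=10, mu=3, iters=200, seed=0):
    # (mu, lam)-ES: sample lam offspring around the mean, keep the mu best
    # under the *noisy* objective, recombine by averaging
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(iters):
        pop = []
        for _ in range(lam):
            child = [m + sigma * rng.gauss(0, 1) for m in mean]
            pop.append((f(child, rng), child))
        pop.sort(key=lambda p: p[0])
        elite = [c for _, c in pop[:mu]]
        mean = [sum(col) / mu for col in zip(*elite)]
        sigma *= 0.97            # simple deterministic step-size decay (assumed)
    return mean

def noisy_sphere(x, rng):
    # true optimum at the origin; every evaluation is corrupted by noise
    return sum(xi * xi for xi in x) + 0.01 * rng.gauss(0, 1)

m_es = evolution_strategy(noisy_sphere, [3.0, -2.0])
```

Because selection ranks noisy observations, the strategy tolerates evaluation noise without any explicit averaging, which is the robustness property the abstract emphasizes.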
On the convergence of grid-based methods for unconstrained optimization
 SIAM Journal on Optimization
"... Abstract. The convergence of direct search methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined meshes. An important aspect of the main convergence result is that translation, rotation, sca ..."
Abstract

Cited by 32 (0 self)
Abstract. The convergence of direct search methods for unconstrained minimization is examined in the case where the underlying method can be interpreted as a grid or pattern search over successively refined meshes. An important aspect of the main convergence result is that translation, rotation, scaling, and shearing of the successive grids are allowed.
Key words. Grid-based optimization, derivative-free optimization, positive basis methods, convergence analysis, multidirectional search.
AMS subject classifications. 49M30, 65K05
1. Introduction. Recent survey papers [1], [7], [10] report on significant renewed interest in algorithms for derivative-free unconstrained optimization. Much of this recent interest has been provoked by new convergence results (see, for example, [1], [6], [8], [9]). Most of the current derivative-free algorithms for which convergence results have been established belong to one or more of three categories: line search
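The grid-search idea can be sketched with a minimal positive basis in R^2, three directions that positively span the plane, polled on successively refined meshes; the example problem and constants are illustrative, and this sketch omits the grid transformations (rotation, scaling, shearing) that the paper's convergence result permits.

```python
def basis_search(f, x0, step=1.0, tol=1e-8, max_iter=100_000):
    # minimal positive basis in R^2: e1, e2, -(e1 + e2) positively span the plane
    dirs = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
    x = tuple(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        for d in dirs:
            y = (x[0] + step * d[0], x[1] + step * d[1])
            fy = f(y)
            if fy < fx:
                x, fx = y, fy    # successful poll: keep the same mesh size
                break
        else:
            step *= 0.5          # unsuccessful poll: refine the grid
    return x, fx

xg, fg = basis_search(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2, (0.0, 0.0))
```

Whenever the gradient is nonzero, at least one of the three directions is a descent direction for a small enough mesh size, which is what drives the convergence analysis for positive basis methods.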
Reinforcement Learning by Policy Search
, 2000
"... One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations could be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are know ..."
Abstract

Cited by 31 (2 self)
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations could be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. Reinforcement learning means learning a policy, a mapping of observations into actions, based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies being searched is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate various architectures for controllers with memory, including controllers with external memory, finite-state controllers, and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience reuse. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
Wedge Trust Region Methods for Derivative Free Optimization
 Mathematical Programming
, 2000
"... A new method for derivativefree optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the gradient is not available. The method generates a model that interpolates the objective function at a set of s ..."
Abstract

Cited by 30 (1 self)
A new method for derivative-free optimization is presented. It is designed for solving problems in which the objective function is smooth and the number of variables is moderate, but the gradient is not available. The method generates a model that interpolates the objective function at a set of sample points, and uses trust regions to promote convergence. The step-generation subproblem ensures that all the iterates satisfy a geometric condition and are therefore adequate for updating the model. The sample points are updated using a scheme that improves the accuracy of the interpolation model when needed. Two versions of the method are presented: one using linear models and the other using quadratic models. Numerical tests comparing the new approach with established methods for derivative-free optimization are reported. This work was supported by National Science Foundation grant CCR-9987818 and by Department of Energy grant DE-FG02-87ER25047-A004.
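A one-dimensional sketch conveys the interpolation-plus-trust-region idea: build a quadratic model from sampled function values, minimize it inside the trust region, and resize the region by the usual ratio test. This is a generic model-based scheme with assumed constants, not the wedge method itself (in 1-D the geometric condition on the sample set is trivially satisfied).

```python
def dfo_trust_region(f, x, delta=1.0, tol=1e-8, max_iter=100):
    # quadratic model through {x - delta, x, x + delta}; minimize it inside
    # the trust region, then accept/reject and resize delta by a ratio test
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        fm, fp = f(x - delta), f(x + delta)
        g = (fp - fm) / (2 * delta)            # model gradient at x
        h = (fp - 2 * fx + fm) / delta ** 2    # model curvature
        if h > 0 and abs(g / h) <= delta:
            s = -g / h                         # interior model minimizer
        else:
            s = -delta if g > 0 else delta     # minimizer on the boundary
        pred = -(g * s + 0.5 * h * s * s)      # model-predicted decrease
        fy = f(x + s)
        rho = (fx - fy) / pred if pred > 0 else -1.0
        if rho > 0:
            x, fx = x + s, fy                  # accept the trial step
        delta = 2 * delta if rho > 0.75 else 0.5 * delta if rho < 0.25 else delta
    return x, fx

xt, ft = dfo_trust_region(lambda t: (t - 2.0) ** 2 + 1.0, 0.0)
```

Only function values are used: the "gradient" and "curvature" are finite differences of the interpolation samples, exactly the information an interpolation model encodes.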
Geometry of Interpolation Sets in Derivative Free Optimization
"... Abstract. We consider derivative free methods based on sampling approaches for nonlinear optimization problems where derivatives of the objective function are not available and cannot be directly approximated. We show how the bounds on the error between an interpolating polynomial and the true funct ..."
Abstract

Cited by 28 (7 self)
Abstract. We consider derivative-free methods based on sampling approaches for nonlinear optimization problems where derivatives of the objective function are not available and cannot be directly approximated. We show how the bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative-free sampling methods. These bounds involve a constant that reflects the quality of the interpolation set. The main task of such a derivative-free algorithm is to maintain an interpolation sampling set so that this constant remains small, and at least uniformly bounded. This constant is often described through the basis of Lagrange polynomials associated with the interpolation set. We provide an alternative, more intuitive definition for this concept and show how this constant is related to the condition number of a certain matrix. This relation enables us to provide a range of algorithms for maintaining the interpolation set so that this condition number or the geometry constant remains uniformly bounded. We also derive bounds on the error between the model and the function, and between their derivatives, directly in terms of this condition number and of this geometry constant.
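The link between set geometry and a matrix condition number can be illustrated for linear interpolation in R^2: condition the matrix of displacements scaled by the sample radius. The closed-form 2x2 singular values and this particular scaling are illustrative simplifications of the paper's construction.

```python
import math

def cond2x2(a, b, c, d):
    # 2-norm condition number of [[a, b], [c, d]] via closed-form singular values
    s = a * a + b * b + c * c + d * d
    det = a * d - b * c
    disc = math.sqrt(max(s * s - 4 * det * det, 0.0))
    smax = math.sqrt((s + disc) / 2)
    smin = math.sqrt(max((s - disc) / 2, 0.0))
    return float("inf") if smin == 0 else smax / smin

def poisedness_proxy(points):
    # for linear interpolation on {y0, y1, y2} in R^2, condition the matrix of
    # displacements scaled by the sample radius; a large value signals a badly
    # poised (nearly affinely dependent) interpolation set
    y0, y1, y2 = points
    r = max(math.dist(y0, y1), math.dist(y0, y2))
    return cond2x2((y1[0] - y0[0]) / r, (y1[1] - y0[1]) / r,
                   (y2[0] - y0[0]) / r, (y2[1] - y0[1]) / r)

good = poisedness_proxy([(0, 0), (1, 0), (0, 1)])    # well-spread set
bad = poisedness_proxy([(0, 0), (1, 0), (1, 0.01)])  # nearly collinear set
```

A nearly collinear sample set yields a nearly singular matrix and hence a huge condition number, which is exactly the degeneracy an algorithm must avoid when updating its interpolation set.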
A Class of Gradient Unconstrained Minimization Algorithms With Adaptive Stepsize
, 1999
"... In this paper the development, convergence theory and numerical testing of a class of gradient unconstrained minimization algorithms with adaptive stepsize are presented. The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for al ..."
Abstract

Cited by 28 (15 self)
In this paper the development, convergence theory, and numerical testing of a class of gradient unconstrained minimization algorithms with adaptive stepsize are presented. The proposed class comprises four algorithms: the first two incorporate techniques for the adaptation of a common stepsize for all coordinate directions, and the other two allow an individual adaptive stepsize along each coordinate direction. All the algorithms are computationally efficient and possess interesting convergence properties utilizing estimates of the Lipschitz constant that are obtained without additional function or gradient evaluations. The algorithms have been implemented and tested on some well-known test cases as well as on real-life artificial neural network applications, and the results have been very satisfactory.
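The idea of estimating a Lipschitz constant from already-computed quantities can be sketched as follows; the running-maximum update and the 1/(2L) stepsize rule are illustrative choices, not the paper's four algorithms.

```python
import math

def adaptive_gradient_descent(grad, x0, iters=300):
    # stepsize 1 / (2 * L), where L is a running estimate of the Lipschitz
    # constant ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||, computed from
    # quantities already at hand (no extra function or gradient evaluations)
    x = list(x0)
    g = grad(x)
    L = 1.0                      # initial Lipschitz guess (assumed)
    for _ in range(iters):
        step = 1.0 / (2.0 * L)
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        dx = math.dist(x_new, x)
        dg = math.dist(g_new, g)
        if dx > 0:
            L = max(L, dg / dx)  # keep the largest local estimate seen
        x, g = x_new, g_new
    return x

# f(x, y) = x^2 + 10 y^2 has gradient (2x, 20y) and Lipschitz constant 20
sol = adaptive_gradient_descent(lambda v: [2 * v[0], 20 * v[1]], [1.0, 1.0])
```

The estimate uses only successive iterates and gradients, so the stepsize adapts at zero extra evaluation cost, the property the abstract highlights.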
Objective-Derivative-Free Methods for Constrained Optimization
 Mathematical Programming
, 1999
"... We propose feasible descent methods for constrained minimization that do not make explicit use of objective derivative information. The methods at each iteration sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective ..."
Abstract

Cited by 28 (8 self)
We propose feasible descent methods for constrained minimization that do not make explicit use of objective derivative information. The methods at each iteration sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set, and the search directions are chosen such that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linear constraint case is reported. Key words. Constrained optimization, derivative-free method, feasible descent, stationary point, metric regularity.
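For box constraints, where projections are trivial, the sampling-and-projection scheme can be sketched as below; polling coordinate rays and halving the stepsize are illustrative choices, not the paper's general construction.

```python
def project_box(x, lo, hi):
    # Euclidean projection onto the box {lo <= x <= hi}
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def feasible_descent(f, x, lo, hi, step=1.0, tol=1e-8, max_iter=100_000):
    # sample f along arcs obtained by projecting coordinate rays onto the box;
    # shrink the stepsize when no sampled point improves the incumbent
    x = project_box(list(x), lo, hi)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                y = project_box(y, lo, hi)   # every sampled point is feasible
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
    return x, fx

# minimize (x - 2)^2 + (y - 2)^2 over [0, 1] x [0, 3]; the constrained
# minimizer is (1, 2), on the boundary of the box
xb, fb = feasible_descent(lambda p: (p[0] - 2) ** 2 + (p[1] - 2) ** 2,
                          [0.0, 0.0], [0.0, 0.0], [1.0, 3.0])
```

Every trial point is projected before evaluation, so the iterates remain feasible throughout, the defining property of a feasible descent method.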
Snobfit – Stable Noisy Optimization by Branch and Fit
"... this paper produces a userspeci ed number of suggested evaluation points in each step; proceeds by successive partitioning of the box (branch) and building local quadratic models ( t); combines local and global search and allows the user to determine which of both should be emphasized; h ..."
Abstract

Cited by 26 (3 self)
The method presented in this paper produces a user-specified number of suggested evaluation points in each step; proceeds by successive partitioning of the box (branch) and building local quadratic models (fit); combines local and global search and allows the user to determine which of the two should be emphasized; handles local search from the best point with the aid of trust regions; and allows for hidden constraints, assigning to such points a function value based on the function values of nearby feasible points.
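A one-dimensional caricature of branch (halve the box around the incumbent) and fit (quadratic through three points) might look like the sketch below; Snobfit itself is considerably more elaborate, with multiple suggested points per step, hidden constraints, and trust regions.

```python
def branch_and_fit(f, lo, hi, rounds=40):
    # branch: halve the box around the best point seen so far
    # fit: quadratic through three equally spaced points; evaluate its vertex
    a, b = lo, hi
    best_x, best_f = a, f(a)
    for _ in range(rounds):
        m = 0.5 * (a + b)
        fa, fm, fend = f(a), f(m), f(b)
        denom = fa - 2 * fm + fend
        if denom > 0:      # convex fit: the vertex is the model minimizer
            x = m + 0.25 * (b - a) * (fa - fend) / denom
            x = min(max(x, a), b)
        else:              # non-convex fit: fall back to the midpoint
            x = m
        for cand, fc in ((x, f(x)), (m, fm), (a, fa), (b, fend)):
            if fc < best_f:
                best_x, best_f = cand, fc
        if best_x <= m:    # branch into the half containing the incumbent
            b = m
        else:
            a = m
    return best_x, best_f

bx, bf = branch_and_fit(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
```

The local quadratic fit supplies fast local refinement while the box partitioning keeps some global coverage, the local/global balance the paper describes.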
Policy Search using Paired Comparisons
 Journal of Machine Learning Research
, 2002
"... Direct policy search is a practical way to solve reinforcement learning (RL) problems involving continuous state and action spaces. The goal becomes finding policy parameters that maximize a noisy objective function. The Pegasus method converts this stochastic optimization problem into a determinist ..."
Abstract

Cited by 23 (3 self)
Direct policy search is a practical way to solve reinforcement learning (RL) problems involving continuous state and action spaces. The goal becomes finding policy parameters that maximize a noisy objective function. The Pegasus method converts this stochastic optimization problem into a deterministic one, by using fixed start states and fixed random number sequences for comparing policies (Ng and Jordan, 2000). We evaluate Pegasus, and new paired comparison methods, using the mountain car problem and a difficult pursuer-evader problem. We conclude that: (i) paired tests can improve performance of optimization procedures; (ii) several methods are available to reduce the 'overfitting' effect found with Pegasus; (iii) adapting the number of trials used for each comparison yields faster learning; (iv) pairing also helps stochastic search methods such as differential evolution.
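The common-random-numbers idea behind Pegasus-style paired comparison can be sketched with a hypothetical noisy rollout whose noise depends only on the seed; the quadratic "true objective" and noise scale are invented for the example.

```python
import random

def rollout(policy_param, seed):
    # hypothetical noisy episode return: the noise depends only on the seed,
    # so two policies evaluated with the same seed share the same noise
    rng = random.Random(seed)
    return -(policy_param - 1.0) ** 2 + rng.gauss(0, 1.0)

def paired_compare(p_a, p_b, seeds):
    # common-random-numbers comparison: evaluate both policies on the same
    # seeds and average the paired differences
    diffs = [rollout(p_a, s) - rollout(p_b, s) for s in seeds]
    return sum(diffs) / len(diffs)

# with shared seeds the additive noise cancels in each pair, so a handful of
# episodes suffices to rank the two policies
d_paired = paired_compare(1.0, 0.0, list(range(20)))
```

With independent seeds the same estimate would carry the full rollout variance; fixing the random number sequences makes the comparison deterministic, which is the Pegasus conversion the abstract describes.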