Results 1–10 of 30
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
"... Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because t ..."
Abstract

Cited by 126 (14 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
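As a concrete illustration of a derivative-free direct search, the sketch below implements compass (coordinate) search, one of the simplest members of the class surveyed here; the step-halving rule, stopping tolerance, and evaluation budget are illustrative choices, not the paper's unifying framework.

    import numpy as np

    def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10000):
        # Poll +/- step along each coordinate axis; accept any improving
        # point, otherwise halve the step (no derivatives are ever used).
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        evals = 1
        n = x.size
        directions = np.vstack([np.eye(n), -np.eye(n)])
        while step > tol and evals < max_evals:
            improved = False
            for d in directions:
                trial = x + step * d
                ftrial = f(trial)
                evals += 1
                if ftrial < fx:
                    x, fx, improved = trial, ftrial, True
                    break
            if not improved:
                step *= 0.5   # unsuccessful poll: shrink the pattern
        return x, fx

    # Example: an anisotropic quadratic, minimized without derivatives.
    xbest, fbest = compass_search(lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2,
                                  [0.0, 0.0])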
Snobfit – Stable Noisy Optimization by Branch and Fit
"... this paper produces a userspeci ed number of suggested evaluation points in each step; proceeds by successive partitioning of the box (branch) and building local quadratic models ( t); combines local and global search and allows the user to determine which of both should be emphasized; h ..."
Abstract

Cited by 25 (5 self)
this paper produces a user-specified number of suggested evaluation points in each step; proceeds by successive partitioning of the box (branch) and building local quadratic models (fit); combines local and global search and allows the user to determine which of the two should be emphasized; handles local search from the best point with the aid of trust regions; allows for hidden constraints and assigns to such points a function value based on the function values of nearby feasible points.
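The "fit" half of this branch-and-fit idea can be sketched as an ordinary least-squares fit of a local quadratic model to points already evaluated near a chosen center; the numpy-based solve below is a generic illustration under that assumption, not SNOBFIT's actual model building, branching, or safeguards.

    import numpy as np
    from itertools import combinations_with_replacement

    def fit_local_quadratic(points, fvals, center):
        # Least-squares fit of  m(x) = c + g.(x - center) + 0.5 (x - center)' H (x - center)
        # to nearby evaluated points (a determined fit needs (n+1)(n+2)/2 points;
        # with fewer, lstsq returns a minimum-norm fit).
        D = np.asarray(points, float) - np.asarray(center, float)
        n = D.shape[1]
        pairs = list(combinations_with_replacement(range(n), 2))
        cols = [np.ones(len(D))]
        cols += [D[:, i] for i in range(n)]
        cols += [(0.5 if i == j else 1.0) * D[:, i] * D[:, j] for i, j in pairs]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, np.asarray(fvals, float), rcond=None)
        c, g = coef[0], coef[1:1 + n]
        H = np.zeros((n, n))
        for k, (i, j) in enumerate(pairs):
            H[i, j] = H[j, i] = coef[1 + n + k]
        return c, g, H

Minimizing such a model inside a small box or trust region around the best point would then yield one of the suggested evaluation points.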
On Trust Region Methods for Unconstrained Minimization Without Derivatives
2002
"... We consider some algorithms for unconstrained minimization without derivatives that form linear or quadratic models by interpolation to values of the objective function. Then a new vector of variables is calculated by minimizing the current model within a trust region. Techniques are described for a ..."
Abstract

Cited by 9 (1 self)
We consider some algorithms for unconstrained minimization without derivatives that form linear or quadratic models by interpolation to values of the objective function. Then a new vector of variables is calculated by minimizing the current model within a trust region. Techniques are described for adjusting the trust region radius, and for choosing positions of the interpolation points that maintain not only nonsingularity of the interpolation equations but also the adequacy of the model. Particular attention is given to quadratic models with diagonal second derivative matrices, because numerical experiments show that they are often more efficient than full quadratic models for general objective functions. Finally, some recent research on the updating of full quadratic models is described briefly, using fewer interpolation equations than before. The resultant freedom is taken up by minimizing the Frobenius norm of the change to the second derivative matrix of the model. A preliminary version of this method provides some very promising numerical results.
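The diagonal-Hessian models mentioned above can be interpolated from 2n+1 points placed on the coordinate axes around the current iterate; the sketch below shows that construction together with the usual trust-region acceptance ratio. The symmetric point placement and the helper names are illustrative assumptions, not Powell's actual point-management scheme.

    import numpy as np

    def diagonal_quadratic_model(f, x, h):
        # Interpolate f at x and at x +/- h*e_i (2n+1 points in total); the
        # matching quadratic model has gradient g and a diagonal Hessian d.
        x = np.asarray(x, float)
        n = x.size
        fx = f(x)
        g = np.zeros(n)
        d = np.zeros(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            fp, fm = f(x + e), f(x - e)
            g[i] = (fp - fm) / (2.0 * h)        # linear coefficient along axis i
            d[i] = (fp - 2.0 * fx + fm) / h**2  # diagonal curvature along axis i
        return fx, g, d

    def acceptance_ratio(f, x, fx, step, predicted_reduction):
        # rho = actual reduction / reduction predicted by the model;
        # trust-region radius updates are driven by this number.
        return (fx - f(np.asarray(x, float) + step)) / predicted_reduction

In a full method, a ratio well below 1 (say under 0.1) would typically shrink the trust-region radius, while a ratio near 1 after a full-length step allows it to grow.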
MNH: A Derivative-Free Optimization Algorithm Using Minimal Norm Hessians
2008
"... We introduce MNH, a new algorithm for unconstrained optimization when derivatives are unavailable, primarily targeting applications that require running computationally expensive deterministic simulations. MNH relies on a trustregion framework with an underdetermined quadratic model that interpolat ..."
Abstract

Cited by 7 (0 self)
We introduce MNH, a new algorithm for unconstrained optimization when derivatives are unavailable, primarily targeting applications that require running computationally expensive deterministic simulations. MNH relies on a trust-region framework with an underdetermined quadratic model that interpolates the function at a set of data points. We show how to construct this interpolation set to yield computationally stable parameters for the model and, in doing so, obtain an algorithm which converges to first-order critical points. Preliminary results are encouraging and show that MNH makes effective use of the points evaluated in the course of the optimization.
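The underdetermined, minimal-norm model referred to in the title can be illustrated by the standard minimum-Frobenius-norm interpolation construction: among all quadratics matching the data, take the one whose Hessian has the smallest Frobenius norm, obtained from a small KKT linear system. The dense solve below has none of the conditioning safeguards that make MNH's construction computationally stable; it is only a sketch of the underlying idea.

    import numpy as np

    def min_frobenius_quadratic(points, fvals, center):
        # Among all quadratics  c + g.y + 0.5 y' H y  (y = x - center) that
        # interpolate the data, return the one minimizing ||H||_F via the
        # KKT system; H comes out as a weighted sum of outer products y_i y_i'.
        Y = np.asarray(points, float) - np.asarray(center, float)
        f = np.asarray(fvals, float)
        m, n = Y.shape
        A = 0.5 * (Y @ Y.T) ** 2                    # A_ij = 0.5 (y_i . y_j)^2
        P = np.hstack([np.ones((m, 1)), Y])         # rows [1, y_i']
        K = np.block([[A, P], [P.T, np.zeros((n + 1, n + 1))]])
        rhs = np.concatenate([f, np.zeros(n + 1)])
        sol = np.linalg.solve(K, rhs)               # no poisedness/conditioning checks here
        lam, c, g = sol[:m], sol[m], sol[m + 1:]
        H = (Y.T * lam) @ Y                         # H = sum_i lam_i y_i y_i'
        return c, g, H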
High-level Approach to Modeling of Observed System Behavior
"... Current computer systems and communication networks tend to be highly complex, and they typically hide their internal structure from their users. Thus, for selected aspects of capacity planning, overload control and related applications, it is useful to have a method allowing one to find good and re ..."
Abstract

Cited by 6 (5 self)
Current computer systems and communication networks tend to be highly complex, and they typically hide their internal structure from their users. Thus, for selected aspects of capacity planning, overload control and related applications, it is useful to have a method allowing one to find good and relatively simple approximations of the observed system behavior. This paper investigates one such approach, in which we attempt to represent the latter by adequately selecting the parameters of a set of queueing models. We identify a limited number of queueing models that we use as “Building Blocks” (BBs) in our procedure. The selected BBs allow us to accurately approximate the measured behavior of a range of different systems. We propose an approach for selecting and combining suitable BBs, as well as for their calibration. Finally, we validate our methodology and discuss the potential and the limitations of the proposed approach.
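To give a flavor of the calibration step with a single, hypothetical building block, the sketch below fits the M/M/1 mean response time R(λ) = 1/(μ − λ) to a few invented (arrival rate, response time) measurements using scipy; the paper's actual BB catalogue, selection procedure, and validation are of course richer than this.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented measurements: arrival rate (requests/s) vs. mean response time (s).
    arrival = np.array([10.0, 20.0, 30.0, 40.0, 45.0])
    response = np.array([0.025, 0.034, 0.052, 0.105, 0.210])

    def mm1_response(lam, mu):
        # M/M/1 mean response time, valid only while lam < mu.
        return 1.0 / (mu - lam)

    # Calibrate the building block's single parameter (service rate mu),
    # constrained to stay above the largest observed arrival rate.
    (mu_hat,), _ = curve_fit(mm1_response, arrival, response, p0=[60.0],
                             bounds=(arrival.max() + 1e-3, np.inf))
    print("fitted service rate: %.1f requests/s" % mu_hat)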
ORBIT: Optimization by radial basis function interpolation in trust-regions
SIAM Journal on Scientific Computing, 2008
"... Abstract. We present a new derivativefree algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trustregion framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear ..."
Abstract

Cited by 6 (1 self)
Abstract. We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also very encouraging and support ORBIT’s effectiveness on black-box functions for which no special mathematical structure is known or available.
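The kind of interpolant ORBIT builds can be sketched as a cubic RBF with a linear polynomial tail, fitted through the usual augmented linear system; conditional positive definiteness of the cubic kernel is what makes this system solvable once the points are poised for linear interpolation. The code below is a generic construction, leaving out ORBIT's point selection, conditioning checks, and trust-region machinery.

    import numpy as np
    from scipy.spatial.distance import cdist

    def fit_cubic_rbf(points, fvals):
        # Interpolant  s(x) = sum_i c_i ||x - x_i||^3 + a + b.x : the cubic
        # kernel is conditionally positive definite of order 2, so a linear
        # polynomial tail makes the augmented system nonsingular whenever
        # the points are poised for linear interpolation.
        X = np.asarray(points, float)
        f = np.asarray(fvals, float)
        m, n = X.shape
        Phi = cdist(X, X) ** 3
        P = np.hstack([np.ones((m, 1)), X])
        K = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
        sol = np.linalg.solve(K, np.concatenate([f, np.zeros(n + 1)]))
        coef, tail = sol[:m], sol[m:]

        def s(x):
            r = np.linalg.norm(X - np.asarray(x, float), axis=1)
            return coef @ r**3 + tail[0] + tail[1:] @ np.asarray(x, float)

        return s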
DCMA, yet another derandomization in Covariance-Matrix-Adaptation
GECCO'07, 2007
"... In a preliminary part of this paper, we analyze the necessity of randomness in evolution strategies. We conclude to the necessity of ”continuous”randomness, but with a much more limited use of randomness than what is commonly used in evolution strategies. We then apply these results to CMAES, a fa ..."
Abstract

Cited by 5 (4 self)
In a preliminary part of this paper, we analyze the necessity of randomness in evolution strategies. We conclude that “continuous” randomness is necessary, but with a much more limited use of randomness than is common in evolution strategies. We then apply these results to CMA-ES, a famous evolution strategy already based on the idea of derandomization, which uses independent random Gaussian mutations. Here we replace these independent random Gaussian mutations by a quasi-random sample. The modification is very easy to make, the modified algorithm is computationally more efficient, and its convergence is faster in terms of the number of iterates required for a given precision.
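The substitution described here, replacing independent Gaussian mutations by a quasi-random sample, can be sketched by pushing scrambled Sobol points through the inverse normal CDF; scipy's qmc module is used purely for illustration and is an assumed choice of generator, not the authors' implementation.

    import numpy as np
    from scipy.stats import norm, qmc

    def quasi_gaussian_mutations(n_offspring, dim, seed=0):
        # Low-discrepancy stand-in for i.i.d. N(0, I) mutation vectors:
        # scrambled Sobol points in (0,1)^dim mapped through the inverse
        # normal CDF (illustrative generator choice, not the paper's).
        sobol = qmc.Sobol(d=dim, scramble=True, seed=seed)
        u = sobol.random(n_offspring)
        u = np.clip(u, 1e-12, 1.0 - 1e-12)   # keep ppf away from 0 and 1
        return norm.ppf(u)

    # In a CMA-ES-style loop these vectors would replace np.random.randn(lam, dim)
    # before scaling by the covariance factor:  x_k = mean + sigma * (B @ D @ z_k).
    z = quasi_gaussian_mutations(8, 5)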
Lyapunov methods in nonsmooth optimization, Part I: Quasi-Newton algorithms for Lipschitz, regular functions
"... A recent converse Lyapunov theorem for differential inclusions is used to generate a large class of algorithms for nonsmooth optimization. Particular attention is given to quasiNewton algorithms for the minimization of locally Lipschitz, regular functions. 1 Introduction 1.1 Background The focus ..."
Abstract

Cited by 3 (1 self)
A recent converse Lyapunov theorem for differential inclusions is used to generate a large class of algorithms for nonsmooth optimization. Particular attention is given to quasi-Newton algorithms for the minimization of locally Lipschitz, regular functions. The focus of this paper is unconstrained nonlinear programming for locally Lipschitz functions. We address the task of designing numerical algorithms that asymptotically determine a point that globally minimizes a locally Lipschitz function defined on Euclidean space. For continuously differentiable functions, this problem and its solutions have reached a very mature state, which is summarized in many excellent textbooks (see, for example, [2], [11]). The nonsmooth optimization problem is more recent. Serious attention was first given to it in the 1960's and, over the years, many authors have addressed the problem by imposing various extra assumptions, beyond Lipschitz continuity, on the function to be...