Results 1–6 of 6
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
Cited by 129 (13 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
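The "broad class of methods" this survey unifies can be illustrated with its simplest member, compass (coordinate) search. This is a minimal sketch under assumed defaults, not the paper's framework: the polling order, step-halving factor, and tolerance are illustrative choices.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass (coordinate) search: poll the 2n coordinate
    directions; accept any simple decrease, halve the step when no
    direction improves. No derivatives are ever evaluated."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            ft = f(trial)
            if ft < fx:          # simple decrease is enough to move
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5          # contract the pattern
    return x, fx

# Example: minimize a smooth quadratic without derivatives
xmin, fmin = compass_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                            [0.0, 0.0])
```

The convergence theory surveyed in the paper covers methods of exactly this shape: a fixed pattern of poll directions plus a step-size update rule.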
Detection and Remediation of Stagnation in the Nelder–Mead Algorithm Using a Sufficient Decrease Condition
SIAM J. Optim., 1997
Cited by 32 (1 self)
The Nelder–Mead algorithm can stagnate and converge to a nonoptimal point, even for very simple problems. In this note we propose a test for sufficient decrease which, if passed for the entire iteration, will guarantee convergence of the Nelder–Mead iteration to a stationary point if the objective function is smooth. Failure of this condition is an indicator of potential stagnation. As a remedy we propose a new step, called an oriented restart, which reinitializes the simplex to a smaller one with orthogonal edges containing an approximate steepest descent step from the current best point. We also give results that apply when the objective function is a low-amplitude perturbation of a smooth function. We illustrate our results with some numerical examples.
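The two ingredients this abstract describes, a sufficient-decrease test on the vertex values and an oriented restart, can be sketched as follows. This is a hedged illustration, not the paper's implementation: the constants `alpha` and `shrink` and the edge-length choice are illustrative, and the decrease test is stated on the average vertex value.

```python
import numpy as np

def simplex_gradient(V, fV):
    """Simplex gradient from vertices V (shape (n+1, n), rows are
    vertices, best first) and values fV: solves the linear system
    (v_j - v_0)^T g = f(v_j) - f(v_0) for j = 1..n."""
    return np.linalg.solve(V[1:] - V[0], fV[1:] - fV[0])

def passes_sufficient_decrease(V_old, fV_old, fV_new, alpha=1e-4):
    """Sketch of the test: the average vertex value must fall by at
    least alpha * ||simplex gradient||^2 over the whole iteration."""
    g = simplex_gradient(V_old, fV_old)
    return fV_new.mean() < fV_old.mean() - alpha * (g @ g)

def oriented_restart(V, fV, f, shrink=0.5):
    """Sketch of an oriented restart: rebuild a small right-angled
    simplex at the best vertex, with each coordinate edge signed to
    step opposite the simplex-gradient component (downhill)."""
    g = simplex_gradient(V, fV)
    best = V[np.argmin(fV)]
    h = shrink * np.linalg.norm(V[1] - V[0])   # illustrative edge length
    signs = np.where(g > 0, -1.0, 1.0)          # step against the gradient
    newV = np.vstack([best, best + h * np.diag(signs)])
    return newV, np.array([f(v) for v in newV])
```

On a linear function the simplex gradient is exact, so the restarted simplex's new vertices all lie downhill of the old best point.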
The Simplex Gradient and Noisy Optimization Problems
in Computational Methods in Optimal Design and Control, 1998
Cited by 19 (4 self)
In this paper we consider objective functions that are perturbations of simple, smooth functions. The surface on the left in Figure 1, taken from [24], and the graph on the right illustrate this type of problem. [Figure 1: Optimization Landscapes] The perturbations may be results of discontinuities or nonsmooth effects in the underlying models, randomness in the function evaluation, or experimental or measurement errors. Conventional gradient-based methods will be trapped in local minima even if the noise is smooth. Many classes of methods for noisy optimization problems are based on function information computed on sequences of simplices. The Nelder–Mead [18], multidirectional search [8], [21], and implicit filtering [12] methods are three examples. The performance of such methods can be explained in terms of the difference approximation of the gradient that is implicit in the function evaluations they perform.
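The "difference approximation of the gradient" implicit in simplex-based methods can be made concrete with a simplex gradient computed on a noisy function. A minimal sketch, assuming an additive low-amplitude noise model; the test function, edge length, and noise scale are illustrative, not taken from the paper.

```python
import numpy as np

def simplex_gradient(V, fV):
    """Simplex gradient: gradient of the linear model interpolating f
    at the simplex vertices. V is (n+1, n) with vertices as rows, fV
    the values; solves (v_j - v_0)^T g = f(v_j) - f(v_0)."""
    return np.linalg.solve(V[1:] - V[0], fV[1:] - fV[0])

# Noisy objective: a smooth quadratic trend plus a small random perturbation
rng = np.random.default_rng(0)
f = lambda x: x @ x + 1e-3 * rng.standard_normal()

x0 = np.array([1.0, -2.0])
h = 0.1                                     # simplex edge length
V = np.vstack([x0, x0 + h * np.eye(2)])     # right-angled simplex at x0
g = simplex_gradient(V, np.array([f(v) for v in V]))
# g approximates the true gradient 2*x0 = [2, -4] despite the noise
```

Because the edge length h is large relative to the noise amplitude, the differencing averages out the perturbation instead of amplifying it; this trade-off between truncation error and noise is the point the abstract is making.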
Preface
Copyright ©1999 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
SIAM Review, Vol. 45, No. 3, pp. 385–482, © 2003 Society for Industrial and Applied Mathematics