Results 1–9 of 9
Convergence Properties of the Nelder–Mead Simplex Method in Low Dimensions
 SIAM Journal on Optimization, 1998
"... Abstract. The Nelder–Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder–Mead algorithm. This paper pr ..."
Abstract

Cited by 248 (3 self)
 Add to MetaCart
Abstract. The Nelder–Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder–Mead algorithm. This paper presents convergence properties of the Nelder–Mead algorithm applied to strictly convex functions in dimensions 1 and 2. We prove convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2. A counterexample of McKinnon gives a family of strictly convex functions in two dimensions and a set of initial conditions for which the Nelder–Mead algorithm converges to a nonminimizer. It is not yet known whether the Nelder–Mead method can be proved to converge to a minimizer for a more specialized class of convex functions in two dimensions. Key words. direct search methods, Nelder–Mead simplex methods, nonderivative optimization AMS subject classifications. 49D30, 65K05
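The algorithm studied in this paper can be summarized concretely. The sketch below is an illustrative, minimal implementation of the standard Nelder–Mead iteration (reflection, expansion, contraction, shrink) with the usual coefficients rho=1, chi=2, gamma=1/2, sigma=1/2; it is not the authors' reference code, and the helper names are our own.

```python
# Minimal Nelder-Mead sketch with the standard coefficients
# (reflection rho=1, expansion chi=2, contraction gamma=0.5, shrink sigma=0.5).
# Illustrative only; function values are recomputed rather than cached.

def nelder_mead(f, simplex, max_iter=500, tol=1e-10):
    """Minimize f over R^n starting from a list of n+1 vertices."""
    n = len(simplex) - 1
    for _ in range(max_iter):
        simplex.sort(key=f)                  # order vertices best to worst
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # centroid of all vertices except the worst
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        def point(coef):                     # centroid + coef*(centroid - worst)
            return [centroid[i] + coef * (centroid[i] - worst[i]) for i in range(n)]
        xr = point(1.0)                      # reflection
        if f(xr) < f(best):
            xe = point(2.0)                  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr                 # accept the reflected point
        else:
            # outside contraction if reflection improved on the worst vertex,
            # inside contraction otherwise
            xc = point(0.5) if f(xr) < f(worst) else point(-0.5)
            if f(xc) < min(f(xr), f(worst)):
                simplex[-1] = xc
            else:                            # shrink all vertices toward the best
                simplex = [best] + [[(b + v) / 2 for b, v in zip(best, x)]
                                    for x in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Strictly convex quadratic in dimension 2 with unique minimizer (1, -2),
# the dimension-2 setting treated by the paper.
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
x = nelder_mead(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

On strictly convex functions such as this quadratic, the iteration drives the simplex onto the minimizer; the McKinnon counterexample cited in the abstract shows this need not happen for every strictly convex function in two dimensions.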
Optimization by direct search: New perspectives on some classical and modern methods
 SIAM Review, 2003
"... Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because t ..."
Abstract

Cited by 126 (14 self)
 Add to MetaCart
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
Direct Search Methods: Once Scorned, Now Respectable
 In Numerical Analysis 1995 (Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis), 1995
"... The need to optimize a function whose derivatives are unknown or nonexistent arises in many contexts, particularly in realworld applications. Various direct search methods, most notably the NelderMead `simplex' method, were proposed in the early 1960s for such problems, and have been enormously p ..."
Abstract

Cited by 83 (3 self)
 Add to MetaCart
The need to optimize a function whose derivatives are unknown or nonexistent arises in many contexts, particularly in real-world applications. Various direct search methods, most notably the Nelder–Mead "simplex" method, were proposed in the early 1960s for such problems, and have been enormously popular with practitioners ever since. Nonetheless, for more than twenty years these methods were typically dismissed or ignored in the mainstream optimization literature, primarily because of the lack of rigorous convergence results. Since 1989, however, direct search methods have been rejuvenated and made respectable. This paper summarizes the history of direct search methods, with special emphasis on the Nelder–Mead method, and describes recent work in this area. This paper is based on a plenary talk given at the Biennial Dundee Conference on Numerical Analysis, Dundee, Scotland, 1995. 1. Introduction. Unconstrained optimization, the problem of minimizing a nonlinear function f(x) for x ∈ ...
Convergence of the Nelder–Mead simplex method to a nonstationary point
 SIAM Journal on Optimization, 1996
"... . This paper analyses the behaviour of the NelderMead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continu ..."
Abstract

Cited by 53 (0 self)
 Add to MetaCart
This paper analyses the behaviour of the Nelder–Mead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continuous derivatives. In all the examples the method repeatedly applies the inside contraction step with the best vertex remaining fixed. The simplices tend to a straight line which is orthogonal to the steepest descent direction. It is shown that this behaviour cannot occur for functions with more than three continuous derivatives. The stability of the examples is analysed. Key words. Nelder–Mead method, direct search, simplex, unconstrained minimization AMS subject classifications. 65K05 1. Introduction. Direct search methods are very widely used in chemical engineering, chemistry and medicine. They are a class of optimization methods which are easy to program, do not require derivatives and a...
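The counterexample family described in this abstract can be written down directly. The sketch below uses the commonly quoted parameter values (tau=2, theta=6, phi=60), which are an assumption on our part rather than taken from this listing; the point is only that the function is smooth away from the y-axis, its minimizer sits at (0, -1/2), and the origin, toward which the simplices collapse, is nonstationary.

```python
# A member of McKinnon's family of counterexample functions (sketch).
# Parameter values tau=2, theta=6, phi=60 are the commonly quoted choice
# and are assumed here, not taken from this listing.
TAU, THETA, PHI = 2, 6, 60

def f(x, y):
    """Two-variable function; continuous first derivatives for tau = 2."""
    if x <= 0:
        return THETA * PHI * (-x) ** TAU + y + y * y
    return THETA * x ** TAU + y + y * y

# The minimizer is (0, -1/2) with value -1/4, yet from a suitable initial
# simplex Nelder-Mead repeatedly inside-contracts toward the origin, where
# the gradient is (0, 1) != 0, i.e. a nonstationary point.
print(f(0, -0.5), f(0, 0))   # -0.25 0
```

Evaluating along the y-axis shows the steepest descent direction at the origin is (0, -1), which, as the abstract notes, is orthogonal to the line onto which the simplices degenerate.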
Detection and Remediation of Stagnation in the Nelder–Mead Algorithm Using a Sufficient Decrease Condition
 SIAM Journal on Optimization, 1997
"... The NelderMead algorithm can stagnate and converge to a nonoptimal point, even for very simple problems. In this note we propose a test for sufficient decrease which, if passed for the entire iteration, will guarantee convergence of the NelderMead iteration to a stationary point if the objective ..."
Abstract

Cited by 31 (1 self)
 Add to MetaCart
The Nelder–Mead algorithm can stagnate and converge to a nonoptimal point, even for very simple problems. In this note we propose a test for sufficient decrease which, if passed for the entire iteration, will guarantee convergence of the Nelder–Mead iteration to a stationary point if the objective function is smooth. Failure of this condition is an indicator of potential stagnation. As a remedy we propose a new step, which we call an oriented restart, which reinitializes the simplex to a smaller one with orthogonal edges which contains an approximate steepest descent step from the current best point. We also give results that apply when the objective function is a low-amplitude perturbation of a smooth function. We illustrate our results with some numerical examples.
PDS: Direct Search Methods for Unconstrained Optimization on Either Sequential or Parallel Machines
 Rice University, Department of …, 1992
"... . PDS is a collection of Fortran subroutines for solving unconstrained nonlinear optimization problems using direct search methods. The software is written so that execution on sequential machines is straightforward while execution on Intel distributed memory machines, such as the iPSC/2, the iPSC/8 ..."
Abstract

Cited by 24 (7 self)
 Add to MetaCart
PDS is a collection of Fortran subroutines for solving unconstrained nonlinear optimization problems using direct search methods. The software is written so that execution on sequential machines is straightforward, while execution on Intel distributed memory machines, such as the iPSC/2, the iPSC/860 or the Touchstone Delta, can be accomplished simply by including a few well-defined routines containing calls to Intel-specific Fortran libraries. Those interested in using the methods on other distributed memory machines, even something as basic as a network of workstations or personal computers, need only modify these few subroutines to handle the global communication requirements. Furthermore, since the parallelism is clearly defined at the "do-loop" level, it is a simple matter to insert compiler directives that allow for execution on shared memory parallel machines. Included here is an example of such directives, contained in comment statements, for execution on a Sequent Symmetry S8...
Fortified-Descent Simplicial Search Method: A General Approach
 SIAM Journal on Optimization, 1995
"... We propose a new simplexbased direct search method for unconstrained minimization of a realvalued function f of n variables. As in other methods of this kind, the intent is to iteratively improve an ndimensional simplex through certain reflection/expansion/contraction steps. The method has three n ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
We propose a new simplex-based direct search method for unconstrained minimization of a real-valued function f of n variables. As in other methods of this kind, the intent is to iteratively improve an n-dimensional simplex through certain reflection/expansion/contraction steps. The method has three novel features. First, a user-chosen integer m_k specifies the number of "good" vertices to be retained in constructing the initial trial simplices (reflected, then either expanded or contracted) at iteration k. Second, a trial simplex is accepted only when it satisfies the criteria of fortified descent, which are stronger than the criterion of strict descent used in most direct search methods. Third, the number of additional function evaluations needed to check a trial reflected/expanded simplex for fortified descent can be controlled. If one of the initial trial simplices satisfies the fortified descent criteria, it is accepted as the new simplex; otherwise, the simplex is shrunk a fracti...
Nelder, Mead, and the Other Simplex Method
 Documenta Mathematica, 2012
"... optimization, nonderivative optimization In the mid1960s, two English statisticians working at the National Vegetable Research Station invented the Nelder–Mead “simplex ” direct search method. The method emerged at a propitious time, when there was great and growing interest in computer solution o ..."
Abstract
 Add to MetaCart
Key words. optimization, nonderivative optimization

In the mid-1960s, two English statisticians working at the National Vegetable Research Station invented the Nelder–Mead “simplex” direct search method. The method emerged at a propitious time, when there was great and growing interest in computer solution of complex nonlinear real-world optimization problems. Because obtaining first derivatives of the function f to be optimized was frequently impossible, the strong preference of most practitioners was for a “direct search” method that required only the values of f; the new Nelder–Mead method fit the bill perfectly. Since then, the Nelder–Mead method has consistently been one of the most used and cited methods for unconstrained optimization. We are fortunate indeed that the late John Nelder has left us a detailed picture of the method’s inspiration and development [11, 14]. For Nelder, the starting point was a 1963 conference talk by William Spendley of Imperial Chemical Industries about a “simplex” method recently proposed by Spendley, Hext, and Himsworth for response surface exploration [15]. Despite its name, this method is not related to George Dantzig’s simplex method for linear programming, which dates from 1947. Nonetheless, the name is entirely appropriate because the Spendley, Hext, and Himsworth method is defined by a simplex: the method constructs a pattern of n + 1 points in dimension n, which moves across the surface to be explored, sometimes changing size, but always retaining the same shape. Inspired by Spendley’s talk, Nelder had what he describes as “one useful new idea”: while defining each iteration via a simplex, add the crucial ingredient that the shape of the simplex should “adapt itself to the local landscape” [12].

During a sequence of lively discussions with his colleague Roger Mead, where “each of us [was] able to try out the ideas of the previous evening on the other the following morning”, they developed a method in which the simplex could “elongate itself to move down long gentle slopes”, or “contract itself on to the final minimum” [11]. And, as they say, the rest is history.
Doctorate of the Université Paris-Est, Specialty: Sciences of the Universe and Environment
, 2013
"... presented to obtain the degree of ..."