Results 1–10 of 50
Convergence Properties of the Nelder–Mead Simplex Method in Low Dimensions
SIAM Journal on Optimization, 1998
"... Abstract. The Nelder–Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder–Mead algorithm. This paper pr ..."
Cited by 248 (3 self)
Abstract. The Nelder–Mead simplex algorithm, first published in 1965, is an enormously popular direct search method for multidimensional unconstrained minimization. Despite its widespread use, essentially no theoretical results have been proved explicitly for the Nelder–Mead algorithm. This paper presents convergence properties of the Nelder–Mead algorithm applied to strictly convex functions in dimensions 1 and 2. We prove convergence to a minimizer for dimension 1, and various limited convergence results for dimension 2. A counterexample of McKinnon gives a family of strictly convex functions in two dimensions and a set of initial conditions for which the Nelder–Mead algorithm converges to a nonminimizer. It is not yet known whether the Nelder–Mead method can be proved to converge to a minimizer for a more specialized class of convex functions in two dimensions.
Key words. direct search methods, Nelder–Mead simplex methods, nonderivative optimization
AMS subject classifications. 49D30, 65K05
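For readers meeting the method here for the first time, a minimal sketch of one Nelder–Mead iteration (a hypothetical illustration, not code from the paper): it uses the standard coefficients (reflection 1, expansion 2, contraction 1/2, shrink 1/2) and omits the driver loop, stopping test, and tie-breaking rules.

```python
import numpy as np

def nelder_mead_step(simplex, f):
    """One Nelder-Mead iteration. simplex: (n+1, n) float array; f: R^n -> R."""
    simplex = simplex[np.argsort([f(v) for v in simplex])]   # sort best ... worst
    xbar = simplex[:-1].mean(axis=0)        # centroid of the n best vertices
    xw = simplex[-1].copy()
    fb, fs, fw = f(simplex[0]), f(simplex[-2]), f(xw)
    xr = 2.0 * xbar - xw                    # reflection point
    fr = f(xr)
    if fr < fb:                             # try expansion
        xe = 3.0 * xbar - 2.0 * xw
        simplex[-1] = xe if f(xe) < fr else xr
    elif fr < fs:                           # accept the reflected point
        simplex[-1] = xr
    elif fr < fw:                           # outside contraction
        xc = 1.5 * xbar - 0.5 * xw
        if f(xc) <= fr:
            simplex[-1] = xc
        else:
            simplex[1:] = 0.5 * (simplex[1:] + simplex[0])   # shrink toward best
    else:                                   # inside contraction
        xc = 0.5 * (xbar + xw)
        if f(xc) < fw:
            simplex[-1] = xc
        else:
            simplex[1:] = 0.5 * (simplex[1:] + simplex[0])   # shrink toward best
    return simplex
```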
On the Convergence of Pattern Search Algorithms
"... . We introduce an abstract definition of pattern search methods for solving nonlinear unconstrained optimization problems. Our definition unifies an important collection of optimization methods that neither computenor explicitly approximate derivatives. We exploit our characterization of pattern sea ..."
Cited by 149 (14 self)
We introduce an abstract definition of pattern search methods for solving nonlinear unconstrained optimization problems. Our definition unifies an important collection of optimization methods that neither compute nor explicitly approximate derivatives. We exploit our characterization of pattern search methods to establish a global convergence theory that does not enforce a notion of sufficient decrease. Our analysis is possible because the iterates of a pattern search method lie on a scaled, translated integer lattice. This allows us to relax the classical requirements on the acceptance of the step, at the expense of stronger conditions on the form of the step, and still guarantee global convergence.
Key words. unconstrained optimization, convergence analysis, direct search methods, globalization strategies, alternating variable search, axial relaxation, local variation, coordinate search, evolutionary operation, pattern search, multidirectional search, downhill simplex search
AMS(M...
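As a concrete member of the abstract class, here is a sketch of compass (coordinate) search, assuming nothing beyond what the abstract itself describes: only simple decrease is required, and every iterate lies on a lattice scaled by the current step length, which is the structure the convergence theory exploits.

```python
import numpy as np

def compass_search(f, x, step=1.0, tol=1e-8):
    """Compass (coordinate) search: poll +/- step along each coordinate,
    accept any simple decrease, and halve the step when the poll fails."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # pattern directions
    while step > tol:
        for d in dirs:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                     # simple decrease suffices
                x, fx = trial, ft
                break
        else:
            step *= 0.5                     # unsuccessful poll: refine the mesh
    return x, fx

# Example: minimize a quadratic starting from (3, -2).
xmin, fmin = compass_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [3.0, -2.0])
```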
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
"... Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because t ..."
Cited by 126 (14 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
Direct Search Methods On Parallel Machines
SIAM Journal on Optimization, 1991
"... . This paper describes an approach to constructing derivativefree algorithms for unconstrained optimization that are easy to implement on parallel machines. A special feature of this approach is the ease with which algorithms can be generated to take advantage of any number of processors and to ada ..."
Cited by 111 (22 self)
This paper describes an approach to constructing derivative-free algorithms for unconstrained optimization that are easy to implement on parallel machines. A special feature of this approach is the ease with which algorithms can be generated to take advantage of any number of processors and to adapt to any cost ratio of communication to function evaluation. Numerical tests show speedups on two fronts. The cost of synchronization being minimal, the speedup is almost linear with the addition of more processors, i.e., given a problem and a search strategy, the decrease in execution time is proportional to the number of processors added. Even more encouraging, however, is that different search strategies, devised to take advantage of additional (or more powerful) processors, may actually lead to dramatic improvements in the performance of the basic algorithm. Thus search strategies intended for many processors actually may generate algorithms that are better even when implemented seque...
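A hypothetical sketch of the core idea, not the paper's implementation: with minimal synchronization, all poll points of an iteration can be farmed out to worker processes, so adding processors shortens each iteration without changing the sequence of iterates. The name parallel_poll and its signature are illustrative.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def parallel_poll(f, x, step, workers=4):
    """Evaluate the 2n compass-search poll points concurrently and return
    the best one. f must be a module-level (picklable) function."""
    n = len(x)
    dirs = np.vstack([np.eye(n), -np.eye(n)])
    trials = [x + step * d for d in dirs]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(f, trials))   # one evaluation per worker task
    k = int(np.argmin(values))
    return trials[k], values[k]
```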
Direct Search Methods: Once Scorned, Now Respectable
In Numerical Analysis 1995 (Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis), 1995
"... The need to optimize a function whose derivatives are unknown or nonexistent arises in many contexts, particularly in realworld applications. Various direct search methods, most notably the NelderMead `simplex' method, were proposed in the early 1960s for such problems, and have been enormously p ..."
Cited by 83 (3 self)
The need to optimize a function whose derivatives are unknown or nonexistent arises in many contexts, particularly in real-world applications. Various direct search methods, most notably the Nelder–Mead 'simplex' method, were proposed in the early 1960s for such problems, and have been enormously popular with practitioners ever since. Nonetheless, for more than twenty years these methods were typically dismissed or ignored in the mainstream optimization literature, primarily because of the lack of rigorous convergence results. Since 1989, however, direct search methods have been rejuvenated and made respectable. This paper summarizes the history of direct search methods, with special emphasis on the Nelder–Mead method, and describes recent work in this area. This paper is based on a plenary talk given at the Biennial Dundee Conference on Numerical Analysis, Dundee, Scotland, 1995.
1. Introduction. Unconstrained optimization, the problem of minimizing a nonlinear function f(x) for x ∈ ...
Pattern Search Algorithms for Bound Constrained Minimization
ICASE, NASA Langley Research Center, 1996
"... We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to devel ..."
Cited by 70 (16 self)
We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
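As an illustration of how a pattern can interact with bound constraints (a sketch under the simplifying assumption that coordinate directions are used, since they conform to the boundary of the box): infeasible poll points are skipped, and only feasible simple decrease is accepted.

```python
import numpy as np

def box_compass_search(f, x, lo, hi, step=1.0, tol=1e-8):
    """Compass search for bound constraints lo <= x <= hi. Coordinate
    directions conform to the box boundary, so polling only feasible
    points is enough to retain convergence."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    fx = f(x)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    while step > tol:
        for d in dirs:
            trial = x + step * d
            if np.all((trial >= lo) & (trial <= hi)):   # skip infeasible polls
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        else:
            step *= 0.5                                 # refine the mesh
    return x, fx
```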
Direct search methods: then and now
2000
"... We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivativefree algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized ..."
Cited by 66 (4 self)
We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivative-free algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized by the absence of the construction of a model of the objective. We then consider a number of the classical direct search methods and discuss what research in the intervening years has uncovered about these algorithms. In particular, while the original direct search methods were consciously based on straightforward heuristics, more recent analysis has shown that in most — but not all — cases these heuristics actually ...
On The Convergence Of The Multidirectional Search Algorithm
1991
"... . This paper presents the convergence analysis for the multidirectional search algorithm, a direct search method for unconstrained minimization. The analysis follows the classic lines of proofs of convergence for gradientrelated methods. The novelty of the argument lies in the fact that explicit ca ..."
Cited by 61 (9 self)
This paper presents the convergence analysis for the multidirectional search algorithm, a direct search method for unconstrained minimization. The analysis follows the classic lines of proofs of convergence for gradient-related methods. The novelty of the argument lies in the fact that explicit calculation of the gradient is unnecessary, although it is assumed that the function is continuously differentiable over some subset of the domain. The proof can be extended to treat most nonsmooth cases of interest; the argument breaks down only at points where the derivative exists but is not continuous. Finally, it is shown how a general convergence theory can be developed for an entire class of direct search methods, which includes such methods as the factorial design algorithm and the pattern search algorithm, that share a key feature of the multidirectional search algorithm.
Key words. unconstrained optimization, convergence analysis, direct search methods, parallel optimization, mult...
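For orientation, a sketch of one multidirectional search iteration as commonly described (the coefficient values are conventional choices, not taken from this paper): the whole simplex is reflected through its best vertex, an expanded simplex is tried if the reflection improves on the best value, and otherwise the simplex contracts toward the best vertex. Function values are recomputed rather than cached, purely for brevity.

```python
import numpy as np

def mds_step(simplex, f, mu=2.0, theta=0.5):
    """One multidirectional search iteration on an (n+1, n) simplex."""
    simplex = simplex[np.argsort([f(v) for v in simplex])]   # best vertex first
    best, fbest = simplex[0], f(simplex[0])
    refl = 2.0 * best - simplex[1:]                  # reflect all other vertices
    if min(f(v) for v in refl) < fbest:              # reflection succeeded
        expd = (1.0 + mu) * best - mu * simplex[1:]  # expansion trial
        if min(f(v) for v in expd) < min(f(v) for v in refl):
            simplex[1:] = expd
        else:
            simplex[1:] = refl
    else:
        simplex[1:] = best + theta * (simplex[1:] - best)    # contraction
    return simplex
```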
Convergence of the Nelder–Mead simplex method to a nonstationary point
SIAM Journal on Optimization, 1996
"... . This paper analyses the behaviour of the NelderMead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continu ..."
Cited by 53 (0 self)
This paper analyses the behaviour of the Nelder–Mead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continuous derivatives. In all the examples the method repeatedly applies the inside contraction step with the best vertex remaining fixed. The simplices tend to a straight line which is orthogonal to the steepest descent direction. It is shown that this behaviour cannot occur for functions with more than three continuous derivatives. The stability of the examples is analysed.
Key words. Nelder–Mead method, direct search, simplex, unconstrained minimization
AMS subject classifications. 65K05
1. Introduction. Direct search methods are very widely used in chemical engineering, chemistry and medicine. They are a class of optimization methods which are easy to program, do not require derivatives and a...
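The family is simple enough to state. The sketch below encodes the function and starting simplex as recalled from McKinnon's paper; treat the exact parameter values as assumptions to be checked against the paper. With tau = 2, theta = 6, phi = 60 the function is strictly convex with continuous first derivatives and its minimizer is (0, -1/2), yet Nelder–Mead started from the given simplex converges to the nonstationary point (0, 0) through repeated inside contractions.

```python
import numpy as np

def mckinnon(x, tau=2.0, theta=6.0, phi=60.0):
    """McKinnon's counterexample family (parameter values are assumptions
    recalled from the paper, not copied from it)."""
    x1, x2 = x
    if x1 <= 0.0:
        return theta * phi * abs(x1) ** tau + x2 + x2 ** 2
    return theta * x1 ** tau + x2 + x2 ** 2

# Starting simplex reported to trigger repeated inside contractions:
lam1 = (1.0 + np.sqrt(33.0)) / 8.0     # ~  0.84
lam2 = (1.0 - np.sqrt(33.0)) / 8.0     # ~ -0.59
simplex0 = np.array([[0.0, 0.0], [1.0, 1.0], [lam1, lam2]])
```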
The Accuracy of Floating Point Summation
1993
"... The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four ofthe methods are ..."
Cited by 40 (0 self)
The usual recursive summation technique is just one of several ways of computing the sum of n floating point numbers. Five summation methods and their variations are analyzed here. The accuracy of the methods is compared using rounding error analysis and numerical experiments. Four of the methods are shown to be special cases of a general class of methods, and an error analysis is given for this class. No one method is uniformly more accurate than the others, but some guidelines are given on the choice of method in particular cases.
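To make the contrast concrete, here is a sketch of two methods of the kind analyzed: plain recursive summation, whose worst-case error bound grows with n, and compensated (Kahan) summation, whose bound is essentially independent of n.

```python
def recursive_sum(xs):
    """Left-to-right recursive summation; the worst-case relative error
    grows roughly like (n - 1) * u for unit roundoff u."""
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    """Compensated (Kahan) summation: the correction c recovers the
    low-order bits lost in each addition."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c          # apply the running correction
        t = s + y          # low-order bits of y are lost here ...
        c = (t - s) - y    # ... and recovered here
        s = t
    return s

# E.g., summing [0.1] * 10**7: recursive_sum drifts visibly from 1e6,
# while kahan_sum matches the exact sum of the stored doubles.
```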