Results 1–8 of 8
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
Abstract

Cited by 126 (14 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
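The pattern underlying many of the methods in this class can be seen in compass (coordinate) search, one of the simplest direct search algorithms: poll the 2n coordinate directions, move on simple decrease, and halve the step when no poll point improves. A minimal sketch assuming a black-box objective; the function and parameter names are illustrative, not taken from the review:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f by polling the +/- coordinate directions.

    On a successful poll the point moves; when all 2n polls fail,
    the step (mesh size) is halved. Stops when the step drops
    below tol.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # simple decrease is accepted
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # refine the mesh
    return x, fx
```

Only function values are used, which is exactly the setting the review addresses.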
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
, 2008
Abstract

Cited by 1 (0 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e., inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
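For readers unfamiliar with subgradient iterations, here is the basic subgradient template that methods in the MSG family build on. This is a generic sketch with illustrative names, applied directly to a nonsmooth objective; it is not the authors' IMSG dual update on the sharp augmented Lagrangian:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=500):
    """Plain subgradient method with diminishing steps t_k = 1/(k+1).

    Because a subgradient step need not decrease f, the best
    iterate seen so far is tracked and returned.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        g = subgrad(x)               # any subgradient at x
        x = x - g / (k + 1)          # diminishing step size
        fx = f(x)
        if fx < best_f:              # keep the best iterate
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```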
Documenta Math. Nelder, Mead, and the Other Simplex Method
, 2012
Abstract
In the mid-1960s, two English statisticians working at the National Vegetable Research Station invented the Nelder–Mead “simplex” direct search method. The method emerged at a propitious time, when there was great and growing interest in computer solution of complex nonlinear real-world optimization problems. Because obtaining first derivatives of the function f to be optimized was frequently impossible, the strong preference of most practitioners was for a “direct search” method that required only the values of f; the new Nelder–Mead method fit the bill perfectly. Since then, the Nelder–Mead method has consistently been one of the most used and cited methods for unconstrained optimization. We are fortunate indeed that the late John Nelder has left us a detailed picture of the method’s inspiration and development [11, 14]. For Nelder, the starting point was a 1963 conference talk by William Spendley of Imperial Chemical Industries about a “simplex” method recently proposed by Spendley, Hext, and Himsworth for response surface exploration [15]. Despite its name, this method is not related to George Dantzig’s simplex method for linear programming, which dates from 1947. Nonetheless, the name is entirely appropriate because the Spendley, Hext, and Himsworth method is defined by a simplex; the method constructs a pattern of n + 1 points in dimension n, which moves across the surface to be explored, sometimes changing size, but always retaining the same shape. Inspired by Spendley’s talk, Nelder had what he describes as “one useful new idea”: while defining each iteration via a simplex, add the crucial ingredient that the shape of the simplex should “adapt itself to the local landscape” [12].
During a sequence of lively discussions with his colleague Roger Mead, where “each of us [was] able to try out the ideas of the previous evening on the other the following morning”, they developed a method in which the simplex could “elongate itself to move down long gentle slopes”, or “contract itself on to the final minimum” [11]. And, as they say, the rest is history.
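The Nelder–Mead method remains the default derivative-free choice in many numerical libraries; for example, SciPy exposes it through scipy.optimize.minimize. A short usage sketch on the Rosenbrock test function, with tolerance values chosen purely for illustration:

```python
from scipy.optimize import minimize

# Nelder-Mead uses only function values -- no derivatives of f needed.
def rosen(v):
    return (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2

res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
print(res.x)  # converges near the minimizer (1, 1)
```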
Unconstrained Derivative-Free Optimization by Successive Approximation
Abstract
We present an algorithmic framework for unconstrained derivative-free optimization based on dividing the search space into regions (partitions). Every region is assigned a representative point. The representative points form a grid. A piecewise-constant approximation to the function subject to optimization is constructed using a partitioning and its corresponding grid. The convergence of the framework to a stationary point of a continuously differentiable function is guaranteed under mild assumptions. The proposed framework is appropriate for upgrading heuristics that lack mathematical analysis into algorithms that guarantee convergence to a local minimizer. A convergent variant of the Nelder–Mead algorithm that conforms to the given framework is constructed. The algorithm is compared to two previously published convergent variants of the NM algorithm. The comparison is conducted on the Moré–Garbow–Hillstrom set of test problems and on four variably dimensional functions with dimension up to 100. The results of the comparison show that the proposed algorithm outperforms both previously published algorithms.
Key words: unconstrained minimization, direct search, successive approximation, grid, simplex
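The core device — representative points on a grid inducing a piecewise-constant surrogate of f — can be sketched as follows. The class and helper names here are hypothetical; the paper's partition refinement and convergence machinery sit on top of this idea:

```python
import numpy as np

def snap_to_grid(x, h):
    """Representative point of the grid cell containing x (spacing h)."""
    return np.round(np.asarray(x, dtype=float) / h) * h

class PiecewiseConstantModel:
    """Caches f at grid representatives: a piecewise-constant surrogate.

    Every point in a cell maps to that cell's representative, so f is
    evaluated at most once per cell and the surrogate is constant on
    each cell.
    """
    def __init__(self, f, h):
        self.f, self.h, self.cache = f, h, {}

    def __call__(self, x):
        rep = snap_to_grid(x, self.h)
        key = tuple(rep)
        if key not in self.cache:      # evaluate once per cell
            self.cache[key] = self.f(rep)
        return self.cache[key]
```

Two points falling in the same cell share one cached function value, which is what makes the surrogate cheap to refine.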
Sprouting search: an algorithmic framework for asynchronous parallel unconstrained optimization
Abstract
Grid Restrained Nelder–Mead Algorithm
DOI: 10.1007/s10589-005-3912-z
, 2004
Abstract
Abstract. Probably the most popular algorithm for unconstrained minimization of problems of moderate dimension is the Nelder–Mead algorithm, published in 1965. Despite its age, only limited convergence results exist. Several counterexamples can be found in the literature for which the algorithm performs badly or even fails. A convergent variant derived from the original Nelder–Mead algorithm is presented. The proposed algorithm’s convergence is based on the principle of grid restrainment and therefore, unlike the recent convergent variant proposed by Price, Coope, and Byatt, does not require sufficient descent. Convergence properties of the proposed grid-restrained algorithm are analysed. Results of numerical testing are also included and compared to the results of the algorithm proposed by Price et al. The results clearly demonstrate that the proposed grid-restrained algorithm is an efficient direct search method.
Keywords: unconstrained minimization, Nelder–Mead algorithm, direct search, simplex, grid
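The grid restrainment principle can be illustrated by its smallest piece: snapping trial points onto a fixed grid before acceptance, so that only finitely many distinct points exist in any bounded region and simple decrease suffices for the convergence theory. A sketch with a hypothetical helper name, not the paper's full algorithm:

```python
import numpy as np

def restrain(point, origin, h):
    """Snap a trial point onto the grid {origin + h * Z^n}.

    A Nelder-Mead trial point (reflection, expansion, ...) would be
    passed through this before being compared against the worst
    simplex vertex; the grid spacing h is reduced as the search
    converges.
    """
    p = np.asarray(point, dtype=float)
    o = np.asarray(origin, dtype=float)
    return o + np.round((p - o) / h) * h
```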
Maximising Output from Oil Reservoirs without Water Breakthrough
ANZIAM J. 45 (2004), 401–422
, 2003
Abstract
Often in oil reservoirs a layer of water lies under the layer of oil. The suction pressure due to a distribution of oil wells will cause the oil–water interface to rise up towards the wells. Given a particular distribution of oil wells, we are interested in finding the flow rates of each well that maximise the total flow rate without the interface breaking through to the wells. A method for finding optimal flow rates is developed using the Muskat model to approximate the interface height, and a version of the Nelder–Mead simplex method for optimisation. A variety of results are presented, including the perhaps non-intuitive result that it is better to turn off some oil wells when they are sufficiently close to one another.