Results 1–10 of 14
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
Cited by 129 (13 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
A dataset and evaluation methodology for template-based tracking algorithms
In IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR’09), 2009
Cited by 16 (2 self)
Unlike dense stereo, optical flow or multi-view stereo, template-based tracking lacks benchmark datasets allowing a fair comparison between state-of-the-art algorithms. Until now, in order to evaluate objectively and quantitatively the performance and the robustness of template-based tracking algorithms, mainly synthetically generated image sequences were used. The evaluation is therefore often intrinsically biased. In this paper, we describe the process we carried out to perform the acquisition of real scene image sequences with very precise and accurate ground truth poses using an industrial camera rigidly mounted on the end-effector of a high-precision robotic measurement arm. For the acquisition, we considered most of the critical parameters that influence the tracking results, such as: the texture richness and the texture repeatability of the objects to be tracked, the camera motion and speed, the changes of the object scale in the images, and variations of the lighting conditions over time. We designed an evaluation scheme for object detection and inter-frame tracking algorithms and used the image sequences to apply this scheme to several state-of-the-art algorithms. The image sequences will be made freely available for testing, submitting and evaluating new template-based tracking algorithms, i.e. algorithms that detect or track a planar object in an image sequence given only one image of the object (called the template).
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
2008
Cited by 2 (1 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
Convergence of the restricted Nelder–Mead algorithm in two dimensions, in preparation
1997
Cited by 1 (1 self)
The Nelder–Mead algorithm, a long-standing direct search method for unconstrained optimization published in 1965, is designed to minimize a scalar-valued function f of n real variables using only function values, without any derivative information. Each Nelder–Mead iteration is associated with a nondegenerate simplex defined by n + 1 vertices and their function values; a typical iteration produces a new simplex by replacing the worst vertex by a new point. Despite the method’s widespread use, theoretical results have been limited: for strictly convex objective functions of one variable with bounded level sets, the algorithm always converges to the minimizer; for such functions of two variables, the diameter of the simplex converges to zero, but examples constructed by McKinnon show that the algorithm may converge to a nonminimizing point. This paper considers the restricted Nelder–Mead algorithm, a variant that does not allow expansion steps. In two dimensions we show that, for any nondegenerate starting simplex and any twice-continuously differentiable function with positive definite Hessian and bounded level sets, the algorithm always converges to the minimizer. The proof is based on treating the method as a discrete dynamical system, and relies on several techniques that are nonstandard in convergence proofs for unconstrained optimization.
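The iteration described in this abstract (a simplex of n + 1 vertices whose worst vertex is replaced each step) can be sketched as follows. This is a hypothetical minimal implementation, not code from the paper: it assumes the standard coefficients (reflection 1, expansion 2, contraction 1/2, shrink 1/2), and the `restricted=True` flag skips the expansion step, as in the restricted variant the paper analyzes.

```python
import numpy as np

def nelder_mead_step(simplex, f, restricted=False):
    """One Nelder-Mead iteration: replace the worst of the n+1 vertices."""
    # Sort vertices so simplex[0] is the best and simplex[-1] the worst.
    simplex = simplex[np.argsort([f(x) for x in simplex])]
    worst = simplex[-1]
    centroid = simplex[:-1].mean(axis=0)        # centroid of the n best vertices

    xr = centroid + (centroid - worst)          # reflected point
    if f(xr) < f(simplex[0]) and not restricted:
        xe = centroid + 2.0 * (centroid - worst)    # try expanding further
        simplex[-1] = xe if f(xe) < f(xr) else xr
    elif f(xr) < f(simplex[-2]):                # better than the second worst
        simplex[-1] = xr
    else:
        # Contract toward the better of the worst and reflected points.
        xc = centroid + 0.5 * ((xr if f(xr) < f(worst) else worst) - centroid)
        if f(xc) < min(f(xr), f(worst)):
            simplex[-1] = xc
        else:
            # Shrink every vertex halfway toward the best one.
            simplex[1:] = simplex[0] + 0.5 * (simplex[1:] - simplex[0])
    return simplex
```

Repeating this step on a strictly convex quadratic drives the simplex diameter to zero and the best vertex toward the minimizer; the restricted variant behaves the same way here, consistent with the convergence result described above.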
Nelder, Mead, and the Other Simplex Method
 Documenta Math., p. 271, 2012
Keywords: optimization, non-derivative optimization

In the mid-1960s, two English statisticians working at the National Vegetable Research Station invented the Nelder–Mead “simplex” direct search method. The method emerged at a propitious time, when there was great and growing interest in computer solution of complex nonlinear real-world optimization problems. Because obtaining first derivatives of the function f to be optimized was frequently impossible, the strong preference of most practitioners was for a “direct search” method that required only the values of f; the new Nelder–Mead method fit the bill perfectly. Since then, the Nelder–Mead method has consistently been one of the most used and cited methods for unconstrained optimization. We are fortunate indeed that the late John Nelder has left us a detailed picture of the method’s inspiration and development [11, 14]. For Nelder, the starting point was a 1963 conference talk by William Spendley of Imperial Chemical Industries about a “simplex” method recently proposed by Spendley, Hext, and Himsworth for response surface exploration [15]. Despite its name, this method is not related to George Dantzig’s simplex method for linear programming, which dates from 1947. Nonetheless, the name is entirely appropriate because the Spendley, Hext, and Himsworth method is defined by a simplex; the method constructs a pattern of n + 1 points in dimension n, which moves across the surface to be explored, sometimes changing size, but always retaining the same shape. Inspired by Spendley’s talk, Nelder had what he describes as “one useful new idea”: while defining each iteration via a simplex, add the crucial ingredient that the shape of the simplex should “adapt itself to the local landscape” [12].
During a sequence of lively discussions with his colleague Roger Mead, where “each of us [was] able to try out the ideas of the previous evening on the other the following morning”, they developed a method in which the simplex could “elongate itself to move down long gentle slopes”, or “contract itself on to the final minimum” [11]. And, as they say, the rest is history.
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
 SIAM Review, Vol. 45, No. 3, pp. 385–482, 2003
Unconstrained Derivative-Free Optimization by Successive Approximation
We present an algorithmic framework for unconstrained derivative-free optimization based on dividing the search space into regions (partitions). Every partition is assigned a representative point. The representative points form a grid. A piecewise constant approximation to the function subject to optimization is constructed using a partitioning and its corresponding grid. The convergence of the framework to a stationary point of a continuously differentiable function is guaranteed under mild assumptions. The proposed framework is appropriate for upgrading heuristics that lack mathematical analysis into algorithms that guarantee convergence to a local minimizer. A convergent variant of the Nelder–Mead algorithm that conforms to the given framework is constructed. The algorithm is compared to two previously published convergent variants of the NM algorithm. The comparison is conducted on the Moré–Garbow–Hillstrom set of test problems and on four variably dimensional functions with dimension up to 100. The results of the comparison show that the proposed algorithm outperforms both previously published algorithms. Key words: unconstrained minimization, direct search, successive approximation, grid, simplex
Grid Restrained Nelder–Mead Algorithm
 DOI: 10.1007/s10589-005-3912-z, 2004
Abstract. Probably the most popular algorithm for unconstrained minimization for problems of moderate dimension is the Nelder–Mead algorithm published in 1965. Despite its age, only limited convergence results exist. Several counterexamples can be found in the literature for which the algorithm performs badly or even fails. A convergent variant derived from the original Nelder–Mead algorithm is presented. The proposed algorithm’s convergence is based on the principle of grid restrainment and therefore, unlike the recent convergent variant proposed by Price, Coope, and Byatt, does not require a sufficient-descent condition. Convergence properties of the proposed grid-restrained algorithm are analysed. Results of numerical testing are also included and compared to the results of the algorithm proposed by Price et al. The results clearly demonstrate that the proposed grid-restrained algorithm is an efficient direct search method. Keywords: unconstrained minimization, Nelder–Mead algorithm, direct search, simplex, grid
Sprouting search: an algorithmic framework for asynchronous parallel unconstrained optimization