Results 1–10 of 71
Derivative-free optimization: A review of algorithms and comparison of software implementations
On the Geometry Phase in Model-Based Algorithms for Derivative-Free Optimization
, 2008
Abstract

Cited by 26 (0 self)
A numerical study of model-based methods for derivative-free optimization is presented. These methods typically include a geometry phase whose goal is to ensure the adequacy of the interpolation set. The paper studies the performance of an algorithm that dispenses with the geometry phase altogether (and therefore does not attempt to control the position of the interpolation set). Data is presented describing the evolution of the condition number of the interpolation matrix and the accuracy of the gradient estimate. The experiments are performed on smooth unconstrained optimization problems with dimensions ranging between 2 and 15.
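The two quantities tracked in this study can be illustrated with a minimal sketch (ours, not the paper's code): the gradient estimate obtained by linear interpolation on a sample set, and the condition number of the interpolation matrix, which blows up when the points are badly positioned. The function and point names are illustrative.

```python
import numpy as np

def simplex_gradient(f, ys):
    """Estimate the gradient of f by linear interpolation on the points
    ys = [y0, y1, ..., yn] (n+1 points in R^n): solve M g = df, where
    row i of M is y_i - y_0 and df_i = f(y_i) - f(y_0)."""
    y0 = ys[0]
    M = np.array([y - y0 for y in ys[1:]])          # interpolation matrix
    df = np.array([f(y) - f(y0) for y in ys[1:]])   # function differences
    # The condition number of M measures how well-poised the sample set
    # is: a large value signals a nearly degenerate set of points.
    cond = np.linalg.cond(M)
    return np.linalg.solve(M, df), cond
```

On a linear function with the coordinate simplex, the estimate is exact and the interpolation matrix is perfectly conditioned; as the points drift toward a lower-dimensional set during a run, `cond` grows and the gradient estimate degrades.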
Incorporating minimum Frobenius norm models in direct search
 Computational Optimization and Applications
Abstract

Cited by 25 (6 self)
The goal of this paper is to show that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods. The approach taken here is to maintain the structure of directional direct-search methods, organized around a search and a poll step, and to use the set of previously evaluated points generated during a direct-search run to build the models. The minimization of the models within a trust region provides an enhanced search step. Our numerical results show that such a procedure can lead to a significant improvement of direct search for smooth, piecewise smooth, and stochastic and nonstochastic noisy problems.
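A minimum Frobenius norm quadratic model can be built by solving the standard KKT saddle system for this problem; the sketch below uses that textbook formulation and is our illustration, not necessarily the paper's implementation.

```python
import numpy as np

def min_frobenius_model(Y, fvals):
    """Build a quadratic model m(x) = c + g@x + 0.5*x@H@x interpolating
    fvals at the rows of Y (p points in R^n), choosing the Hessian of
    minimum Frobenius norm.  Stationarity gives H = sum_i lam_i y_i y_i^T,
    leading to the saddle system
        [Q  P] [lam  ]   [f]
        [P' 0] [c; g ] = [0],  Q_ij = 0.5*(y_i@y_j)**2,  P = [1 | Y]."""
    p, n = Y.shape
    Q = 0.5 * (Y @ Y.T) ** 2
    P = np.hstack([np.ones((p, 1)), Y])
    K = np.block([[Q, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([fvals, np.zeros(n + 1)])
    sol = np.linalg.solve(K, rhs)
    lam, c, g = sol[:p], sol[p], sol[p + 1:]
    H = sum(l * np.outer(y, y) for l, y in zip(lam, Y))
    return c, g, H
```

With fewer points than a full quadratic requires, the system picks the interpolant whose Hessian is smallest in Frobenius norm; with enough poised points it reproduces a quadratic exactly.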
ORBIT: Optimization by radial basis function interpolation in trust-regions
 SIAM Journal on Scientific Computing
, 2008
Abstract

Cited by 20 (4 self)
We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also very encouraging and support ORBIT's effectiveness on black-box functions for which no special mathematical structure is known or available.
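An RBF interpolant of the kind used in such trust-region frameworks can be sketched as follows (one common choice: a cubic kernel with a linear polynomial tail; the function and point names are ours, and this is an illustration rather than ORBIT's implementation).

```python
import numpy as np

def rbf_interpolant(Y, fvals):
    """Cubic RBF interpolant with a linear tail:
        s(x) = sum_i lam_i * ||x - y_i||^3 + a + b@x.
    The linear tail exploits the conditional positive definiteness of
    the cubic kernel: the saddle system below is nonsingular whenever
    the points are poised for linear interpolation."""
    p, n = Y.shape
    dist = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    Phi = dist ** 3                                   # kernel matrix
    Pi = np.hstack([np.ones((p, 1)), Y])              # linear-tail basis
    K = np.block([[Phi, Pi], [Pi.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([fvals, np.zeros(n + 1)])
    sol = np.linalg.solve(K, rhs)
    lam, coef = sol[:p], sol[p:]

    def s(x):
        r = np.linalg.norm(Y - x, axis=1)
        return lam @ r ** 3 + coef[0] + coef[1:] @ x
    return s
```

Because of the linear tail, the interpolant reproduces linear functions exactly, and unlike polynomial interpolation it accepts any number of (poised) points.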
Direct Multisearch for Multiobjective Optimization
, 2010
Abstract

Cited by 13 (0 self)
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove, under the common assumptions used in direct search for single-objective optimization, that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front.
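The list of nondominated points at the heart of such a method amounts to a Pareto filter. A minimal sketch (ours, with illustrative names), for minimization:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_nondominated(archive, candidate):
    """Maintain a list of mutually nondominated objective vectors:
    reject the candidate if something in the archive dominates it;
    otherwise add it and drop the archive points it dominates."""
    if any(dominates(f, candidate) for f in archive):
        return archive
    return [f for f in archive if not dominates(candidate, f)] + [candidate]
```

In a DMS-style loop, every evaluated point would pass through such a filter, and new poll centers would be drawn from the surviving list.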
MNH: A Derivative-Free Optimization Algorithm Using Minimal Norm Hessians
, 2008
Abstract

Cited by 13 (1 self)
We introduce MNH, a new algorithm for unconstrained optimization when derivatives are unavailable, primarily targeting applications that require running computationally expensive deterministic simulations. MNH relies on a trust-region framework with an underdetermined quadratic model that interpolates the function at a set of data points. We show how to construct this interpolation set to yield computationally stable parameters for the model and, in doing so, obtain an algorithm which converges to first-order critical points. Preliminary results are encouraging and show that MNH makes effective use of the points evaluated in the course of the optimization.
LIBOPT – An environment for testing solvers on heterogeneous collections of problems – The manual, version 2.1
 INRIA, BP 105, 78153 Le Chesnay
Abstract

Cited by 11 (0 self)
The Libopt environment is both a methodology and a set of tools that can be used for testing, comparing, and profiling solvers on problems belonging to various collections. These collections can be heterogeneous in the sense that their problems can have common features that differ from one collection to the other. Libopt brings a unified view on this composite world by offering, for example, the possibility to run any solver on any problem compatible with it, using the same Unix/Linux command. The environment also provides tools for comparing the results obtained by solvers on a specified set of problems. Most of the scripts going with the Libopt environment have been written in Perl.
PSwarm: A Hybrid Solver for Linearly Constrained Global Derivative-Free Optimization
, 2009
Abstract

Cited by 10 (2 self)
PSwarm was developed originally for the global optimization of functions without derivatives and where the variables are within upper and lower bounds. The underlying algorithm used is a pattern search method, or more specifically, a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the (optional) search step of coordinate search, the algorithm incorporates a particle swarm scheme for dissemination of points in the feasible region, equipping the overall method with the capability of finding a global minimizer. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods based only on function values. PSwarm is extended in this paper to handle general linear constraints. The poll step now incorporates positive generators for the tangent cone of the approximated active constraints, including a provision for the degenerate case. The search step has also been adapted accordingly. In particular, the initial population for the particle swarm used in the search step is computed by first inscribing an ellipsoid of maximum volume in the feasible set. We have again compared PSwarm to other solvers (including some designed for global optimization) and the results confirm its competitiveness in terms of efficiency and robustness.
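The search/poll structure underlying coordinate search can be sketched as follows; the `search` hook is where a particle-swarm iteration would plug in. This is an illustrative skeleton with names of our choosing, not PSwarm's code, and it handles only the unconstrained case.

```python
import numpy as np

def coordinate_search(f, x0, alpha=1.0, tol=1e-6, max_evals=10000, search=None):
    """Directional direct search with the +/- coordinate directions as
    the poll set.  An optional `search` callback may propose a trial
    point before polling (e.g. one particle-swarm iteration); the step
    size alpha shrinks only after an unsuccessful poll."""
    x, fx, evals = np.asarray(x0, float), f(x0), 1
    while alpha > tol and evals < max_evals:
        if search is not None:                     # optional search step
            s = search(x, alpha)
            fs = f(s); evals += 1
            if fs < fx:
                x, fx = s, fs
                continue
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):  # poll step
            t = x + alpha * d
            ft = f(t); evals += 1
            if ft < fx:                            # successful poll: move
                x, fx = t, ft
                break
        else:                                      # all polls failed: shrink
            alpha *= 0.5
    return x, fx
```

Convergence to stationary points rests on the poll directions positively spanning the space; the search step only adds global exploration and never endangers that guarantee.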
Smoothing and Worst-Case Complexity for Direct-Search Methods in Nonsmooth Optimization
, 2012
Abstract

Cited by 8 (2 self)
In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst-case complexity of direct-search methods is of the same order as that of steepest descent for derivative-based optimization; more precisely, the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportional to the inverse of the threshold squared. Motivated by the lack of such a result in the nonsmooth case, we propose, analyze, and test a class of smoothing direct-search methods for the unconstrained optimization of nonsmooth functions. Given a parameterized family of smoothing functions for the nonsmooth objective function, dependent on a smoothing parameter, this class of methods consists of applying a direct-search algorithm for a fixed value of the smoothing parameter until the step size is relatively small, after which the smoothing parameter is reduced and the process is repeated. One can show that the worst-case complexity (or cost) of this procedure is roughly one order of magnitude worse than the one for direct search or steepest descent on smooth functions. The class of smoothing direct-search methods is also shown to enjoy asymptotic global convergence properties. Some preliminary numerical experiments indicate that this approach leads to better values of the objective function, pushing the optimization further in some cases, apparently without an additional cost in the number of function evaluations.
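The outer loop described in this abstract can be sketched in a few lines (our illustration, with an unoptimized inner coordinate search; the stopping rule "step size small relative to the smoothing parameter" is one natural reading of the scheme). A classic smoothing family for |t| is sqrt(t^2 + mu^2).

```python
import numpy as np

def smoothing_direct_search(f_smooth, x0, mu=1.0, mu_min=1e-4):
    """Smoothing direct search: run a coordinate search on the smoothed
    function f_smooth(x, mu) until the step size alpha falls below mu,
    then reduce mu and repeat, warm-starting from the current iterate."""
    x = np.asarray(x0, float)
    while mu > mu_min:
        alpha = 1.0
        while alpha > mu:                # inner direct search on f(., mu)
            improved = False
            for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
                t = x + alpha * d
                if f_smooth(t, mu) < f_smooth(x, mu):
                    x, improved = t, True
                    break
            if not improved:             # unsuccessful poll: shrink step
                alpha *= 0.5
        mu *= 0.1                        # tighten the smoothing parameter
    return x
```

Each inner stage is an ordinary smooth direct search, which is what lets the worst-case analysis of the smooth case be reused, at the cost of the extra outer loop over mu.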