Results 1–10 of 14
Optimization by direct search: New perspectives on some classical and modern methods
 SIAM Review
, 2003
Cited by 124 (13 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
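The unifying framework the review describes covers very simple schemes such as compass (coordinate) search. A minimal sketch under the classical simple-decrease rule, with a fixed contraction factor; the function and parameter names are illustrative choices, not taken from the review:

```python
# Compass search: a derivative-free direct search method. Poll the 2n
# coordinate directions around the incumbent; if no poll point improves
# the objective, halve the step size (contract the mesh) and poll again.

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                y = list(x)
                y[i] += sign * step
                fy = f(y)
                if fy < fx:          # simple decrease, as in the classical methods
                    x, fx = y, fy
                    improved = True
        if not improved:
            step *= 0.5              # contract the step and re-poll
    return x, fx

# Example: minimize a convex quadratic with minimum at (1, -2)
xmin, fmin = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```

Convergence theory for this class requires only that polling eventually succeeds on a descent direction, which is why no gradient evaluation appears anywhere in the loop.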
Clustering with a Genetically Optimized Approach
 IEEE Transactions on Evolutionary Computation
, 1999
Cited by 65 (1 self)
This paper describes a genetically guided approach to optimizing the hard (J1) and fuzzy (Jm) c-means functionals used in cluster analysis. Our experiments show that a genetic algorithm ameliorates the difficulty of choosing an initialization for the c-means clustering algorithms. Experiments use six data sets, including the Iris data, magnetic resonance and color images. The genetic algorithm approach is generally able to find the lowest known Jm value or a Jm associated with a partition very similar to that associated with the lowest Jm value. On data sets with several local extrema, the GA approach always avoids the less desirable solutions. Degenerate partitions are always avoided by the GA approach, which provides an effective method for optimizing clustering models whose objective function can be represented in terms of cluster centers. The time cost of genetic guided clustering is shown to make a series of random initializations of fuzzy/hard c-means, where the partition a...
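The two objectives the GA optimizes can be evaluated directly from a set of cluster centers. A sketch, assuming Euclidean distance and the standard closed-form memberships for Jm (variable names are mine, not the paper's):

```python
# J1: hard c-means objective -- each point contributes its squared
# distance to the nearest center.
# Jm: fuzzy c-means objective with memberships eliminated in closed form
# (valid for m > 1): u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).

def _d2(x, v):
    return sum((a - b) ** 2 for a, b in zip(x, v))

def hard_J1(data, centers):
    return sum(min(_d2(x, v) for v in centers) for x in data)

def fuzzy_Jm(data, centers, m=2.0):
    J = 0.0
    for x in data:
        d2 = [_d2(x, v) for v in centers]
        if any(d == 0.0 for d in d2):   # point coincides with a center
            continue
        for di in d2:
            # (d_i/d_j)^(2/(m-1)) on distances == (d2_i/d2_j)^(1/(m-1))
            u = 1.0 / sum((di / dj) ** (1.0 / (m - 1.0)) for dj in d2)
            J += (u ** m) * di
    return J
```

Because both functionals depend only on the centers, a GA can evolve center positions directly, which is the representation choice the abstract alludes to.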
On The Convergence Of The Multidirectional Search Algorithm
, 1991
Cited by 62 (9 self)
This paper presents the convergence analysis for the multidirectional search algorithm, a direct search method for unconstrained minimization. The analysis follows the classic lines of proofs of convergence for gradient-related methods. The novelty of the argument lies in the fact that explicit calculation of the gradient is unnecessary, although it is assumed that the function is continuously differentiable over some subset of the domain. The proof can be extended to treat most nonsmooth cases of interest; the argument breaks down only at points where the derivative exists but is not continuous. Finally, it is shown how a general convergence theory can be developed for an entire class of direct search methods (which includes such methods as the factorial design algorithm and the pattern search algorithm) that share a key feature of the multidirectional search algorithm.
Key words. unconstrained optimization, convergence analysis, direct search methods, parallel optimization, mult...
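One multidirectional search iteration reflects every vertex of the simplex through the current best vertex, then tries an expanded simplex on success or a contracted one on failure. A sketch with standard expansion and contraction factors (the factor values and names are my illustrative choices):

```python
# One multidirectional search (MDS) step on a simplex of n+1 vertices.
# Unlike Nelder-Mead, the whole simplex moves rigidly, which is what
# makes the convergence analysis tractable.

def mds_iteration(f, simplex, mu=2.0, theta=0.5):
    simplex = sorted(simplex, key=f)     # best vertex first
    v0 = simplex[0]

    def move(factor):
        # new vertex i = v0 + factor * (v0 - v_i); factor 1 reflects,
        # mu expands, -theta contracts toward the best vertex
        return [v0] + [tuple(a + factor * (a - b)
                             for a, b in zip(v0, vi))
                       for vi in simplex[1:]]

    reflected = move(1.0)
    if min(f(v) for v in reflected[1:]) < f(v0):
        expanded = move(mu)
        if min(f(v) for v in expanded[1:]) < min(f(v) for v in reflected[1:]):
            return expanded
        return reflected
    return move(-theta)                  # contraction step
```

Note that all trial vertices in a step are independent function evaluations, which is the property that makes MDS attractive for the parallel computing context mentioned above.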
Convergence of the Nelder-Mead simplex method to a nonstationary point
 SIAM J. Optim
, 1996
Cited by 54 (0 self)
This paper analyses the behaviour of the Nelder-Mead simplex method for a family of examples which cause the method to converge to a nonstationary point. All the examples use continuous functions of two variables. The family of functions contains strictly convex functions with up to three continuous derivatives. In all the examples the method repeatedly applies the inside contraction step with the best vertex remaining fixed. The simplices tend to a straight line which is orthogonal to the steepest descent direction. It is shown that this behaviour cannot occur for functions with more than three continuous derivatives. The stability of the examples is analysed.
Key words. Nelder-Mead method, direct search, simplex, unconstrained minimization
AMS subject classifications. 65K05
1. Introduction. Direct search methods are very widely used in chemical engineering, chemistry and medicine. They are a class of optimization methods which are easy to program, do not require derivatives and a...
Detection And Remediation Of Stagnation In The Nelder-Mead Algorithm Using A Sufficient Decrease Condition
 SIAM J. OPTIM
, 1997
Cited by 31 (1 self)
The Nelder-Mead algorithm can stagnate and converge to a nonoptimal point, even for very simple problems. In this note we propose a test for sufficient decrease which, if passed for the entire iteration, will guarantee convergence of the Nelder-Mead iteration to a stationary point if the objective function is smooth. Failure of this condition is an indicator of potential stagnation. As a remedy we propose a new step, which we call an oriented restart, which reinitializes the simplex to a smaller one with orthogonal edges which contains an approximate steepest descent step from the current best point. We also give results that apply when the objective function is a low-amplitude perturbation of a smooth function. We illustrate our results with some numerical examples.
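The test compares the decrease in the average simplex value against the norm of a simplex gradient. A rough 2-D paraphrase of that idea, not the paper's exact formulation; the constant alpha and the explicit 2x2 solve are my own illustrative choices:

```python
# Simplex gradient g solves V^T g = delta_f, where the rows of V^T are
# x_j - x_0 and delta_f collects f(x_j) - f(x_0). For a linear f this
# recovers the exact gradient.

def simplex_gradient(verts, fvals):
    (x0, x1, x2), (f0, f1, f2) = verts, fvals
    a, b = x1[0] - x0[0], x1[1] - x0[1]
    c, d = x2[0] - x0[0], x2[1] - x0[1]
    det = a * d - b * c
    r1, r2 = f1 - f0, f2 - f0
    return ((d * r1 - b * r2) / det, (a * r2 - c * r1) / det)

def sufficient_decrease(fvals_old, fvals_new, g, alpha=1e-4):
    """Average simplex value must drop by at least alpha * ||g||^2;
    failure flags potential stagnation (triggering an oriented restart)."""
    avg_old = sum(fvals_old) / len(fvals_old)
    avg_new = sum(fvals_new) / len(fvals_new)
    return avg_new - avg_old < -alpha * (g[0] ** 2 + g[1] ** 2)
```

The oriented restart then rebuilds a small simplex with orthogonal edges around the best point, aligned with the negative simplex gradient.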
Tuning topology generators using spectral distributions
 In Lecture Notes in Computer Science, Volume 5119, SPEC International Performance Evaluation Workshop
, 2008
Cited by 8 (7 self)
Abstract. An increasing number of synthetic topology generators are available, each claiming to produce representative Internet topologies. Every generator has its own parameters, allowing the user to generate topologies with different characteristics. However, there exist no clear guidelines on tuning the values of these parameters in order to obtain a topology with specific characteristics. In this paper we optimize the parameters of several topology generators to match a given Internet topology. The optimization is performed either with respect to the link density or to the spectrum of the normalized Laplacian matrix. Contrary to approaches in the literature that rely only on the largest eigenvalues, we take into account the set of all eigenvalues. However, we show that on their own the eigenvalues cannot be used to construct a metric for optimizing parameters. Instead we present a weighted spectral method which simultaneously takes into account all the properties of the graph.
Keywords: Internet Topology, Graph Spectrum.
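The spectral target the paper tunes against is the full eigenvalue set of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}. A minimal sketch of computing that spectrum (the weighting scheme itself is the paper's contribution and is not reproduced here; the triangle graph is just an illustration):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Sorted eigenvalues of I - D^{-1/2} A D^{-1/2}; all lie in [0, 2].
    Isolated nodes (degree 0) contribute a zero row/column."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    inv_sqrt = np.zeros_like(d)
    nz = d > 0
    inv_sqrt[nz] = 1.0 / np.sqrt(d[nz])
    L = np.eye(len(A)) - (inv_sqrt[:, None] * A) * inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

# Triangle graph K3: the spectrum is {0, 3/2, 3/2}
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
spectrum = normalized_laplacian_spectrum(triangle)
```

Comparing entire sorted spectra (rather than only the largest eigenvalues) is what lets the optimization distinguish topologies that extreme eigenvalues alone cannot.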
Evaluating the Uncertainty in Water Quality Predictions  A Case Study
Cited by 3 (3 self)
A method for assessing model result uncertainty is presented and applied to a case where a paper mill wastewater is discharged into an estuary in the Southeastern U.S. Model result uncertainty was quantified by incorporating the uncertainty analysis into model calibration. The two-dimensional, laterally averaged model CE-QUAL-W2 was used to predict water quality conditions. The water quality model was calibrated against field measurements of longitudinal and vertical variations in salinity, dissolved oxygen, and biochemical oxygen demand (BOD) concentrations. A quantitative, multi-constituent criterion for acceptable calibration was used to identify plausible parameter sets. A collection of plausible parameter sets was then identified and used to assess the uncertainty in dissolved oxygen prediction, and the uncertainty in predicted system response to a reduction in organic matter loading. A search procedure was also developed to minimize the calibration criterion statistic and to assess the range of model predictions. Plausible parameter sets differed widely in their parameter values, and they produced widely different dissolved oxygen concentration predictions. The system response to reduced loading, however, was found to be very similar between the plausible parameter sets.
Comparison Of Response Surface Methodology And The Nelder And Mead Simplex Method For Optimization In Microsimulation Models
Cited by 3 (3 self)
Microsimulation models are increasingly used in the evaluation of cancer screening. Latent parameters of such models can be estimated by optimization of the goodness-of-fit. We compared the efficiency and accuracy of the Response Surface Methodology and the Nelder and Mead Simplex Method for optimization of microsimulation models. To this end, we tested several automated versions of both methods on a small microsimulation model, as well as on a standard set of test functions. With respect to accuracy, Response Surface Methodology performed better in case of optimization of the microsimulation model, whereas the results for the test functions were rather variable. The Nelder and Mead Simplex Method performed more efficiently than Response Surface Methodology, both for the microsimulation model and the test functions.
Keywords: Optimization; Simulation; Health
Adaptive Extensions of the Nelder and Mead Simplex Method for Optimization of Stochastic Simulation Models
, 2000
Cited by 3 (1 self)
This paper investigates the performance of adaptive extensions of the Nelder and Mead simplex method for optimization of stochastic simulation models. Although the Nelder and Mead simplex method was originally designed for optimization of deterministic multidimensional functions (Nelder and Mead, 1965), it is frequently used for the optimization of stochastic objective functions. In particular, this method can be used for the optimization of stochastic simulation models, where one tries to estimate the model parameters that optimize some specific stochastic output of the simulation model. In this optimization procedure, the stochastic simulation model is often considered as a black-box model (Pflug, 1996) where the output of the simulation model can be regarded as a stochastic function of the model parameters. The goal of this investigation is to find optimization methods that can be used for stochastic simulation models for which the corresponding stochastic objective function has the following characteristics (Wright, 1996): ...
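Since the simulation is treated as a black-box stochastic objective, one common adaptive extension is to average several independent replications at each parameter point before the simplex method compares vertices. A minimal sketch of that wrapper; the example simulator, noise level, and replication count are hypothetical:

```python
import random

def sample_average(sim, n_rep):
    """Turn a noisy simulation output into a smoother objective by
    averaging n_rep independent replications at the same parameters.
    The noise standard deviation shrinks by a factor sqrt(n_rep)."""
    def objective(theta):
        return sum(sim(theta) for _ in range(n_rep)) / n_rep
    return objective

# Hypothetical simulator: a smooth response plus Gaussian noise
random.seed(0)
noisy_sim = lambda t: (t - 1.0) ** 2 + random.gauss(0.0, 0.1)
smoothed = sample_average(noisy_sim, 400)
```

Adaptive variants adjust n_rep as the simplex shrinks, spending more replications only when vertex values become hard to distinguish from noise.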