## A Grid Algorithm for Bound Constrained Optimization of Noisy Functions (1995)

Venue: IMA J. of Numerical Analysis

Citations: 27 (2 self)

### BibTeX

@ARTICLE{Elster95agrid,
  author  = {Clemens Elster and Arnold Neumaier},
  title   = {A Grid Algorithm for Bound Constrained Optimization of Noisy Functions},
  journal = {IMA J. of Numerical Analysis},
  year    = {1995},
  pages   = {585--608}
}


### Citations

1371 | A simplex method for function minimization - Nelder, Mead - 1965
Citation Context: ...sion depends on x. For noisy function optimization, the usually recommended method (e.g., Nocedal [14], p. 202) and certainly the most used one (cf. Powell [15]) is the simplex method of Nelder & Mead [13], which makes no smoothness assumptions but has poor convergence properties. In this paper we propose an algorithm based on the use of quadratic models minimized over adaptively refined trust regions...

1346 | Practical Optimization - Gill, Murray, et al. - 1981
Citation Context: ...eved that quasi-Newton methods using a finite-difference approximation of the gradient are most efficient for the task of optimizing smooth functions when only function values are available [4], [14], [6]. In the case of substantial noise, however, the obtained function values show a discontinuous behaviour, and quasi-Newton methods using a finite-difference estimate of the gradient break down. Gill e...

1081 | Practical Methods of Optimization - Fletcher - 1981
Citation Context: ...it is believed that quasi-Newton methods using a finite-difference approximation of the gradient are most efficient for the task of optimizing smooth functions when only function values are available [4], [14], [6]. In the case of substantial noise, however, the obtained function values show a discontinuous behaviour, and quasi-Newton methods using a finite-difference estimate of the gradient break d...
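The breakdown described in this context can be made concrete: with step h and noise level σ, the error of a forward-difference derivative estimate contains a term of order σ/h, so a small step amplifies the noise. The sketch below (an illustration, not code from any cited work; the test function and noise level are invented) compares the clean and noisy cases:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, sigma):
    """f(x) = x^2 plus additive evaluation noise; f'(1) = 2 exactly."""
    return x ** 2 + sigma * rng.standard_normal()

def fd_grad(x, h, sigma):
    """Forward-difference estimate of f'(x) from two noisy evaluations."""
    return (f(x + h, sigma) - f(x, sigma)) / h

# Without noise, the forward difference at h = 1e-6 is accurate to O(h)...
clean = abs(fd_grad(1.0, 1e-6, 0.0) - 2.0)

# ...but with noise sigma = 1e-4 the error term sigma/h is of order 100,
# swamping the true derivative. Take the worst of several trials to show
# the typical magnitude.
noisy = max(abs(fd_grad(1.0, 1e-6, 1e-4) - 2.0) for _ in range(10))
print(clean, noisy)  # the noisy error is orders of magnitude larger
```

This is exactly why the quasi-Newton methods cited above fail under substantial noise: the "gradient" they receive is dominated by the amplified noise term.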

208 | Global Optimization - Törn, Žilinskas - 1989

148 | On the convergence of pattern search algorithms - Torczon - 1997

138 | On the experimental attainment of optimum conditions - Box, Wilson - 1951
Citation Context: ...n ≤ 12, computing time for the linear algebra remains in the range of seconds on a SUN SPARCstation, and does not matter for the applications to expensive function minimization. (The methods of Box & Wilson [1], Dixon [3], and Glad & Goldstein [7] avoid expensive linear algebra by evaluating the function on well-chosen experimental designs for which the least squares problem can be solved more cheaply expli...

83 | Theory of Algorithms for Unconstrained Optimization - Nocedal - 1992 |

78 | Recent developments in algorithms and software for trust region methods - Moré - 1983
Citation Context: ...viour of our method depends very little on the way the trust region is adapted, and convergence is not at all affected. More sophisticated strategies assessing the quality of the prediction (see Moré [10]), which are useful for the case when exact gradients are available, have no significant effect on the number of function evaluations needed: the inaccurate gradient obtained from the model fit reduces...

62 | Multi-directional search: a direct search algorithm for parallel machines - Torczon - 1989 |

62 | On the convergence of the multidirectional search algorithm - Torczon - 1991
Citation Context: ...iseless case, our algorithm converges to the set of stationary points, thus improving on convergence results for other optimization algorithms using (noiseless) function evaluations only (see Torczon [19, 20] and references quoted there) which only prove the existence of a subsequence converging to a stationary point. In the noisy case, we are able to prove that our algorithm generates at least one point...

52 | A direct search optimization method that models the objective and constraint functions by linear interpolation - Powell - 1994
Citation Context: ...ate must be repeated regularly when the precision depends on x. For noisy function optimization, the usually recommended method (e.g., Nocedal [14], p. 202) and certainly the most used one (cf. Powell [15]) is the simplex method of Nelder & Mead [13], which makes no smoothness assumptions but has poor convergence properties. In this paper we propose an algorithm based on the use of quadratic models mi...

48 | A stochastic method for global optimization - Boender, Kan, et al. - 1982
Citation Context: ...techniques which try to estimate gradients, and therefore often move to the closest minimizer, and also with stochastic techniques (cf. Törn & Žilinskas [17], Byrd et al. [2], Rinnooy Kan & Timmer [16]), which attempt to obtain a global optimizer but typically need thousands of function values even for low-dimensional problems. Section 2 gives the motivation and details of the algorithm. As current...

24 | A trust-region approach to linearly constrained optimization - Gay - 1984
Citation Context: ...umerical tests of the above proposed algorithm were performed using the 18 functions from a standard test set as described in Moré et al. [11]. The dimensions and bounds (see Table I) were chosen as given in Gay [5].

Table I: Bounds specifying Ω for the test problems (from Gay [5]; exponents indicate repetition)

| problem | dim | lower bounds | upper bounds |
| --- | --- | --- | --- |
| 7 | 3 | −100, −1, −1 | .8, 1, 1 |
| 18 | 6 | 0³, 1, 0, 0 | 2, 8, 1, 7, 5, 5 |
| 9 | 3 | .398, 1, −.5... | |

13 | Optimization of functions whose values are subject to small errors - Glad, Goldstein - 1977
Citation Context: ...ove that our algorithm generates at least one point with a gradient of an order of magnitude which is optimal for a given noise level. The only previous result of this type is due to Glad & Goldstein [7], but our assumptions for sufficient decrease are much weaker than theirs. Our proof techniques are based on arguments of the type used in the papers cited above, combined and extended to give the sha...

12 | Concurrent stochastic methods for global optimization - Byrd, Dert, et al. - 1990 |

6 | ACSIM: an accelerated constrained simplex technique - Dixon - 1973
Citation Context: ...tion of parameters in experiments or finite element calculations, reasonable function values can often be computed in the feasible region only. Very recently, we also learned of an old paper of Dixon [3] which combines a simplex technique similar to [13] with the use of occasional quadratic models, estimated by some kind of finite differences (Dixon's modules 5 and 6). The method works in the presenc...

5 | Efficient optimization of computationally expensive objective functions - Karidis, Turns - 1984
Citation Context: ...egy in practice). In Step 1, a quadratic approximation of the underlying function f is built from the function values already known, and used for prediction of progress. (This idea is not new; see, e.g., Karidis & Turns [9], though they describe the details of their implementation in vague terms only, and our best interpretation of it turned out not to give a very robust algorithm.) More specifically, let S denote the s...

1 | Fortran Library 1.1 - IMSL - 1990
Citation Context: ...nt matrix to X′ = XD⁻¹, where D is the diagonal matrix whose ith diagonal entry is the maximum norm of the ith column of X. We then compute a solution minimizing ‖Dθ‖ using the IMSL routine DL2VRR [8] for finding the minimum norm solution of least squares problems; this routine also handles the numerical rank determination problem via a singular value decomposition. Since (6) contains O(n²) vari...
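The column scaling and minimum-norm solve described in this context can be sketched in NumPy. Here `np.linalg.lstsq` stands in for the IMSL routine DL2VRR, since it likewise returns the SVD-based minimum-norm least squares solution and handles rank deficiency; the example matrix and right-hand side are invented:

```python
import numpy as np

def scaled_min_norm_lstsq(X, y):
    """Among all least squares solutions of X theta ~ y, return the one
    minimizing ||D theta||, where D is diagonal with the maximum norm of
    each column of X (the scaling described in the context above).
    Solving the scaled system X' = X D^{-1} for theta' = D theta with an
    SVD-based minimum-norm solver, then unscaling, achieves this."""
    d = np.max(np.abs(X), axis=0)
    d[d == 0] = 1.0                     # guard against all-zero columns
    Xs = X / d                          # X' = X D^{-1}
    theta_scaled, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return theta_scaled / d             # theta = D^{-1} theta'

# Rank-deficient example: the last two columns are identical, so the
# minimum-norm solution splits their coefficient equally between them.
X = np.array([[1.0, 2.0, 2.0],
              [3.0, 4.0, 4.0],
              [5.0, 6.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])
theta = scaled_min_norm_lstsq(X, y)
print(np.linalg.norm(X @ theta - y))   # residual near zero
```

The scaling matters because the coefficient matrix built from O(n²) quadratic-model terms can have columns of wildly different magnitudes; normalizing each column to unit maximum norm keeps the numerical rank determination meaningful.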