Results 1–10 of 36
The NEWUOA software for unconstrained optimization without derivatives
2004
Cited by 44 (0 self)
Abstract: The NEWUOA software seeks the least value of a function F(x), x ∈ R^n, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When Q is revised, the new Q interpolates F at m points, the value m = 2n+1 being recommended. The remaining freedom in the new Q is taken up by minimizing the Frobenius norm of the change to ∇²Q. Only one interpolation point is altered on each iteration. Thus, except for occasional origin shifts, the amount of work per iteration is only of order (m+n)², which allows n to be quite large. Many questions were addressed during the development of NEWUOA, for the achievement of good accuracy and robustness. They include the choice of the initial quadratic model, the need to maintain enough linear independence in the interpolation conditions in the presence of computer rounding errors, and the stability of the updating of certain matrices that allow the fast revision of Q. Details are given of the techniques that answer all the questions that occurred. The software was tried on several test problems. Numerical results for nine of them are reported and discussed, in order to demonstrate the performance of the software for up to 160 variables.
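The least-Frobenius-norm construction described above can be sketched directly. The snippet below solves the KKT system for a quadratic q(x) = c + gᵀx + ½xᵀHx that interpolates f at m points while minimizing ‖H‖_F. NEWUOA applies the same system to the *change* in ∇²Q between iterations and updates the inverse of the system matrix rather than re-solving it, so this dense solve (and the function name) is an illustrative sketch, not Powell's implementation:

```python
import numpy as np

def min_frobenius_quadratic(Y, f):
    """Fit q(x) = c + g.x + x'Hx/2 through (y_j, f_j) with minimal ||H||_F.

    Stationarity of the Lagrangian gives H = sum_j mu_j y_j y_j' together
    with sum_j mu_j = 0 and sum_j mu_j y_j = 0, leading to a symmetric
    (m + n + 1)-dimensional KKT system.
    """
    m, n = Y.shape
    A = 0.5 * (Y @ Y.T) ** 2                  # A_jk = (y_j . y_k)^2 / 2
    W = np.zeros((m + 1 + n, m + 1 + n))
    W[:m, :m] = A
    W[:m, m] = W[m, :m] = 1.0                 # constant-term block
    W[:m, m + 1:] = Y                         # linear-term block
    W[m + 1:, :m] = Y.T
    sol = np.linalg.solve(W, np.concatenate([f, np.zeros(1 + n)]))
    mu, c, g = sol[:m], sol[m], sol[m + 1:]
    H = (Y.T * mu) @ Y                        # H = sum_j mu_j y_j y_j'
    return c, g, H
```

With the recommended m = 2n+1 the system has size m + n + 1; Powell's updating scheme maintains its inverse in O((m+n)²) work per iteration instead of factorizing it afresh.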
Benchmarking derivative-free optimization algorithms
Cited by 37 (3 self)
We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested, even for noisy and piecewise-smooth problems.
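The data-profile idea can be computed in a few lines. For each solver s and problem p, let t(p, s) be the number of function evaluations needed before the convergence test f(x) ≤ f_L + τ(f(x₀) − f_L) holds; the data profile d_s(α) is the fraction of problems with t(p, s) ≤ α(n_p + 1), where n_p is the problem dimension. A sketch with invented solver names and counts:

```python
import numpy as np

def data_profiles(evals, dims, alphas):
    """Data profiles for budget-constrained solver comparison.

    evals[s][p]: evaluations solver s needs to pass the convergence test
    on problem p (np.inf if it never does); dims[p]: dimension n_p.
    Returns d_s(alpha): fraction of problems solved within alpha*(n_p + 1)
    evaluations, i.e. within alpha "simplex gradients".
    """
    problems = list(dims)
    profiles = {}
    for s, counts in evals.items():
        units = np.array([counts[p] / (dims[p] + 1) for p in problems])
        profiles[s] = np.array([(units <= a).mean() for a in alphas])
    return profiles
```

Plotting d_s against α then shows, for any budget measured in simplex gradients, which solver solves the largest fraction of the test set.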
Image Guided Personalization of Reaction-Diffusion Type Tumor Growth Models Using Modified Anisotropic Eikonal Equations
Cited by 14 (9 self)
Reaction-diffusion based tumor growth models have been widely used in the literature for modeling the growth of brain gliomas, and recent models have started integrating medical images in their formulation. Including different tissue types, the geometry of the brain, and the directions of white matter fiber tracts has improved the spatial accuracy of reaction-diffusion models. The adaptation of the general model to specific patient cases, on the other hand, has not yet been studied thoroughly. In this work we address this adaptation. We propose a parameter estimation method for reaction-diffusion tumor growth models using time series of medical images, which estimates the patient-specific parameters of the model from images of the patient taken at successive time instances. The proposed method formulates the evolution of the tumor delineation visible in the images based on the reaction-diffusion dynamics; it therefore remains consistent with the information available. We perform a thorough analysis of the method using synthetic tumors and show important couplings between parameters of the reaction-diffusion model. We show that several parameters can be uniquely identified once one parameter, namely the proliferation rate of tumor cells, is fixed. Moreover, regardless of the value at which the proliferation rate is fixed, the speed of growth of the tumor can be estimated accurately in terms of the model parameters. We also show that using the model-based speed we can simulate the evolution of the tumor for the specific patient case. Finally we apply our method to two real cases and show promising preliminary results.
A derivative-free algorithm based on radial basis functions
International Journal of Modelling and Simulation, 2008
Cited by 9 (1 self)
Derivative-free optimization involves the methods used to minimize an expensive objective function when its derivatives are not available. We present here a trust-region algorithm based on Radial Basis Functions (RBFs). The main originality of our approach is the use of RBFs to build the trust-region models, together with a management of the interpolation points based on Newton fundamental polynomials. Moreover, the complexity of our method is very attractive. We have tested the algorithm against the best state-of-the-art methods (UOBYQA, NEWUOA, DFO). The tests on problems from the CUTEr collection show that BOOSTERS performs very well on medium-size problems. Moreover, it is able to solve problems of dimension 200, which is considered very large in derivative-free optimization.
NONCONVEX SEMILINEAR PROBLEMS AND CANONICAL DUALITY SOLUTIONS
Cited by 8 (7 self)
This paper presents a brief review and some new developments of the canonical duality theory, with applications to a class of variational problems in nonconvex mechanics and global optimization. These nonconvex problems are directly related to a large class of semilinear partial differential equations in mathematical physics, including phase transitions, post-buckling of large deformed beam models, chaotic dynamics, nonlinear field theory, and superconductivity. Numerical discretizations of these equations lead to a class of very difficult global minimization problems in finite-dimensional space. It is shown that by use of the canonical dual transformation, these nonconvex constrained primal problems can be converted into certain very simple canonical dual problems. The criticality condition leads to dual algebraic equations which can be solved completely. Therefore, a complete set of solutions to these very difficult primal problems can be obtained. The extremality of these solutions is controlled by the so-called triality theory. Several examples are illustrated, including nonconvex constrained quadratic programming. Results show that these very difficult primal problems can be converted into certain simple canonical (either convex or concave) dual problems, which can be solved completely. Also, some very interesting new phenomena, i.e. trio-chaos and meta-chaos, are discovered in the post-buckling of nonconvex systems. The author believes that these important phenomena exist in many nonconvex dynamical systems and deserve a detailed study.
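A standard one-variable illustration of the canonical dual transformation (my own example, not one taken from this paper) is the double-well potential P(x) = ½a(½x² − λ)² − fx. Introducing the dual variable ς = a(½x² − λ) turns the criticality condition P′(x) = 0 into the algebraic cubic 2ς³ + 2aλς² − af² = 0; each real root recovers a critical point x = f/ς, and the triality theory identifies the root with ς > 0 as the global minimizer:

```python
import numpy as np

a, lam, fext = 1.0, 2.0, 0.5            # stiffness, well parameter, load

def P(x):                                # nonconvex primal energy
    return 0.5 * a * (0.5 * x**2 - lam)**2 - fext * x

def dP(x):                               # criticality condition P'(x) = 0
    return a * x * (0.5 * x**2 - lam) - fext

# Canonical dual algebraic equation 2*s^3 + 2*a*lam*s^2 - a*fext^2 = 0
# in s = a*(x^2/2 - lam); every real root yields a critical point x = fext/s.
s = np.roots([2.0, 2.0 * a * lam, 0.0, -a * fext**2])
s = s[np.abs(s.imag) < 1e-9].real
crit = fext / s                          # complete set of critical points
s_global = s[np.argmin(P(crit))]         # triality: global min has s > 0
```

Solving one cubic thus delivers every critical point of the nonconvex primal problem at once, which is the "solved completely" claim of the abstract in miniature.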
Variable-number sample-path optimization
Cited by 8 (1 self)
The sample-path method is one of the most important tools in simulation-based optimization. The basic idea of the method is to approximate the expected simulation output by the average of sample observations with a common random number sequence. In this paper, we describe a new variant of Powell's UOBYQA (Unconstrained Optimization BY Quadratic Approximation) method, which integrates a Bayesian Variable-Number Sample-Path (VNSP) scheme to choose an appropriate number of samples at each iteration. The statistically accurate scheme determines the number of simulation runs and guarantees the global convergence of the algorithm. The VNSP scheme saves a significant number of simulation runs compared to general-purpose 'fixed-number' sample-path methods. We present numerical results based on the new algorithm.
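The two ingredients above, a common-random-number sample path and a variable number of samples, can be sketched as follows. The Bayesian VNSP test itself is not reproduced; the stopping rule below is a plain standard-error stand-in, and the noisy objective is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(20_000)        # common random numbers, drawn once

def noisy_f(x, n):
    """First n realizations of an invented noisy objective
    f(x, xi) = (x - 1)^2 + x*xi, so that E[f] = (x - 1)^2."""
    return (x - 1.0)**2 + x * xi[:n]

def sample_path_mean(x, n):
    """Sample-path approximation of E[f]: deterministic in x because the
    same random number stream xi is reused for every x."""
    return noisy_f(x, n).mean()

def variable_n(x, n0=50, tol=0.05, nmax=20_000):
    """Double the sample size until the standard error of the mean drops
    below tol -- a simple frequentist stand-in for the Bayesian VNSP test."""
    n = n0
    while n < nmax:
        vals = noisy_f(x, n)
        if vals.std(ddof=1) / np.sqrt(n) <= tol:
            break
        n *= 2
    return n
```

Because the averaged function is deterministic for a fixed stream, an ordinary deterministic solver such as UOBYQA can be applied to it; the variable-number rule then spends large sample sizes only where the noise actually demands them.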
Adaptation of the UOBYQA algorithm for noisy functions
 In Proceedings of the 2006 Winter Simulation Conference
Cited by 7 (3 self)
In many real-world optimization problems, the objective function may come from a simulation evaluation, so that it is (a) subject to various levels of noise, (b) not differentiable, and (c) computationally hard to evaluate. In this paper, we modify Powell's UOBYQA algorithm to handle those real-world simulation problems. Our modifications apply Bayesian techniques to guide appropriate sampling strategies to estimate the objective function. We aim to make the underlying UOBYQA algorithm proceed efficiently while simultaneously controlling the amount of computational effort.
MNH: A Derivative-Free Optimization Algorithm Using Minimal Norm Hessians
2008
Cited by 7 (0 self)
We introduce MNH, a new algorithm for unconstrained optimization when derivatives are unavailable, primarily targeting applications that require running computationally expensive deterministic simulations. MNH relies on a trust-region framework with an underdetermined quadratic model that interpolates the function at a set of data points. We show how to construct this interpolation set to yield computationally stable parameters for the model and, in doing so, obtain an algorithm which converges to first-order critical points. Preliminary results are encouraging and show that MNH makes effective use of the points evaluated in the course of the optimization.
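Keeping the model parameters computationally stable comes down to keeping the interpolation set well poised. One simple safeguard of this flavor, greedily accepting previously evaluated points only while the matrix of scaled displacements stays well conditioned for linear interpolation, can be sketched as follows (an illustrative test, not MNH's actual procedure; the function name and threshold are my own):

```python
import numpy as np

def poised_subset(X, x0, delta, theta=1e-3):
    """Greedily select rows of X whose scaled displacements (x - x0)/delta
    form a well-conditioned basis for linear interpolation: a candidate is
    accepted only if the smallest singular value of the direction matrix
    stays above theta."""
    chosen, dirs = [], []
    for i, xi in enumerate(X):
        d = (np.asarray(xi) - x0) / delta
        trial = np.array(dirs + [d])
        if np.linalg.svd(trial, compute_uv=False).min() > theta:
            chosen.append(i)
            dirs.append(d)
        if len(dirs) == len(x0):        # enough directions for a linear model
            break
    return chosen
```

Points rejected by such a test would otherwise make the interpolation system nearly singular, which is exactly the instability the abstract's set-construction procedure is designed to avoid.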
ORBIT: Optimization by radial basis function interpolation in trust-regions
SIAM Journal on Scientific Computing, 2008
Cited by 6 (1 self)
Abstract. We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also very encouraging and support ORBIT's effectiveness on black-box functions for which no special mathematical structure is known or available.
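An interpolating RBF model of the kind described above can be built by solving the standard augmented system: for the cubic RBF φ(r) = r³, which is conditionally positive definite of order 2, a linear polynomial tail plus the side conditions Πᵀλ = 0 makes the system uniquely solvable whenever the points are distinct and poised for linear interpolation. A dense-solve sketch (ORBIT's actual point management, RBF choices, and scaling are not reproduced):

```python
import numpy as np

def cubic_rbf_model(X, f):
    """Interpolant s(x) = sum_i lam_i * ||x - x_i||^3 + c0 + c.x.

    Solves [[Phi, Pi], [Pi', 0]] [lam; c] = [f; 0], where
    Phi_ij = ||x_i - x_j||^3 and Pi has rows [1, x_i'].
    """
    m, n = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    Pi = np.hstack([np.ones((m, 1)), X])
    W = np.block([[Phi, Pi], [Pi.T, np.zeros((n + 1, n + 1))]])
    sol = np.linalg.solve(W, np.concatenate([f, np.zeros(n + 1)]))
    lam, c = sol[:m], sol[m:]

    def s(x):
        r = np.linalg.norm(X - x, axis=1)
        return lam @ r**3 + c[0] + c[1:] @ x
    return s
```

A useful consequence of the tail and side conditions is that linear functions are reproduced exactly (the unique solution then has λ = 0), which is what underwrites the linear-interpolation approximation guarantees mentioned in the abstract.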