Results 1–10 of 18
Convergence analysis of pseudotransient continuation
SIAM J. Numer. Anal., 1998
Cited by 61 (25 self)
Abstract. Pseudotransient continuation (Ψtc) is a well-known and physically motivated technique for the computation of steady-state solutions of time-dependent partial differential equations. Standard globalization strategies such as line-search or trust-region methods often stagnate at local minima. Ψtc succeeds in many of these cases by taking advantage of the underlying PDE structure of the problem. Though widely employed, the convergence of Ψtc is rarely discussed. In this paper we prove convergence for a generic form of Ψtc and illustrate it with two practical strategies.
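As a concrete illustration of the idea (a minimal sketch, not the paper's implementation): each step solves (I/δ + F′(u)) s = −F(u) and grows the pseudo-timestep δ with the switched evolution relaxation (SER) rule, so the iteration morphs from implicit Euler time stepping into Newton's method as the residual decays. The test system F(u) = u³ − 1 is a made-up example.

```python
import numpy as np

def psi_tc(F, J, u0, delta0=0.1, tol=1e-10, max_iter=200):
    """Pseudotransient continuation (Psi-tc) sketch for F(u) = 0.

    Each step solves (I/delta + J(u)) s = -F(u).  The pseudo-timestep
    delta is grown by the SER rule delta *= ||F(u_old)|| / ||F(u_new)||.
    """
    u = np.asarray(u0, dtype=float)
    delta = delta0
    fnorm = np.linalg.norm(F(u))
    for _ in range(max_iter):
        if fnorm < tol:
            break
        s = np.linalg.solve(np.eye(u.size) / delta + J(u), -F(u))
        u = u + s
        fnorm_new = np.linalg.norm(F(u))
        delta *= fnorm / max(fnorm_new, 1e-300)   # SER update
        fnorm = fnorm_new
    return u

# Hypothetical test problem: steady state of u' = -(u**3 - 1), componentwise.
F = lambda u: u**3 - 1.0
J = lambda u: np.diag(3.0 * u**2)
u_star = psi_tc(F, J, np.array([0.2, 3.0]))   # converges to [1, 1]
```

For small δ the step is a damped, nearly explicit move along the time-dependent flow; once the residual is small, I/δ is negligible and the step is essentially a Newton step.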
An Implicit Filtering Algorithm For Optimization Of Functions With Many Local Minima
SIAM J. Optim., 1995
Cited by 53 (16 self)
In this paper we describe and analyze an algorithm for certain box-constrained optimization problems that may have several local minima. A paradigm for these problems is one in which the function to be minimized is the sum of a simple function, such as a convex quadratic, and high-frequency, low-amplitude terms which cause local minima away from the global minimum of the simple function. Our method is gradient based, and therefore its performance can be improved by the use of quasi-Newton methods.
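A minimal sketch of such an algorithm, assuming a projected steepest-descent inner loop with central-difference gradients and a fixed schedule of stencil scales (all parameter values and the test function below are illustrative, not the authors'):

```python
import numpy as np

def diff_grad(f, x, h):
    """Central-difference gradient on the stencil x +/- h * e_i."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def implicit_filtering(f, x0, lo, hi,
                       scales=(0.5, 0.25, 0.125, 0.0625, 0.03125),
                       max_inner=50):
    """Implicit-filtering sketch: projected steepest descent with a
    difference gradient whose stencil size h is reduced whenever the line
    search fails or the difference gradient falls to the O(h) level.
    (No quasi-Newton acceleration; parameters are illustrative.)"""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for h in scales:
        for _ in range(max_inner):
            g = diff_grad(f, x, h)
            if np.linalg.norm(g) <= h:        # gradient at stencil scale: shrink h
                break
            t, ok = 1.0, False
            while t > 1e-10:                  # Armijo backtracking, projected step
                xt = np.clip(x - t * g, lo, hi)
                if f(xt) <= f(x) - 1e-4 * np.dot(g, x - xt):
                    x, ok = xt, True
                    break
                t *= 0.5
            if not ok:                        # line search failed: shrink h
                break
    return x

# Made-up paradigm problem: convex quadratic plus high-frequency,
# low-amplitude noise that creates spurious local minima.
f_noisy = lambda x: np.dot(x, x) + 0.01 * np.sum(np.cos(40.0 * x))
x_min = implicit_filtering(f_noisy, np.array([1.7, -1.3]), -2.0, 2.0)
```

The large stencil averages out the high-frequency term, so early iterations see essentially the gradient of the underlying quadratic and are not trapped by the spurious minima.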
Detection And Remediation Of Stagnation In The Nelder-Mead Algorithm Using A Sufficient Decrease Condition
SIAM J. Optim., 1997
Cited by 31 (1 self)
The Nelder-Mead algorithm can stagnate and converge to a nonoptimal point, even for very simple problems. In this note we propose a test for sufficient decrease which, if passed for the entire iteration, will guarantee convergence of the Nelder-Mead iteration to a stationary point if the objective function is smooth. Failure of this condition is an indicator of potential stagnation. As a remedy we propose a new step, which we call an oriented restart, which reinitializes the simplex to a smaller one with orthogonal edges containing an approximate steepest-descent step from the current best point. We also give results that apply when the objective function is a low-amplitude perturbation of a smooth function. We illustrate our results with some numerical examples.
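The ingredients named in the abstract can be sketched as follows; the sufficient-decrease constant `alpha` and the restart details are our guesses from the abstract, not the paper's exact formulation:

```python
import numpy as np

def simplex_gradient(V, fv):
    """Least-squares simplex gradient: row j of E is the edge x_j - x_0."""
    E = V[1:] - V[0]
    d = fv[1:] - fv[0]
    g, *_ = np.linalg.lstsq(E, d, rcond=None)
    return g

def sufficient_decrease(avg_old, avg_new, g, alpha=1e-4):
    """Accept the iteration only if the average vertex value drops by at
    least alpha * ||simplex gradient||^2 (alpha is a hypothetical choice)."""
    return avg_new - avg_old < -alpha * np.dot(g, g)

def oriented_restart(V, fv, g, shrink=0.5):
    """Reinitialize to a smaller simplex with orthogonal edges, stepping
    against the sign of the simplex gradient from the best vertex."""
    best = V[np.argmin(fv)]
    h = shrink * np.min(np.linalg.norm(V[1:] - V[0], axis=1))
    signs = np.where(g > 0.0, -1.0, 1.0)
    edges = h * signs[:, None] * np.eye(best.size)
    return np.vstack([best, best + edges])

# Tiny made-up example on f(x) = x . x:
V = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
fv = np.array([np.dot(v, v) for v in V])
g = simplex_gradient(V, fv)            # close to the true gradient [2, 2]
R = oriented_restart(V, fv, g)
```

When the decrease test fails, the restart replaces the (possibly degenerate) simplex with a small orthogonal one whose first edge points roughly downhill.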
Superlinear Convergence And Implicit Filtering
1999
Cited by 22 (3 self)
In this note we show how the implicit filtering algorithm can be coupled with the BFGS quasi-Newton update to obtain a superlinearly convergent iteration if the noise in the objective function decays sufficiently rapidly as the optimal point is approached. We show how known theory for the noise-free case can be extended, thereby providing a partial explanation for the good performance of quasi-Newton methods when coupled with implicit filtering.
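The BFGS update referred to here is the standard inverse-Hessian formula; a generic sketch (not the paper's code), with the curvature safeguard that becomes important when noise corrupts the difference gradients:

```python
import numpy as np

def bfgs_update(H, s, y):
    """Inverse-Hessian BFGS update H+ = (I - rho s y^T) H (I - rho y s^T)
    + rho s s^T, with rho = 1 / (y^T s).  The update is skipped when the
    curvature condition y^T s > 0 fails, as can happen when noise corrupts
    the difference gradients used to form y = g_new - g_old."""
    sy = float(np.dot(s, y))
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return H                                  # skip: no usable curvature
    rho = 1.0 / sy
    I = np.eye(s.size)
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# The secant property H+ y = s holds by construction:
H1 = bfgs_update(np.eye(2), np.array([1.0, 1.0]), np.array([2.0, 4.0]))
```

The secant condition is what drives superlinear convergence in the noise-free theory; the skipping rule keeps H positive definite when the noisy data would violate it.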
The Simplex Gradient and Noisy Optimization Problems
in Computational Methods in Optimal Design and Control, 1998
Cited by 19 (4 self)
In this paper we consider objective functions that are perturbations of simple, smooth functions. The surface on the left in Figure 1, taken from [24], and the graph on the right illustrate this type of problem.

[Figure 1: Optimization Landscapes]

The perturbations may result from discontinuities or nonsmooth effects in the underlying models, randomness in the function evaluation, or experimental or measurement errors. Conventional gradient-based methods will be trapped in local minima even if the noise is smooth. Many classes of methods for noisy optimization problems are based on function information computed on sequences of simplices. The Nelder-Mead [18], multidirectional search [8], [21], and implicit filtering [12] methods are three examples. The performance of such methods can be explained in terms of the difference approximation of the gradient that is implicit in the function evaluations they perform.
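A minimal sketch of the simplex gradient as a difference approximation (the objective and simplex below are made-up illustrations):

```python
import numpy as np

def simplex_gradient(vertices, fvals):
    """Simplex gradient: solve E g = df, where row j of E is the edge
    x_j - x_0 and df_j = f(x_j) - f(x_0)."""
    E = vertices[1:] - vertices[0]
    df = fvals[1:] - fvals[0]
    return np.linalg.solve(E, df)

# Made-up noisy quadratic; the true gradient at [1, 1] is [2, 2].
f = lambda v: np.dot(v, v) + 0.001 * np.sin(100.0 * v[0])
S = np.array([[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]])
g = simplex_gradient(S, np.array([f(v) for v in S]))
# first-order accurate: off by O(edge length) plus noise/(edge length)
```

The trade-off visible here is the one the methods exploit: long edges filter the noise but bias the gradient, short edges are accurate for the smooth part but amplify the noise.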
An Interface Between Optimization and Application for the Numerical Solution of Optimal Control Problems
ACM Transactions on Mathematical Software, 1998
Cited by 13 (7 self)
This paper is concerned with the implementation of optimization algorithms for the solution of smooth discretized optimal control problems. The problems under consideration can be written as min f(y; u)
Shape Optimization in Steady Blood Flow: A Numerical Study of Non-Newtonian Effects
Computer Methods in Biomechanics and Biomedical Engineering, Vol. 8, 2005
Cited by 9 (4 self)
We investigate the influence of the fluid constitutive model on the outcome of shape optimization tasks, motivated by optimal design problems in biomedical engineering. Our computations are based on the Navier-Stokes equations generalized to a non-Newtonian fluid, with the modified Cross model employed to account for the shear-thinning behavior of blood. The generalized Newtonian treatment exhibits striking differences in the velocity field for smaller shear rates. We apply a sensitivity-based optimization procedure to a flow through an idealized arterial graft. For this problem we study the influence of the inflow velocity, and thus the shear rate. Furthermore, we introduce an additional factor in the form of a geometric parameter, and study its effect on the optimal shape obtained.
Optimal Control of Unsteady Compressible Viscous Flows
Int. J. Numer. Meth. Fluids, 2002
Cited by 7 (3 self)
This paper presents the formulation and numerical solution of a class of optimal boundary control problems governed by the unsteady two-dimensional compressible Navier-Stokes equations. Fundamental issues, including the choice of the control space and the associated regularization term in the objective function, as well as issues in the gradient computation via the adjoint equation method, are discussed. Numerical results are presented for a model problem consisting of two counter-rotating viscous vortices above an infinite wall which, due to the self-induced velocity field, propagate downward and interact with the wall. The wall boundary control is the temporal and spatial distribution of wall-normal velocity. Optimal controls for objective functions that target kinetic energy, heat transfer, and wall shear stress are presented, along with the influence of control regularization for each case.
Numerical Studies of Shape Optimization Problems in Elasticity using . . .
2001
Cited by 6 (0 self)
In this paper, the knowledge of its normal derivative suffices to evaluate the data arising from the torsional rigidity. Invoking a Newton potential, the normal derivative can be represented by a Dirichlet-to-Neumann map based on boundary integral operators, namely the single-layer operator and the double-layer operator. The application of boundary elements for the discretization requires only a partition of the boundary; therefore, we do not need a triangulation of the domain, as for finite elements. In general, boundary element methods suffer from a major disadvantage: the corresponding system matrices are densely populated, so the complexity of solving such equations grows at least quadratically with the number of equations. This fact seriously restricts the maximal size of the linear systems. Modern methods for the fast solution of BEM reduce the complexity to a suboptimal or even an optimal, that is, linear, rate. Prominent examples of such methods are the fast multipole method by Greengard and Rokhlin [18] and the panel clustering by Hackbusch and Nowak [20]. As first observed by Beylkin, Coifman and Rokhlin [4], the wavelet Galerkin scheme offers another tool for the fast solution of integral equations. In fact, a Galerkin discretization based on wavelet bases results in numerically sparse matrices, i.e., many matrix entries are negligible and can be treated as zero. Discarding these non-relevant matrix entries is called matrix compression. In accordance with Dahmen et al. [7, 10, 9, 28], this can be performed without compromising the accuracy of the underlying Galerkin scheme. As shown by Dahmen, Harbrecht and Schneider in [7, 23, 28], the wavelet Galerkin scheme has an optimal overall complexity. The paper is organized as follows. Section 1 is...
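The matrix-compression idea can be demonstrated with a toy surrogate: hard-threshold a matrix whose entries decay away from the diagonal (as wavelet-discretized kernels do) and check that the matrix-vector product is barely affected. The decay law below is illustrative, not an actual wavelet discretization:

```python
import numpy as np

def compress(A, eps):
    """Hard thresholding: zero every entry with |a_ij| < eps."""
    return np.where(np.abs(A) >= eps, A, 0.0)

# Surrogate for a wavelet-discretized boundary integral operator: entries
# decay rapidly away from the diagonal, so the matrix is numerically sparse.
n = 200
i, j = np.indices((n, n))
A = 1.0 / (1.0 + np.abs(i - j)) ** 3
B = compress(A, 1e-4)

x = np.ones(n)
rel_err = np.linalg.norm(A @ x - B @ x) / np.linalg.norm(A @ x)
kept = np.count_nonzero(B) / n**2      # fraction of entries retained
```

Only a narrow band survives the threshold, yet the matrix-vector product changes by well under a percent; this is the mechanism that lets compressed Galerkin matrices be stored and applied in near-linear work.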
Termination Of Newton/Chord Iterations And The Method Of Lines
North Carolina State University, Center for …, 1997
Cited by 6 (3 self)
Many ordinary differential equation and differential-algebraic equation codes terminate the nonlinear iteration for the corrector equation when the difference between successive iterates (the step) is sufficiently small. This termination criterion avoids the expense of evaluating the nonlinear residual at the final iterate. Similarly, Jacobian information is not usually computed at every time step, but only when certain tests indicate that the cost of a new Jacobian is justified by the improved performance of the nonlinear iteration. In this paper, we show how an out-of-date Jacobian coupled with moderate ill-conditioning can lead to premature termination of the corrector iteration, and we suggest ways in which this situation can be detected and remedied. As an example, we consider the method-of-lines solution of Richards' equation, which models flow through variably saturated porous media. When the solution to this problem has a sharp moving front, and the Jacobian is even slightly ill...
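The failure mode can be reproduced with a toy chord iteration: on a scalar linear model with a stale Jacobian, each step is (1 − ρ) times the current error for a contraction rate ρ close to 1, so a step-based test fires long before the error is small. All numbers below are illustrative:

```python
import numpy as np

def chord_solve(F, J0_inv, x0, step_tol=1e-6, max_iter=2000):
    """Chord iteration x+ = x - J0^{-1} F(x) with the common step-based
    termination test ||x+ - x|| < step_tol (no residual check)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -J0_inv @ F(x)
        x = x + s
        if np.linalg.norm(s) < step_tol:
            break
    return x

# Scalar linear model: F(x) = x, exact root 0, but the stored "Jacobian"
# is the stale value 100.  The iterate contracts only by rho = 0.99 per
# step, and each step is (1 - rho) times the error, so the step test
# fires while the error is still about 100 * step_tol.
F = lambda x: x
x_final = chord_solve(F, np.array([[1.0 / 100.0]]), np.array([1.0]))
```

The returned iterate is roughly 1e-4, two orders of magnitude larger than the step tolerance, which is exactly the premature-termination effect the abstract describes; checking the residual F(x) at the final iterate would expose it.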