Results 1–10 of 96
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
SIAM Review, Vol. 45, No. 3, pp. 385–482, 2003
Cited by 222 (15 self)
Abstract:
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
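As a concrete illustration of the class of methods this survey covers, the following is a minimal sketch of compass (coordinate) search, one classical direct search method: poll the objective along each coordinate direction, accept any improving point, and shrink the step when no poll improves. The quadratic objective and all names below are hypothetical examples, not code from the paper.

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f by polling along coordinate directions; no derivatives used."""
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:            # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # shrink the pattern when no poll improves
    return x, fx

# Usage: minimize a smooth quadratic whose minimizer is (1, -2).
xmin, fmin = compass_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2,
                            [0.0, 0.0])
```

Note that only function values are compared; this is what makes such methods usable when derivatives are unavailable or untrustworthy.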
Jacobian-free Newton-Krylov methods: a survey of approaches and applications
J. Comput. Phys.
Cited by 194 (6 self)
Abstract:
Jacobian-free Newton-Krylov (JFNK) methods are synergistic combinations of Newton-type methods for superlinearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations. The link between the two methods is the Jacobian-vector product, which may be probed approximately without forming and storing the elements of the true Jacobian, through a variety of means. Various approximations to the Jacobian matrix may still be required for preconditioning the resulting Krylov iteration. As with Krylov methods for linear problems, successful application of the JFNK method to any given problem is dependent on adequate preconditioning. JFNK has potential for application throughout problems governed by nonlinear partial differential equations and integro-differential equations. In this survey article we place JFNK in context with other nonlinear solution algorithms for both boundary value problems (BVPs) and initial value problems (IVPs). We provide an overview of the mechanics of JFNK and attempt to illustrate the wide variety of preconditioning options available. It is emphasized that JFNK can be wrapped (as an accelerator) around another nonlinear fixed-point method (interpreted as a preconditioning process, potentially with significant code reuse). The aim of this article is not to trace fully the evolution of JFNK, nor to provide proofs of accuracy or optimal convergence for all of the constituent methods, but rather to present the reader with a perspective on how JFNK may be applicable to problems of physical interest and to provide sources of further practical information. A review paper solicited by the Editor-in-Chief of the Journal of Computational Physics.
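The mechanism that makes JFNK "Jacobian-free" is the finite-difference probe of the Jacobian-vector product, J(u)v ≈ (F(u + εv) − F(u))/ε. A minimal sketch of that probe follows; the perturbation-size heuristic and the test function F are illustrative assumptions, not details taken from the survey.

```python
import math

def jacobian_vector_product(F, u, v, eps=None):
    """Approximate J(u) @ v as (F(u + eps*v) - F(u)) / eps, without forming J."""
    if eps is None:
        # One common heuristic: scale the perturbation to the iterate and direction.
        norm_v = math.sqrt(sum(vi * vi for vi in v)) or 1.0
        norm_u = math.sqrt(sum(ui * ui for ui in u))
        eps = math.sqrt(1e-16) * (1.0 + norm_u) / norm_v
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - f0) / eps for fp, f0 in zip(Fp, Fu)]

# Usage: F(u) = (u0^2 - u1, u0 + u1) has Jacobian [[2*u0, -1], [1, 1]],
# so probing with v = (1, 0) at u = (3, 1) recovers the first column, (6, 1).
F = lambda u: [u[0]**2 - u[1], u[0] + u[1]]
Jv = jacobian_vector_product(F, [3.0, 1.0], [1.0, 0.0])
```

Inside a Krylov solver such as GMRES, only products like this are needed per iteration, which is why the true Jacobian never has to be assembled.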
Tetrahedral Mesh Improvement Using Swapping and Smoothing
International Journal for Numerical Methods in Engineering, 1997
Cited by 110 (12 self)
Abstract:
Automatic mesh generation and adaptive refinement methods for complex three-dimensional domains have proven to be very successful tools for the efficient solution of complex application problems. These methods can, however, produce poorly shaped elements that cause the numerical solution to be less accurate and more difficult to compute. Fortunately, the shape of the elements can be improved through several mechanisms, including face- and edge-swapping techniques, which change local connectivity, and optimization-based mesh smoothing methods, which adjust mesh point location. We consider several criteria for each of these two methods and compare the quality of several meshes obtained by using different combinations of swapping and smoothing. Computational experiments show that swapping is critical to the improvement of general mesh quality and that optimization-based smoothing is highly effective in eliminating very small and very large angles. High-quality meshes are obtained in a computationally efficient manner by using optimization-based smoothing to improve only the worst elements and a smart variant of Laplacian smoothing on the remaining elements. Based on our experiments, we offer several recommendations for the improvement of tetrahedral meshes.
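For the smoothing half of that pipeline, the following is a minimal sketch of plain Laplacian smoothing, which relaxes each free vertex toward the centroid of its neighbors; the paper's "smart" variant additionally accepts a move only when local element quality improves. The 2D mesh data below is a hypothetical example.

```python
def laplacian_smooth(points, neighbors, free, sweeps=10):
    """Relax each free vertex toward the average of its neighbor positions."""
    pts = {k: tuple(v) for k, v in points.items()}
    for _ in range(sweeps):
        for v in free:
            nbrs = neighbors[v]
            # New position: component-wise mean of the neighboring vertices.
            pts[v] = tuple(sum(pts[n][d] for n in nbrs) / len(nbrs)
                           for d in range(len(pts[v])))
    return pts

# Usage: four fixed corner vertices and one badly placed free interior vertex.
points = {"a": (0, 0), "b": (2, 0), "c": (2, 2), "d": (0, 2), "m": (1.7, 0.1)}
neighbors = {"m": ["a", "b", "c", "d"]}
smoothed = laplacian_smooth(points, neighbors, free=["m"])
# The interior vertex moves to the centroid of its neighbors.
```

Plain Laplacian smoothing is cheap but can invert elements near concave boundaries, which is exactly why the paper reserves optimization-based smoothing for the worst elements.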
Automated Test Data Generation Using An Iterative Relaxation Method
In SIGSOFT '98/FSE-6: Proceedings of the 6th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 1998
Cited by 97 (6 self)
Abstract:
An important problem that arises in path-oriented testing is the generation of test data that causes a program to follow a given path. In this paper, we present a novel program-execution-based approach using an iterative relaxation method to address the above problem. In this method, test data generation is initiated with an arbitrarily chosen input from a given domain. This input is then iteratively refined to obtain an input on which all the branch predicates on the given path evaluate to the desired outcome. In each iteration the program statements relevant to the evaluation of each branch predicate on the path are executed, and a set of linear constraints is derived. The constraints are then solved to obtain the increments for the input. These increments are added to the current input to obtain the input for the next iteration. The relaxation technique used in deriving the constraints provides feedback on the amount by which each input variable should be adjusted for the branches o...
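The refinement loop described above can be sketched for a toy path whose branch predicates are linear equalities in two inputs: linearize each predicate around the current input (here by forward differences, one program-execution-based option), solve the linear system for the increments, and add them to the input. The predicates and the 2×2 solve below are illustrative assumptions, not the paper's implementation.

```python
def relax(predicates, x, iters=25, h=1e-6):
    """Iteratively refine input x until every predicate residual is ~0."""
    x = list(x)
    for _ in range(iters):
        r = [p(x) for p in predicates]
        if all(abs(ri) < 1e-9 for ri in r):
            break
        # Linearize each predicate w.r.t. each input by a forward difference.
        s = [[(p([xj + (h if k == j else 0.0) for k, xj in enumerate(x)]) - ri) / h
              for j in range(2)]
             for p, ri in zip(predicates, r)]
        # Solve s @ dx = -r for the increments (2x2 case, Cramer's rule).
        (a, b), (c, d) = s
        det = a * d - b * c
        x[0] += (-r[0] * d + b * r[1]) / det
        x[1] += (-a * r[1] + c * r[0]) / det
    return x

# Usage: the target path requires branch conditions x + y == 10 and x - y == 4,
# so the refined input should converge to (7, 3) from an arbitrary start.
path_predicates = [lambda v: v[0] + v[1] - 10.0,
                   lambda v: v[0] - v[1] - 4.0]
inp = relax(path_predicates, [0.0, 0.0])
```

Because the predicates in this toy are exactly linear, one relaxation step essentially solves the system; nonlinear predicates are what make the iteration genuinely iterative.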
The complex-step derivative approximation
 ACM Transactions on Mathematical Software
Cited by 68 (15 self)
Abstract:
The complex-step derivative approximation and its application to numerical algorithms are presented. Improvements to the basic method are suggested that further increase its accuracy and robustness and unveil the connection to algorithmic differentiation theory. A general procedure for the implementation of the complex-step method is described in detail and a script is developed that automates its implementation. Automatic implementations of the complex-step method for Fortran and C/C++ are presented and compared to existing algorithmic differentiation tools. The complex-step method is tested in two large multidisciplinary solvers and the resulting sensitivities are compared to results given by finite differences. The resulting sensitivities are shown to be as accurate as the analyses. Accuracy, robustness, ease of implementation, and maintainability make these complex-step derivative approximation tools very attractive options for sensitivity analysis.
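The approximation itself is f'(x) ≈ Im f(x + ih)/h for real-analytic f: unlike a finite difference, it involves no subtraction and hence no cancellation error, so the step h can be taken extremely small. A minimal sketch (the test function is a hypothetical example):

```python
import cmath

def complex_step_derivative(f, x, h=1e-200):
    """First derivative of a real-analytic f via a single complex evaluation."""
    # Im f(x + ih) = h*f'(x) + O(h^3); no subtractive cancellation occurs.
    return f(complex(x, h)).imag / h

# Usage: d/dx [e^x * sin x] at x = 1 equals e * (sin 1 + cos 1) ~ 3.75605.
d = complex_step_derivative(lambda z: cmath.exp(z) * cmath.sin(z), 1.0)
```

A step of 1e-200 would be hopeless for finite differences, but here it simply makes the truncation error negligible, which is the property the paper exploits.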
What color is your Jacobian? Graph coloring for computing derivatives
SIAM Review, 2005
Cited by 59 (11 self)
Abstract:
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
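The structural-orthogonality idea can be sketched with a greedy grouping of Jacobian columns: two columns may share a group (and therefore a single finite-difference evaluation) only if no row has a nonzero in both. The greedy column order and the sparsity pattern below are illustrative assumptions, not the article's algorithms.

```python
def partition_columns(pattern, ncols):
    """pattern: set of (row, col) nonzero positions. Returns col -> group id."""
    rows_of = {j: {i for (i, c) in pattern if c == j} for j in range(ncols)}
    group = {}
    for j in range(ncols):                      # greedy, in column order
        # A column conflicts with another if they share any nonzero row
        # (equivalently, they are within distance 2 in the column graph).
        used = {group[k] for k in group if rows_of[j] & rows_of[k]}
        g = 0
        while g in used:
            g += 1
        group[j] = g
    return group

# Usage: a 3x4 arrowhead-like pattern. Column 0 intersects every row, but
# columns 1, 2, 3 share no rows pairwise, so they fit in one common group:
# two function evaluations suffice instead of four.
pattern = {(0, 0), (0, 1), (1, 0), (1, 2), (2, 0), (2, 3)}
groups = partition_columns(pattern, 4)
```

Each group's columns can then be perturbed simultaneously, and the resulting difference quotient is unambiguously attributable row by row.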
NEOS and CONDOR: Solving Optimization Problems over the Internet
ACM Transactions on Mathematical Software, 1998
Cited by 44 (1 self)
Abstract:
We discuss the use of Condor, a distributed resource management system, as a provider of computational resources for NEOS, an environment for solving optimization problems over the Internet. We also describe how problems are submitted and processed by NEOS, and then scheduled and solved by Condor on available (idle) workstations.
Pseudo-transient continuation and differential-algebraic equations
SIAM J. Sci. Comp., 2003
Cited by 31 (10 self)
Abstract. Pseudo-transient continuation is a practical technique for globalizing the computation of steady-state solutions of nonlinear differential equations. The technique employs adaptive time-stepping to integrate an initial value problem derived from an underlying ODE or PDE boundary value problem until sufficient accuracy in the desired steady-state root is achieved to switch over to Newton's method and gain rapid asymptotic convergence. The existing theory for pseudo-transient continuation includes a global convergence result for differential equations written in semidiscretized method-of-lines form. However, many problems are better formulated, or can only sensibly be formulated, as differential-algebraic equations (DAEs). These include systems in which some of the equations represent algebraic constraints, perhaps arising from the spatial discretization of a PDE constraint. Multirate systems, in particular, are often formulated as differential-algebraic systems to suppress fast time scales (acoustics, gravity waves, Alfvén waves, near-equilibrium chemical oscillations, etc.) that are irrelevant on the dynamical time scales of interest. In this paper we present a global convergence result for pseudo-transient continuation applied to DAEs of index 1, and we illustrate it with numerical experiments on model incompressible flow and reacting flow problems, in which a constraint is employed to step over acoustic waves.
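A scalar sketch of the pseudo-transient continuation iteration: each step solves the shifted Newton system (1/Δt + F'(u))δ = −F(u), and Δt is grown as the residual falls, so the iteration behaves like implicit time integration far from the root and like Newton's method near it. The step-growth heuristic used here ("switched evolution relaxation") and the example F are illustrative assumptions, not the paper's test problems.

```python
def psi_tc(F, dF, u0, dt0=0.1, tol=1e-10, max_steps=200):
    """Find a steady state of u' = -F(u), i.e. a root of F, by Psi-tc."""
    u, dt = u0, dt0
    for _ in range(max_steps):
        res = abs(F(u))
        if res < tol:
            break
        # Backward-Euler-like linearized step: (1/dt + F'(u)) * delta = -F(u).
        u += -F(u) / (1.0 / dt + dF(u))
        # SER heuristic: enlarge dt in proportion to the residual reduction.
        dt *= res / max(abs(F(u)), 1e-300)
    return u

# Usage: steady state of u' = -(u^3 - 8), i.e. the root u = 2 of F(u) = u^3 - 8.
root = psi_tc(lambda u: u**3 - 8.0, lambda u: 3.0 * u**2, u0=0.5)
```

Small Δt keeps early steps short and stable; as Δt grows, 1/Δt vanishes and the step reduces to a plain Newton correction with its fast local convergence.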
A globally convergent linearly constrained Lagrangian method for nonlinear optimization
SIAM J. Optim., 2002
Cited by 25 (4 self)
Abstract. For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints.” Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
Multidisciplinary Design Optimization Techniques: Implications and Opportunities for Fluid Dynamics Research
Jaroslaw Sobieszczanski-Sobieski and Raphael T. Haftka, “Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments,” 34th AIAA Aerospace Sciences Meeting and Exhibit, 1999
Cited by 24 (0 self)
Abstract:
A challenge for the fluid dynamics community is to adapt to and exploit the trend towards greater multidisciplinary focus in research and technology. The past decade has witnessed substantial growth in the research field of Multidisciplinary Design Optimization (MDO). MDO is a methodology for the design of complex engineering systems and subsystems that coherently exploits the synergism of mutually interacting phenomena. As evidenced by the papers that appear in the biannual AIAA/USAF/NASA/ISSMO Symposia on Multidisciplinary Analysis and Optimization, the MDO technical community focuses on vehicle and system design issues. This paper provides an overview of the MDO technology field from a fluid dynamics perspective, giving emphasis to suggestions of specific applications of recent MDO technologies that can enhance fluid dynamics research itself across the spectrum, from basic flow physics to full configuration aerodynamics.