Results 1–10 of 120
Recipes for Adjoint Code Construction
Abstract

Cited by 153 (17 self)
"... this paper, is the Adjoint Model Compiler (AMC). ..."
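Only a fragment of this abstract survives, but the topic is adjoint code generation. As a hedged illustration of what an adjoint (reverse-mode) code for a small function looks like when written by hand (this is a sketch of the general technique, not actual AMC output):

```python
import math

# Hand-written adjoint (reverse-mode) code for a toy function
# f(x1, x2) = sin(x1 * x2) + x1. A sketch of the kind of code an
# adjoint-code generator emits, not actual AMC output.

def f_and_gradient(x1, x2):
    # forward sweep: compute and record intermediates
    v = x1 * x2
    y = math.sin(v) + x1
    # reverse sweep: propagate adjoints backwards via the chain rule
    y_bar = 1.0
    v_bar = math.cos(v) * y_bar   # d(sin v)/dv = cos v
    x1_bar = x2 * v_bar + y_bar   # x1 enters through v = x1*x2 and directly
    x2_bar = x1 * v_bar
    return y, (x1_bar, x2_bar)
```

A single reverse sweep yields the entire gradient, which is why adjoint codes are attractive when a function has many inputs and few outputs.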
ADIFOR: Generating Derivative Codes from Fortran Programs
, 1992
Abstract

Cited by 149 (55 self)
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: R^n → R^m. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical solution. ADIFOR (Automatic Differentiation In FORtran) is a source transformation tool that accepts Fortran 77 code for the computation of a function and writes portable Fortran 77 code for the computation of the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem. ADIFOR employs the data analysis capabilities of the ParaScope Parallel Programming Environment, which enable us to handle arbitrary Fortran 77 codes and to exploit the computational context in the computation of derivatives. Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives. In addition, studies suggest that the source transformation approach to automatic differentiation may improve the time to compute derivatives by orders of magnitude.
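The core of the source-transformation approach is that every assignment in the original code is paired with a statement propagating a directional derivative. As a hedged sketch (in Python rather than Fortran 77, and not ADIFOR's real output), checked against the divided-difference approximation the abstract mentions:

```python
import math

def func(x1, x2):
    t = x1 * x2
    return math.sin(t) + x1

def func_d(x1, x2, dx1, dx2):
    # "transformed" code: each original statement is paired with a
    # derivative statement (a sketch of the idea, not ADIFOR output)
    t = x1 * x2
    dt = dx1 * x2 + x1 * dx2      # product rule
    y = math.sin(t) + x1
    dy = math.cos(t) * dt + dx1   # chain rule
    return y, dy

# divided-difference check of the derivative in direction (1, 0)
h = 1e-6
x1, x2 = 0.7, 1.3
_, dy = func_d(x1, x2, 1.0, 0.0)
dd = (func(x1 + h, x2) - func(x1 - h, x2)) / (2 * h)
```

The transformed code is exact to rounding error, whereas the divided difference carries truncation error that depends on the choice of h.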
Optimization by direct search: New perspectives on some classical and modern methods
 SIAM Review
, 2003
Abstract

Cited by 126 (14 self)
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
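A minimal compass (coordinate) search, one of the classical methods in the family this review surveys, can be sketched as follows; details such as the halving rule are illustrative choices, not the paper's specific algorithm:

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    """Derivative-free minimization: poll +/- step along each coordinate
    axis; move to any improving point, otherwise halve the step."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sgn in (1.0, -1.0):
                trial = x[:]
                trial[i] += sgn * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5   # no poll point improved: refine the mesh
    return x, fx

# minimize (x-1)^2 + (y+2)^2 without using any derivatives
xmin, fmin = compass_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                            [0.0, 0.0])
```

Only function values are compared, which is exactly why such methods remain usable when derivatives are unavailable or noisy.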
Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods
, 1994
Abstract

Cited by 103 (8 self)
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. Key words: quasi-Newton method, constrained optimization, limited memory method, large-scale optimization. Abbreviated title: Representation of quasi-Newton matrices. 1. Introduction. Limited memory quasi-Newton methods are known to be effective techniques for solving certain classes of large-scale unconstrained optimization problems (Buckley and Le Nir (1983), Liu and Nocedal (1989), Gilbert and Lemaréchal (1989)). They make simple approximations of Hessian matrices, which are often good enough to provide a fast rate of linear convergence, and re...
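The compact matrix representation derived in the paper is one way to apply a limited-memory inverse Hessian; the closely related and more widely known two-loop recursion computes the same product H·g from the stored pairs (s_k, y_k). A Python sketch of that standard formulation (related to, but distinct from, the paper's compact form):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lbfgs_apply(g, pairs):
    """Two-loop recursion: apply the limited-memory BFGS inverse
    Hessian, defined implicitly by (s, y) pairs (oldest first), to g."""
    q = list(g)
    alphas = []
    for s, y in reversed(pairs):
        rho = 1.0 / dot(y, s)
        a = rho * dot(s, q)
        alphas.append((a, rho))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    s, y = pairs[-1]
    gamma = dot(s, y) / dot(y, y)   # common initial scaling H0 = gamma*I
    r = [gamma * qi for qi in q]
    for (a, rho), (s, y) in zip(reversed(alphas), pairs):
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r
```

A quick sanity check is the secant property of BFGS: the implicit H maps the most recent y to the most recent s exactly.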
iSAM: Incremental Smoothing and Mapping
, 2008
Abstract

Cited by 80 (21 self)
We present incremental smoothing and mapping (iSAM), a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, therefore recalculating only the matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real-time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
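The core mechanism behind the incremental update is folding a new measurement row into an existing triangular factor with Givens rotations, so that only touched entries change. A toy Python sketch under simplifying assumptions (dense R, no variable reordering, whereas iSAM itself exploits sparsity and reorders periodically):

```python
import math

def add_row(R, d, row, rhs):
    """Fold one measurement row (with right-hand side rhs) into the
    triangular factor R and rhs vector d using Givens rotations."""
    n = len(R)
    row = row[:]                      # do not mutate the caller's row
    for j in range(n):
        if abs(row[j]) < 1e-15:
            continue                  # untouched column: nothing changes
        a, b = R[j][j], row[j]
        r = math.hypot(a, b)
        c, s = a / r, b / r           # rotation zeroing row[j] against R[j][j]
        for k in range(j, n):
            Rjk, rowk = R[j][k], row[k]
            R[j][k] = c * Rjk + s * rowk
            row[k] = -s * Rjk + c * rowk
        d[j], rhs = c * d[j] + s * rhs, -s * d[j] + c * rhs

def back_substitute(R, d):
    n = len(R)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (d[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x
```

Feeding rows one at a time reproduces the QR-based least-squares solution of the full stacked system, which is the sense in which the incremental update stays exact.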
ADIC: An Extensible Automatic Differentiation Tool for ANSI-C
, 1997
Abstract

Cited by 76 (14 self)
In scientific computing, we often require the derivatives ∂f/∂x of a function f expressed as a program with respect to some input parameter(s) x, say. Automatic differentiation (AD) techniques augment the program with derivative computation by applying the chain rule of calculus to elementary operations in an automated fashion. This article introduces ADIC (Automatic Differentiation of C), a new AD tool for ANSI-C programs. ADIC is currently the only tool for ANSI-C that employs a source-to-source program transformation approach; that is, it takes a C code and produces a new C code that computes the original results as well as the derivatives. We first present ADIC "by example" to illustrate the functionality and ease of use of ADIC and then describe in detail the architecture of ADIC. ADIC incorporates a modular design that provides a foundation for both rapid prototyping of better AD algorithms and their sharing across AD tools for different languages. A component architecture call...
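The source-to-source idea, as opposed to operator overloading, is that the tool reads program text and writes new program text containing derivative statements. A very loose Python analogue using the standard-library `ast` module (a toy supporting only +, - and * over names and constants, nothing like ADIC's actual architecture):

```python
import ast

def diff_expr(e, wrt):
    """Differentiate an expression AST with respect to the name `wrt`.
    Toy chain-rule rules for +, - and * over names and constants only."""
    if isinstance(e, ast.Constant):
        return ast.Constant(0.0)
    if isinstance(e, ast.Name):
        return ast.Constant(1.0 if e.id == wrt else 0.0)
    if isinstance(e, ast.BinOp) and isinstance(e.op, (ast.Add, ast.Sub)):
        return ast.BinOp(diff_expr(e.left, wrt), e.op, diff_expr(e.right, wrt))
    if isinstance(e, ast.BinOp) and isinstance(e.op, ast.Mult):
        # product rule: (l*r)' = l'*r + l*r'
        return ast.BinOp(
            ast.BinOp(diff_expr(e.left, wrt), ast.Mult(), e.right),
            ast.Add(),
            ast.BinOp(e.left, ast.Mult(), diff_expr(e.right, wrt)),
        )
    raise NotImplementedError(ast.dump(e))

def transform(src, wrt="x"):
    """Source-to-source: emit the original assignment plus a derivative
    assignment dy = ..., mimicking (very loosely) an AD transformer."""
    (assign,) = ast.parse(src).body
    d_assign = ast.Assign(
        targets=[ast.Name(id="d" + assign.targets[0].id, ctx=ast.Store())],
        value=diff_expr(assign.value, wrt),
    )
    mod = ast.Module(body=[assign, d_assign], type_ignores=[])
    return ast.unparse(ast.fix_missing_locations(mod))

augmented = transform("y = x * x + 3 * x")   # new source also defining dy
```

Because the output is ordinary source code, it can be inspected, optimized, and compiled like any hand-written derivative code, which is the practical appeal of the approach.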
Complete search in continuous global optimization and constraint satisfaction, Acta Numerica 13
, 2004
"... A chapter for ..."
Computing Large Sparse Jacobian Matrices Using Automatic Differentiation
 SIAM Journal on Scientific Computing
, 1993
Abstract

Cited by 47 (25 self)
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. We consider three approaches to computing such matrices: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation in Fortran) tool. We compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. Our conclusion is that automatic differentiation is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed. COMPUTING LARGE SPARSE JACOBIAN MATRICES USING AUTOMATIC DIFFERENTIATION. Brett M. Averick, Jorge J. Moré, Christian H. Bischof, Alan Carle, and Andreas Griewank. 1. Introduction. The solution of large-scale nonlinear problems often requires the computation of the Jacobian matrix f'(x) of a mapping f : R ...
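What makes sparse Jacobians cheap, for both difference approximations and AD, is column compression: columns whose row patterns do not overlap can share a single "seed" direction. A toy Python sketch for a tridiagonal sparsity pattern, shown with divided differences for brevity (the paper's point is that AD avoids the truncation error visible here):

```python
def f(x):
    """Toy tridiagonal map: f_i = x_i^2 + x_{i-1} + x_{i+1}."""
    n = len(x)
    out = []
    for i in range(n):
        v = x[i] ** 2
        if i > 0:
            v += x[i - 1]
        if i < n - 1:
            v += x[i + 1]
        out.append(v)
    return out

def compressed_jacobian(f, x, h=1e-6):
    """Recover the full tridiagonal Jacobian from only 3 extra function
    evaluations: columns j with equal j mod 3 never touch the same row,
    so they can be perturbed together in one seed direction."""
    n = len(x)
    f0 = f(x)
    J = [[0.0] * n for _ in range(n)]
    for g in range(3):                     # 3 colors suffice for tridiagonal
        xp = [xi + (h if j % 3 == g else 0.0) for j, xi in enumerate(x)]
        fp = f(xp)
        for j in range(g, n, 3):
            for i in range(max(0, j - 1), min(n, j + 2)):
                J[i][j] = (fp[i] - f0[i]) / h
    return J
```

The same seeding idea carries over to ADIFOR: the seed matrix has 3 columns instead of n, and the derivatives come out exact rather than approximated.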
Meta-Programming with Names and Necessity
, 2002
Abstract

Cited by 34 (3 self)
Metaprogramming is a discipline of writing programs in a certain programming language that generate, manipulate or execute programs written in another language. In a typed setting, metaprogramming languages usually contain a modal type constructor to distinguish the level of object programs (which are the manipulated data) from the meta programs (which perform the computations). In functional programming, modal types of object programs generally come in two flavors: open and closed, depending on whether the expressions they classify may contain any free variables or not. Closed object programs can be executed at run time by the meta program, but the computations over them are more rigid, and typically produce less efficient residual code. Open object programs provide better inlining and partial evaluation, but once constructed, expressions of open modal type cannot be evaluated.
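As a loose untyped Python analogy (not the paper's modal calculus), a closed object program has no free variables and can be executed directly, while an open object program mentions free variables and is only usable by splicing it into a context that binds them:

```python
# closed object program: no free variables, so it can be run as-is
closed_src = "result = 2 + 3"
ns = {}
exec(compile(closed_src, "<closed>", "exec"), ns)   # defines ns["result"]

# open object program: `x` is free, so it cannot run on its own...
open_src = "x * x + 1"

# ...but it can be spliced (inlined) into a larger program that binds x,
# which is the flexibility that open code trades against evaluability
inlined = "x = 7\nresult = " + open_src
ns2 = {}
exec(inlined, ns2)                                  # defines ns2["result"]
```

This mirrors the trade-off in the abstract: the closed fragment is directly executable but opaque, while the open fragment supports inlining and specialization before it is ever run.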
A Taxonomy of Automatic Differentiation Tools
, 1992
Abstract

Cited by 31 (0 self)
Many of the current automatic differentiation (AD) tools have similar characteristics. Unfortunately, it is often the case that the similarities between these various AD tools cannot be easily ascertained by reading the corresponding documentation. To clarify this situation, a taxonomy of AD tools is presented. The taxonomy places AD tools into the Elemental, Extensional, Integral, Operational and Symbolic classes. This taxonomy is used to classify twenty-nine AD tools. Each tool is examined individually with respect to the mode of differentiation used and the degree of derivatives computed. A list detailing the availability of the surveyed AD tools is provided in Appendix A.