Results 1–10 of 14
The ADIFOR 2.0 System for the Automatic Differentiation of Fortran 77 Programs
 Rice University
, 1994
Abstract

Cited by 55 (17 self)
Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for first-order derivatives. The ADIFOR 2.0 system consists of three main components: the ADIFOR 2.0 preprocessor, the ADIntrinsics Fortran 77 exception-handling system, and the SparsLinC library. The combination of these tools provides the ability to deal with arbitrary Fortran 77 syntax, to handle codes containing single- and double-precision real- or complex-valued data, to fully support and easily customize the translation of Fortran 77 intrinsics, and to transparently exploit sparsity in derivative computations. ADIFOR 2.0 has been successfully applied to a 60,000-line code, which we believe to be a new record in automatic differentiation.
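To make the idea of "augmenting computer programs with statements" concrete, here is a hedged sketch in Python of what a source transformation like ADIFOR's does: each assignment in the original code gains a companion statement propagating derivatives by the chain rule. The `g_` naming mirrors ADIFOR's convention of pairing each active variable with a derivative object, but the functions and names here are illustrative, not ADIFOR output.

```python
# original code:  y = a * b
#                 z = y + a
def f(a, b):
    y = a * b
    z = y + a
    return z

# AD-augmented version: g_v holds dv/d(a, b) as a 2-vector
def g_f(a, b):
    g_a = [1.0, 0.0]    # seed: da/da = 1, da/db = 0
    g_b = [0.0, 1.0]    # seed: db/da = 0, db/db = 1
    y = a * b
    g_y = [b * g_a[i] + a * g_b[i] for i in range(2)]   # product rule
    z = y + a
    g_z = [g_y[i] + g_a[i] for i in range(2)]           # sum rule
    return z, g_z

print(g_f(2.0, 5.0))   # (12.0, [6.0, 2.0]) -> dz/da = 6, dz/db = 2
```

Every derivative statement depends only on values already computed, so the augmented code runs in a single forward sweep alongside the original.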
A Fortran 90 Environment for Research and Prototyping of Enclosure Algorithms for Nonlinear Equations and Global Optimization
Abstract

Cited by 40 (19 self)
An environment for general research into and prototyping of algorithms for reliable constrained and unconstrained global nonlinear optimization, and for reliable enclosure of all roots of nonlinear systems of equations, with or without inequality constraints, is being developed. This environment should be portable; easy to learn, use, and maintain; and sufficiently fast for some production work. The motivation, design principles, uses, and capabilities of this environment are outlined. The environment includes an interval data type, a symbolic form of automatic differentiation to obtain an internal representation for functions, a special technique to allow conditional branches with operator overloading and interval computations, and generic routines to give interval and non-interval function and derivative information. Some of these generic routines use a special version of the backward mode of automatic differentiation. The package also includes dynamic data structures for exhaustive sear...
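A minimal sketch of the interval data type such an environment provides via operator overloading (in Fortran 90 there; in Python here, purely for illustration). Each operation returns an interval guaranteed to enclose the true range of results, which is what makes the enclosure algorithms reliable. The `Interval` class below is illustrative, not the package's API.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product range is bounded by the four endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# enclose f(x) = x * (x + 1) over x in [-1, 2]
x = Interval(-1.0, 2.0)
y = x * (x + Interval(1.0, 1.0))
print(y.lo, y.hi)   # -3.0 6.0: a guaranteed enclosure (exact range is [-0.25, 6])
```

The overestimation visible here (the two occurrences of `x` are treated as independent) is exactly why such environments also need symbolic internal representations and derivative information to tighten enclosures.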
Using ADIFOR to Compute Dense and Sparse Jacobians
 Technical Memorandum ANL/MCS-TM-158, Mathematics and Computer Science Division, Argonne National Laboratory
, 1992
Abstract

Cited by 19 (18 self)
ADIFOR is a source translator that, given a collection of Fortran subroutines for the computation of a "function," produces Fortran code for the computation of the derivatives of this function. More specifically, ADIFOR produces code to compute the matrix-matrix product JS, where J is the Jacobian of the "function" with respect to the user-defined independent variables, and S is the composition of the derivative objects corresponding to the independent variables. This interface is flexible; by setting S = x, one can compute the matrix-vector product Jx, or by setting S = I, one can compute the whole Jacobian J. Other initializations of S allow one to exploit a known sparsity structure of J. This paper illustrates the proper initialization of ADIFOR-generated derivative codes and the exploitation of a known sparsity structure of J.
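The seed-matrix interface described above can be sketched as follows: seed each independent variable with the corresponding row of S, propagate forward, and read off the rows of JS. The `DVec` class and the function `f` are illustrative stand-ins, not ADIFOR-generated code.

```python
class DVec:
    """A value paired with a row of derivatives, one entry per seed column."""
    def __init__(self, value, grad):
        self.value, self.grad = value, list(grad)

    def __mul__(self, other):
        # product rule, applied componentwise to the derivative rows
        return DVec(self.value * other.value,
                    [g1 * other.value + self.value * g2
                     for g1, g2 in zip(self.grad, other.grad)])

    def __add__(self, other):
        return DVec(self.value + other.value,
                    [g1 + g2 for g1, g2 in zip(self.grad, other.grad)])

def f(x):
    # example "function" with Jacobian [[x1, x0], [1, 2*x1]]
    return [x[0] * x[1], x[0] + x[1] * x[1]]

def jacobian_times_seed(x_vals, S):
    # seed each independent variable with its row of S, then propagate
    x = [DVec(v, S[i]) for i, v in enumerate(x_vals)]
    return [yi.grad for yi in f(x)]   # rows of J @ S

x_vals = [2.0, 5.0]
print(jacobian_times_seed(x_vals, [[1.0, 0.0], [0.0, 1.0]]))  # S = I: full J
print(jacobian_times_seed(x_vals, [[1.0], [0.0]]))            # S = e1: J @ e1
```

With a single seed column the cost is roughly one extra function evaluation; with S = I it scales with the number of independent variables, which is why the choice of S matters for large problems.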
Structured Second- and Higher-Order Derivatives through Univariate Taylor Series
Abstract

Cited by 17 (14 self)
Second- and higher-order derivatives are required by applications in scientific computation, especially for optimization algorithms. The two complementary concepts of interpolating partial derivatives from univariate Taylor series and preaccumulating "local" derivatives form the mathematical foundations for accurate, efficient computation of second- and higher-order partial derivatives for large codes. We compute derivatives in a fashion that parallelizes well, exploits sparsity or other structure frequently found in Hessian matrices, can compute only selected elements of a Hessian matrix, and computes Hessian-vector products.
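A hedged sketch of the univariate-Taylor-series idea: propagate a truncated series x(t) = c0 + c1·t + c2·t² through the code; the t² coefficient of the result is half the second directional derivative along the seed direction, the raw material for the interpolation scheme above. The `Taylor2` class is illustrative.

```python
class Taylor2:
    def __init__(self, c0, c1, c2):
        self.c = (c0, c1, c2)   # coefficients of 1, t, t^2

    def __add__(self, other):
        a, b = self.c, other.c
        return Taylor2(a[0] + b[0], a[1] + b[1], a[2] + b[2])

    def __mul__(self, other):
        # truncated Cauchy product, keeping terms up to t^2
        a, b = self.c, other.c
        return Taylor2(a[0] * b[0],
                       a[0] * b[1] + a[1] * b[0],
                       a[0] * b[2] + a[1] * b[1] + a[2] * b[0])

# f(x) = x^3 along direction v = 1 at x0 = 2: f'(2) = 12, f''(2) = 12
x = Taylor2(2.0, 1.0, 0.0)            # x(t) = 2 + t
y = x * x * x
print(y.c[0], y.c[1], 2.0 * y.c[2])   # f = 8, f' = 12, f'' = 12
```

Running several such univariate propagations along well-chosen directions and interpolating recovers individual mixed partials without ever forming full multivariate Taylor objects.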
Automatic Differentiation as a Tool in Engineering Design
 4th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization
, 1992
Automatic Differentiation, Tangent Linear Models, and (Pseudo)Adjoints
 Preprint MCS-P472-1094, Mathematics and Computer Science Division, Argonne National Laboratory
, 1994
Abstract

Cited by 7 (6 self)
This paper provides a brief introduction to automatic differentiation and relates it to the tangent linear model and adjoint approaches commonly used in meteorology. After a brief review of the forward and reverse modes of automatic differentiation, the ADIFOR automatic differentiation tool is introduced, and initial results of a sensitivity-enhanced version of the MM5 PSU/NCAR mesoscale weather model are presented. We also present a novel approach to the computation of gradients that uses a reverse-mode approach at the time-loop level and a forward-mode approach at every time step. The resulting "pseudo-adjoint" shares the characteristic of an adjoint code that the ratio of gradient to function evaluation does not depend on the number of independent variables. In contrast to a true adjoint approach, however, the nonlinearity of the model plays no role in the complexity of the derivative code.
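A minimal reverse-mode (adjoint) sketch of the property cited above: the forward sweep records every operation on a tape, and a single backward sweep accumulates adjoints, so the cost of one gradient does not grow with the number of independent variables. The `Node`/tape design here is illustrative, not the MM5 adjoint.

```python
tape = []   # operations recorded in execution order

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs (parent node, local partial dv/dparent)
        self.adjoint = 0.0
        tape.append(self)

def mul(a, b):
    return Node(a.value * b.value, ((a, b.value), (b, a.value)))

def add(a, b):
    return Node(a.value + b.value, ((a, 1.0), (b, 1.0)))

def backward(output):
    # reverse sweep in the opposite order of execution (chain rule)
    output.adjoint = 1.0
    for node in reversed(tape):
        for parent, partial in node.parents:
            parent.adjoint += node.adjoint * partial

# gradient of f(x0, x1) = x0*x1 + x0 at (2, 5): df/dx0 = 6, df/dx1 = 2
x0, x1 = Node(2.0), Node(5.0)
y = add(mul(x0, x1), x0)
backward(y)
print(x0.adjoint, x1.adjoint)   # 6.0 2.0
```

The tape is also where the memory cost of true adjoints comes from; the pseudo-adjoint idea limits taping to the time loop while using forward mode inside each step.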
Efficient Computation of Gradients and Jacobians by Dynamic Exploitation of Sparsity in Automatic Differentiation
, 1996
Abstract

Cited by 6 (1 self)
Automatic differentiation (AD) is a technique that augments computer codes with statements for the computation of derivatives. The computational workhorse of AD-generated codes for first-order derivatives is the linear combination of vectors. For many large-scale problems, the vectors involved in this operation are inherently sparse. If the underlying function is partially separable (e.g., if its Hessian is sparse), many of the intermediate gradient vectors computed by AD will also be sparse, even though the final gradient is likely to be dense. For large Jacobian computations, every intermediate derivative vector is usually at least as sparse as the least sparse row of the final Jacobian. In this paper, we show that dynamic exploitation of the sparsity inherent in derivative computation can result in dramatic runtime gains and memory savings. For a set of gradient problems exhibiting implicit sparsity, we report on the runtime and memory requirements of computing the gradi...
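The workhorse operation named above, a linear combination of derivative vectors, can be sketched with a sparse index-to-value representation; when most entries are zero, work and storage scale with the nonzeros rather than the number of independent variables. The dict representation below is illustrative (the actual Fortran data structures differ).

```python
def sparse_linear_combination(coeffs, vectors):
    """Return sum_i coeffs[i] * vectors[i], all vectors stored sparsely."""
    result = {}
    for c, v in zip(coeffs, vectors):
        for index, value in v.items():
            result[index] = result.get(index, 0.0) + c * value
    return result

# derivative statement for w = 3*u + 2*v, where u and v each depend on a
# single independent variable, so their gradients have one nonzero apiece
grad_u = {0: 1.0}    # du/dx0 = 1
grad_v = {7: 1.0}    # dv/dx7 = 1
grad_w = sparse_linear_combination([3.0, 2.0], [grad_u, grad_v])
print(grad_w)   # {0: 3.0, 7: 2.0}
```

A dense representation would allocate and scan a full-length vector for every such statement, which is precisely the cost dynamic sparsity exploitation avoids.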
Operator Overloading as an Enabling Technology for Automatic Differentiation
, 1993
Abstract

Cited by 6 (1 self)
We present an example of the science that is enabled by object-oriented programming techniques. Scientific computation often needs derivatives for solving nonlinear systems such as those arising in many PDE algorithms, optimization, parameter identification, stiff ordinary differential equations, or sensitivity analysis. Automatic differentiation computes derivatives accurately and efficiently by applying the chain rule to each arithmetic operation or elementary function. Operator overloading enables the techniques of either the forward or the reverse mode of automatic differentiation to be applied to real-world scientific problems. We illustrate automatic differentiation with an example drawn from a model of unsaturated flow in a porous medium. The problem arises from planning for the long-term storage of radioactive waste.
1 Introduction
Scientific computation often needs derivatives for solving nonlinear partial differential equations. One such problem currently under investigation...
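The enabling idea can be sketched in a few lines: overload the arithmetic operators so that each value carries its derivative and every operation, including elementary functions, applies the chain rule (forward mode). The `Dual` class and `dexp` helper are illustrative, not the types used in the paper.

```python
import math

class Dual:
    def __init__(self, value, deriv):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def dexp(u):
    # chain rule for an elementary function: (e^u)' = e^u * u'
    e = math.exp(u.value)
    return Dual(e, e * u.deriv)

# f(x) = x * exp(x) at x = 1: f'(x) = (1 + x) e^x, so f'(1) = 2e
x = Dual(1.0, 1.0)       # seed dx/dx = 1
y = x * dexp(x)
print(y.value, y.deriv)  # e and 2e, both to machine precision
```

Because existing numerical code mostly consists of such overloadable operations, changing the declared type of the active variables is often the only source modification required.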
Efficient Computation of Gradients and Jacobians by Transparent Exploitation of Sparsity in Automatic Differentiation
 Optimization Methods and Software
, 1996
Abstract

Cited by 5 (4 self)
Automatic differentiation (AD) is a technique that augments computer codes with statements for the computation of derivatives. The computational workhorse of AD-generated codes for first-order derivatives is the linear combination of vectors. For many large-scale problems, the vectors involved in this operation are inherently sparse. If the underlying function is partially separable (e.g., if its Hessian is sparse), many of the intermediate gradient vectors computed by AD will also be sparse, even though the final gradient is likely to be dense. For large Jacobian computations, every intermediate derivative vector is usually at least as sparse as the least sparse row of the final Jacobian. In this paper, we show that transparent exploitation of the sparsity inherent in derivative computation can result in dramatic runtime gains and memory savings. For a set of gradient problems exhibiting implicit sparsity, we report on the runtime and memory requirements of computing the g...
Automatic Differentiation for PDEs: Unsaturated Flow Case Study
, 1992
Abstract

Cited by 3 (3 self)
Introduction
The techniques of automatic differentiation [8, 10, 15] are applied to an example partial differential equation arising from the modeling of unsaturated flow. One common paradigm for the numerical solution of some classes of 2-, 3-, or higher-dimensional partial differential equations is:
1. Given a PDE and boundary conditions,
2. apply finite difference or finite element approximations on some appropriate (frequently nonuniform) grid, and
3. enforce an approximate solution by solving a nonlinear system F(u) = 0 for the residual by Newton's method.
The dimension of the nonlinear system F(u) = 0 is proportional to the number of grid points. In current algorithms, the Jacobian J required by Newton's method is computed by some combination of hand coding, divided differences, matrix coloring, and partial separability. We present a case study documenting the steps we took in analyzing a co...
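Step 3 of the paradigm above can be sketched as follows: Newton's method for F(u) = 0, with F' obtained by the dual-number form of forward-mode AD rather than hand coding or divided differences. The scalar residual here is an illustrative stand-in for the grid-sized system in the paper.

```python
class Dual:
    """Value paired with its derivative; operations apply the chain rule."""
    def __init__(self, value, deriv):
        self.value, self.deriv = value, deriv

    def __mul__(self, other):
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    def __sub__(self, other):
        return Dual(self.value - other.value, self.deriv - other.deriv)

def F(u):
    # residual F(u) = u*u - 2; root at sqrt(2)
    return u * u - Dual(2.0, 0.0)

def newton(u0, tol=1e-12, max_iter=50):
    u = u0
    for _ in range(max_iter):
        r = F(Dual(u, 1.0))        # value and derivative in one sweep
        if abs(r.value) < tol:
            break
        u -= r.value / r.deriv     # Newton update: u <- u - F(u)/F'(u)
    return u

print(newton(1.0))   # converges to sqrt(2) ~ 1.41421356...
```

In the grid-sized case, F' is a Jacobian and the update is a linear solve, but the role of AD is the same: exact derivatives of the residual without divided-difference truncation error.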