Results 1–10 of 23
The ADIFOR 2.0 System for the Automatic Differentiation of Fortran 77 Programs
 RICE UNIVERSITY, 1994
Abstract

Cited by 57 (16 self)
Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for first-order derivatives. The ADIFOR 2.0 system consists of three main components: the ADIFOR 2.0 preprocessor, the ADIntrinsics Fortran 77 exception-handling system, and the SparsLinC library. The combination of these tools provides the ability to deal with arbitrary Fortran 77 syntax, to handle codes containing single- and double-precision real- or complex-valued data, to fully support and easily customize the translation of Fortran 77 intrinsics, and to transparently exploit sparsity in derivative computations. ADIFOR 2.0 has been successfully applied to a 60,000-line code, which we believe to be a new record in automatic differentiation.
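The chain-rule augmentation described above can be illustrated with a minimal forward-mode sketch. This is written in Python with operator overloading purely for illustration; ADIFOR itself is a source-to-source translator for Fortran 77, and the `Dual` class and `dsin` helper below are hypothetical names, not part of any of these tools.

```python
# Minimal forward-mode AD sketch: each value carries its derivative,
# and every operation applies the chain rule alongside the original
# computation -- the same idea ADIFOR realizes by inserting derivative
# statements into Fortran 77 source.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val   # function value
        self.dot = dot   # derivative w.r.t. the chosen independent variable

    def __add__(self, other):
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # product rule: (u v)' = u' v + u v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def dsin(x):
    # chain rule for an intrinsic: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# f(x) = x * sin(x); seeding dx/dx = 1 yields df/dx at x = 2.0
x = Dual(2.0, 1.0)
f = x * dsin(x)
# f.dot holds the exact first-order derivative sin(2) + 2*cos(2)
```

Seeding the `.dot` field with a unit vector per independent variable generalizes this to gradients; tools such as ADIFOR generate equivalent statement-level derivative code at compile time instead of run time.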
Parallel calculation of sensitivity derivatives for aircraft design using automatic differentiation
 AIAA/NASA/USAF/ISSMO SYMPOSIUM ON MULTIDISCIPLINARY ANALYSIS AND OPTIMIZATION, AIAA 94-4261, 1994
Abstract

Cited by 28 (17 self)
Sensitivity derivative (SD) calculation via automatic differentiation typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SDs via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a CRAY Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SDs are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60, with coupling between a wing grid generation program and a state-of-the-art, 3D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the CRAY Y-MP implementation is much faster. As the number of design variables grows, however, the SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation can also be moved easily to a network of workstations.
Multidisciplinary Design Optimization Techniques: Implications and Opportunities for Fluid Dynamics Research
 JAROSLAW SOBIESZCZANSKI-SOBIESKI AND RAPHAEL T. HAFTKA, “MULTIDISCIPLINARY AEROSPACE DESIGN OPTIMIZATION: SURVEY OF RECENT DEVELOPMENTS,” 34TH AIAA AEROSPACE SCIENCES MEETING AND EXHIBIT, 1999
Abstract

Cited by 24 (0 self)
A challenge for the fluid dynamics community is to adapt to and exploit the trend towards greater multidisciplinary focus in research and technology. The past decade has witnessed substantial growth in the research field of Multidisciplinary Design Optimization (MDO). MDO is a methodology for the design of complex engineering systems and subsystems that coherently exploits the synergism of mutually interacting phenomena. As evidenced by the papers that appear in the biannual AIAA/USAF/NASA/ISSMO Symposia on Multidisciplinary Analysis and Optimization, the MDO technical community focuses on vehicle and system design issues. This paper provides an overview of the MDO technology field from a fluid dynamics perspective, emphasizing specific applications of recent MDO technologies that can enhance fluid dynamics research itself across the spectrum, from basic flow physics to full-configuration aerodynamics.
Derivative Convergence for Iterative Equation Solvers
, 1993
Abstract

Cited by 24 (16 self)
In this paper, we consider two approaches to computing the desired implicitly defined derivative x ...
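The setting of this entry, derivatives of solutions computed by iterative equation solvers, can be sketched with a hypothetical fixed-point example (not the paper's own analysis): if the iterates x_{k+1} = g(x_k, p) converge to a solution x*(p), then carrying a derivative along with each iterate converges to the implicitly defined derivative dx*/dp.

```python
# "Piggyback" derivative iteration for a fixed-point solver.
# Illustrative example: g(x, p) = 0.5 * (x + p / x), the Babylonian
# square-root iteration, whose fixed point is x* = sqrt(p) and whose
# exact solution derivative is dx*/dp = 1 / (2 * sqrt(p)).

def solve_with_derivative(p, iters=50):
    x, d = 1.0, 0.0                          # iterate and its derivative seed
    for _ in range(iters):
        x_new = 0.5 * (x + p / x)            # x_{k+1} = g(x_k, p)
        # differentiate the iteration: d_{k+1} = (dg/dx) d_k + (dg/dp)
        d = 0.5 * (1.0 - p / (x * x)) * d + 0.5 / x
        x = x_new
    return x, d

x_star, dxdp = solve_with_derivative(2.0)
# x_star -> sqrt(2), dxdp -> 1 / (2 * sqrt(2))
```

Here dg/dx vanishes at the fixed point, so the derivative iterates converge quickly; in general their convergence rate is tied to the contraction rate of the solver itself, which is the kind of question this paper studies.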
Efficient Derivative Codes Through Automatic Differentiation And Interface Contraction: An Application In Biostatistics
, 1997
Abstract

Cited by 17 (10 self)
Developing code for computing the first- and higher-order derivatives of a function by hand can be very time-consuming and is prone to errors. Automatic differentiation has proven capable of producing derivative codes with very little effort on the part of the user. Automatic differentiation avoids the truncation errors characteristic of divided-difference approximations. However, the derivative code produced by automatic differentiation can be significantly less efficient than one produced by hand. This shortcoming may be overcome by utilizing insight into the high-level structure of a computation. This paper focuses on how to take advantage of the fact that the number of variables passed between subroutines frequently is small compared with the number of variables with respect to which one wishes to differentiate. Such an "interface contraction," coupled with the associativity of the chain rule for differentiation, allows one to apply automatic differentiation in a more judicious f...
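The interface-contraction idea can be sketched in a few lines (hypothetical shapes and values, not the paper's biostatistics application): for f = h(g(x)) with g mapping n inputs to a k-variable interface, k << n, one differentiates each routine with only as many seed directions as its input dimension and contracts the pieces via the chain rule, J_f = J_h J_g, instead of propagating all n directions through h.

```python
# Chain-rule contraction across a narrow subroutine interface.
# Propagating n = 4 derivative directions through h would cost O(n) per
# operation in h; with a k = 2 interface, h needs only k directions.

def matmul(A, B):
    # plain-Python matrix product (rows of A times columns of B)
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# hypothetical Jacobians: g maps 4 inputs to a 2-variable interface,
# h maps those 2 interface variables to 3 outputs
J_g = [[1.0, 2.0, 0.0, 1.0],
       [0.0, 1.0, 3.0, 0.0]]          # 2 x 4, from differentiating g
J_h = [[2.0, 0.0],
       [1.0, 1.0],
       [0.0, 4.0]]                    # 3 x 2, from differentiating h

J_f = matmul(J_h, J_g)                # 3 x 4 Jacobian of the composition
```

The associativity of the chain rule is what licenses computing the two factors independently and multiplying them afterwards.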
Efficient Computation of Gradients and Jacobians by Dynamic Exploitation of Sparsity in Automatic Differentiation
, 1996
Abstract

Cited by 13 (0 self)
Automatic differentiation (AD) is a technique that augments computer codes with statements for the computation of derivatives. The computational workhorse of AD-generated codes for first-order derivatives is the linear combination of vectors. For many large-scale problems, the vectors involved in this operation are inherently sparse. If the underlying function is a partially separable one (e.g., if its Hessian is sparse), many of the intermediate gradient vectors computed by AD will also be sparse, even though the final gradient is likely to be dense. For large Jacobian computations, every intermediate derivative vector is usually at least as sparse as the least sparse row of the final Jacobian. In this paper, we show that dynamic exploitation of the sparsity inherent in derivative computation can result in dramatic gains in runtime and memory savings. For a set of gradient problems exhibiting implicit sparsity, we report on the runtime and memory requirements of computing the gradi...
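The "linear combination of vectors" workhorse named above can be sketched with a sparse representation (a dict of nonzeros is assumed here for illustration; SparsLinC's actual data structures differ): computing w = a*u + b*v then touches only stored entries, so cost tracks the sparsity of the intermediate gradients rather than the total number of independent variables.

```python
# Sparse linear combination of derivative vectors, the core operation
# of first-order AD-generated code, over a dict-of-nonzeros encoding.

def sparse_axpy(a, u, b, v):
    """Return a*u + b*v for gradient vectors stored as {index: value}."""
    w = {}
    for i, ui in u.items():
        w[i] = a * ui
    for i, vi in v.items():
        w[i] = w.get(i, 0.0) + b * vi
    # drop exact cancellations to keep the representation sparse
    return {i: x for i, x in w.items() if x != 0.0}

# two sparse intermediate gradients over, say, 1_000_000 independents:
# only three distinct nonzero slots are ever touched
u = {3: 1.0, 500_000: 2.0}
v = {3: 4.0, 7: 1.0}
w = sparse_axpy(2.0, u, 3.0, v)
```

A dense encoding of the same operation would allocate and scan million-entry arrays at every statement, which is precisely the overhead the paper's dynamic sparsity exploitation avoids.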
Application Of Automatic Differentiation To Groundwater Transport Models
, 1994
Abstract

Cited by 11 (9 self)
This paper describes the application of automatic differentiation to obtain codes that evaluate derivatives of complex computer models efficiently, exactly, and with a minimum of human effort. Automatic differentiation is a method that produces a derivative code, given the model code and a list of parameters that are considered dependent and independent with respect to differentiation. The method produces a code that will evaluate derivatives exactly (up to machine precision), usually in much less time than the approximate finite-difference method. There are no inherent limits on program size or complexity. We applied the automatic differentiation tool ADIFOR (Automatic Differentiation in Fortran) to a two-dimensional and a three-dimensional groundwater flow and contaminant transport finite-element model to demonstrate the method. The CPU times for automatic differentiation were much faster than for the divided-differences method for both models, and somewhat slower than hand-written optimized analytic derivative code (written by ...
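The contrast between exact AD derivatives and divided-difference approximations can be seen on a toy function (hypothetical, unrelated to the groundwater models): the one-sided difference carries an O(h) truncation error that the chain-rule derivative simply does not have.

```python
# Divided-difference approximation vs. an exact derivative, for
# f(x) = exp(x) * sin(x). The "exact" form below is the hand-applied
# product rule -- the same expression an AD tool would generate.
import math

def f(x):
    return math.exp(x) * math.sin(x)

def f_prime_exact(x):
    return math.exp(x) * (math.sin(x) + math.cos(x))

def f_prime_fd(x, h):
    # one-sided divided difference: truncation error ~ (h/2) * f''(x)
    return (f(x + h) - f(x)) / h

x = 1.0
err = abs(f_prime_fd(x, 1e-5) - f_prime_exact(x))
# err is on the order of h, while the exact form is correct to round-off
```

Shrinking h reduces truncation error but amplifies round-off in the subtraction, so divided differences cannot match machine-precision accuracy at any step size; AD sidesteps the trade-off entirely.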
Applications of Automatic Differentiation in CFD
, 1994
Abstract

Cited by 11 (7 self)
Automated multidisciplinary design of aircraft requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives for the individual discipline simulation codes. Typical advanced CFD codes do not provide such derivatives as part of a flow solution. These derivatives are expensive to obtain by divided differences from perturbed solutions, and may be unreliable, particularly for noisy functions. In this paper, automatic differentiation has been investigated as a means of extending iterative CFD codes with sensitivity derivatives. In particular, the ADIFOR automatic differentiator has been applied to the 3D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D, coupled with the WTCO wing grid generator. Results of a sequence of efforts in which TLNS3D has been successfully augmented to ...
Data Assimilation for Numerical Weather Prediction: A Review
Abstract

Cited by 11 (1 self)
During the last 20 years, data assimilation has gradually reached a mature, center-stage position at Numerical Weather Prediction centers as well as at many federal research institutes and universities. The research now encompasses activities which involve, besides meteorologists and oceanographers at operational centers or federal research facilities, many in the applied and computational mathematics research communities. Data assimilation, or 4D VAR, now also extends to other geoscience fields such as hydrology and geology, and has resulted in the publication of an ever-increasing number of books and monographs on the topic. In this short survey article we provide a brief introduction with some historical perspective and background, a survey of data assimilation prior to 4D VAR, and the basic concepts of data assimilation. We first outline the early 4D VAR stages (1980–1990) and address succinctly the period of the 1990s that saw the major developments and the flourishing of all aspects of 4D VAR, both at operational centers and at research universities and federal laboratories. Computational aspects of 4D VAR data assimilation, addressing computational burdens as well as ways to alleviate them, are briefly outlined. Brief interludes are provided for each period surveyed, allowing the reader to gain a better perspective. A brief survey of different topics related to the state of the art of 4D VAR today is then presented, and we conclude with what we perceive to be the main directions of research, the future of data assimilation, and some open problems. We strive to use the unified notation of Ide et al. (J Meteor Soc Japan 75:181–189, ...
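The 4D VAR method surveyed here minimizes a cost function over the initial state; in its commonly written strong-constraint form (notation as is standard in the field: x_b the background state, B its error covariance, y_k the observations at time k, H_k the observation operator, R_k the observation-error covariance, and M_{0→k} the forecast model propagating x_0 to time k; operational formulations vary in detail):

```latex
J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_0 - \mathbf{x}_b)
  + \tfrac{1}{2}\sum_{k=0}^{N} \bigl(H_k(\mathbf{x}_k) - \mathbf{y}_k\bigr)^{\mathsf T}\mathbf{R}_k^{-1}\bigl(H_k(\mathbf{x}_k) - \mathbf{y}_k\bigr),
  \qquad \mathbf{x}_k = M_{0\to k}(\mathbf{x}_0).
```

Minimizing J requires its gradient with respect to x_0, which is computed with the adjoint of the forecast model M; constructing such adjoints is where automatic differentiation, the theme of the other entries in this listing, meets data assimilation.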
Efficient Computation of Gradients and Jacobians by Transparent Exploitation of Sparsity in Automatic Differentiation
 OPTIMIZATION METHODS AND SOFTWARE, 1996
Abstract

Cited by 5 (4 self)
Automatic differentiation (AD) is a technique that augments computer codes with statements for the computation of derivatives. The computational workhorse of AD-generated codes for first-order derivatives is the linear combination of vectors. For many large-scale problems, the vectors involved in this operation are inherently sparse. If the underlying function is a partially separable one (e.g., if its Hessian is sparse), many of the intermediate gradient vectors computed by AD will also be sparse, even though the final gradient is likely to be dense. For large Jacobian computations, every intermediate derivative vector is usually at least as sparse as the least sparse row of the final Jacobian. In this paper, we show that transparent exploitation of the sparsity inherent in derivative computation can result in dramatic gains in runtime and memory savings. For a set of gradient problems exhibiting implicit sparsity, we report on the runtime and memory requirements of computing the g...