Results 1–10 of 25
OpenAD/F: A Modular, Open-Source Tool for Automatic Differentiation of Fortran Codes
"... The OpenAD/F tool allows the evaluation of derivatives of functions defined by a Fortran program. The derivative evaluation is performed by a Fortran code resulting from the analysis and transformation of the original program that defines the function of interest. OpenAD/F has been designed with a ..."
Abstract

Cited by 26 (12 self)
 Add to MetaCart
(Show Context)
The OpenAD/F tool allows the evaluation of derivatives of functions defined by a Fortran program. The derivative evaluation is performed by a Fortran code resulting from the analysis and transformation of the original program that defines the function of interest. OpenAD/F has been designed with a particular emphasis on modularity, flexibility, and the use of open-source components. While the code transformation follows the basic principles of automatic differentiation, the tool implements new algorithmic approaches at various levels, for example, for basic block preaccumulation and call graph reversal. Unlike most other automatic differentiation tools, OpenAD/F uses components provided by the OpenAD framework, which supports a comparatively easy extension of the code transformations in a language-independent fashion. It uses code analysis results implemented in the OpenAnalysis component. The interface to the language-independent transformation engine is an XML-based format, specified through an XML schema. The implemented transformation algorithms allow efficient derivative computations using locally optimized cross-country sequences.
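The derivative-evaluation principle the abstract describes can be illustrated with a deliberately tiny operator-overloading sketch in Python (hypothetical `Var`/`backward` names; OpenAD/F itself generates transformed Fortran source rather than overloading operators):

```python
class Var:
    """Minimal tape-based reverse-mode AD value (illustrative only)."""
    def __init__(self, value, parents=()):
        self.value = value      # primal value
        self.parents = parents  # pairs of (input Var, local partial derivative)
        self.adjoint = 0.0      # accumulated derivative of the output w.r.t. self

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(out):
    """Reverse sweep: visit nodes in reverse topological order and
    accumulate adjoints into their inputs via the chain rule."""
    order, seen = [], set()
    def topo(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                topo(p)
            order.append(v)
    topo(out)
    out.adjoint = 1.0
    for node in reversed(order):
        for parent, partial in node.parents:
            parent.adjoint += partial * node.adjoint

x, y = Var(3.0), Var(2.0)
f = x * y + x       # f = x*y + x
backward(f)         # df/dx = y + 1 = 3.0, df/dy = x = 3.0
```

A source-transformation tool produces equivalent adjoint statements at compile time instead of tracing them at run time, which is what enables the cross-country preaccumulation optimizations mentioned above.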
A differentiation-enabled Fortran 95 compiler
 CODEN ACMSCU. ISSN 0098-3500 (print), 1557-7295 (electronic)
, 2005
"... ..."
The DataFlow Equations of Checkpointing in Reverse Automatic Differentiation
 COMPUTATIONAL SCIENCE – ICCS 2006
, 2006
"... Checkpointing is a technique to reduce the memory consumption of adjoint programs produced by reverse Automatic Differentiation. However, checkpointing also uses a nonnegligible memory space for the socalled “snapshots”. We analyze the dataflow of checkpointing, yielding a precise characterizatio ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
(Show Context)
Checkpointing is a technique to reduce the memory consumption of adjoint programs produced by reverse Automatic Differentiation. However, checkpointing also uses a non-negligible memory space for the so-called “snapshots”. We analyze the data flow of checkpointing, yielding a precise characterization of all possible memory-optimal options for snapshots. This characterization is formally derived from the structure of checkpoints and from classical data-flow equations. In particular, we select two very different options and study their behavior on a number of real codes. Although no option is uniformly better, the so-called “lazy-snapshot” option appears preferable in general.
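A minimal sketch of the memory/recomputation trade-off behind checkpointing, assuming a scalar toy model and uniform snapshots every `k` steps (the paper's contribution is the data-flow characterization of exactly which variables each snapshot must contain, which this sketch sidesteps):

```python
import math

def step(x):
    return math.sin(x)     # one "time step" of a toy model

def dstep(x):
    return math.cos(x)     # derivative of one step

def gradient_checkpointed(x0, n, k):
    """Compute d(step^n)/dx0 with a reverse sweep that snapshots the
    state every k steps instead of storing all n intermediate states."""
    snaps, x = {}, x0
    for i in range(n):
        if i % k == 0:
            snaps[i] = x              # snapshot: state needed to restart step i
        x = step(x)
    adj = 1.0
    for i in reversed(range(n)):
        base = (i // k) * k           # nearest snapshot at or before step i
        xi = snaps[base]
        for _ in range(base, i):
            xi = step(xi)             # recompute x_i from the snapshot
        adj *= dstep(xi)              # chain rule, applied in reverse order
    return adj
```

Storage drops from n states to roughly n/k snapshots, paid for by recomputing each segment during the reverse sweep.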
Automatic Differentiation for GPU-Accelerated 2D/3D Registration
"... Summary. A common task in medical image analysis is the alignment of data from different sources, e.g., Xray images and computed tomography (CT) data. Such a task is generally known as registration. We demonstrate the applicability of automatic differentiation (AD) techniques to a class of 2D/3D re ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
(Show Context)
Summary. A common task in medical image analysis is the alignment of data from different sources, e.g., X-ray images and computed tomography (CT) data. Such a task is generally known as registration. We demonstrate the applicability of automatic differentiation (AD) techniques to a class of 2D/3D registration problems which are highly computationally intensive and can therefore greatly benefit from a parallel implementation on recent graphics processing units (GPUs). However, being designed for graphics applications, GPUs have some restrictions which conflict with requirements for reverse-mode AD, in particular for taping and TBR analysis. We discuss design and implementation issues in the presence of such restrictions on the target platform and present a method which can register a CT volume data set (512 × 512 × 288 voxels) with three X-ray images (512 × 512 pixels each) in 20 seconds on a GeForce 8800GTX graphics card.
Cheaper Adjoints by Reversing Address Computations, in Scientific Programming
, 2007
"... The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which tr ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
(Show Context)
The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates the substantial gains the proposed approach yields, and we show use cases in practical applications.
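The idea of recovering addresses by inverse computation rather than taping them can be sketched as follows (hypothetical `forward_addresses`/`reversed_addresses` helpers; a real tool inverts the program's own index recurrences using dependency information):

```python
def forward_addresses(n, start, stride):
    """Forward sweep: addresses follow the recurrence a_{i+1} = a_i + stride.
    A traditional tape would push every a_i to memory."""
    addrs, a = [], start
    for _ in range(n):
        addrs.append(a)
        a += stride
    return addrs, a            # 'a' is the state left over after the loop

def reversed_addresses(n, stride, a_final):
    """Reverse sweep: recover each address by the *inverse* computation
    a_i = a_{i+1} - stride, so no address tape is needed."""
    addrs, a = [], a_final
    for _ in range(n):
        a -= stride            # invert the forward increment
        addrs.append(a)
    return addrs               # addresses in reverse order of the forward sweep
```

The adjoint sweep can then index its stores and loads with the recomputed addresses, trading one subtraction per iteration for the memory a stored address tape would occupy.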
Development and first applications of TAC++
 in Utke (Eds.), Advances in Automatic Differentiation
, 2008
"... Summary. The paper describes the development of the software tool Transformation of Algorithms in C++ (TAC++) for automatic differentiation (AD) of C(++) codes by sourcetosource translation. We have transferred to TAC++ a subset of the algorithms from its wellestablished Fortran equivalent, Transf ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
(Show Context)
Summary. The paper describes the development of the software tool Transformation of Algorithms in C++ (TAC++) for automatic differentiation (AD) of C(++) codes by source-to-source translation. We have transferred to TAC++ a subset of the algorithms from its well-established Fortran equivalent, Transformation of Algorithms in Fortran (TAF). TAC++ features forward and reverse as well as scalar and vector modes of AD. Efficient higher-order derivative code is generated by multiple application of TAC++. High performance of the generated derivative code is demonstrated for five examples from application fields covering remote sensing, computer vision, computational finance, and aeronautics. For instance, the run time of the adjoints for simultaneous evaluation of the function and its gradient is between 1.9 and 3.9 times that of the respective function codes. Options for further enhancement are discussed.
Hybrid static/dynamic activity analysis
 In Proceedings of the 3rd International Workshop on Automatic Differentiation Tools and Applications (ADTA’04)
, 2006
"... Abstract. In forward mode Automatic Differentiation, the derivative program computes a function f and its derivatives, f ′. Activity analysis is important for AD. Our results show that when all variables are active, the runtime checks required for dynamic activity analysis incur a significant overhe ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
Abstract. In forward-mode Automatic Differentiation, the derivative program computes a function f and its derivatives, f′. Activity analysis is important for AD. Our results show that when all variables are active, the runtime checks required for dynamic activity analysis incur a significant overhead. However, when as few as half of the input variables are inactive, dynamic activity analysis enables an average speedup of 28% on a set of benchmark problems. We investigate static activity analysis combined with dynamic activity analysis as a technique for reducing the overhead of dynamic activity analysis.
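A minimal sketch of the runtime check behind dynamic activity analysis, using a hypothetical forward-mode `DVar` class (the activity test on each operation is exactly the per-operation overhead the abstract measures):

```python
class DVar:
    """Forward-mode AD value carrying a dynamic activity flag.

    Derivative arithmetic runs only when some operand is active;
    for inactive operands the tangent computation is skipped."""
    def __init__(self, value, deriv=0.0, active=False):
        self.value, self.deriv, self.active = value, deriv, active

    def __mul__(self, other):
        out = DVar(self.value * other.value)
        if self.active or other.active:     # the dynamic activity check
            out.active = True
            out.deriv = (self.deriv * other.value
                         + other.deriv * self.value)
        return out

x = DVar(3.0, deriv=1.0, active=True)   # independent (active) input
c = DVar(2.0)                           # inactive constant
y = x * c   # active result: dy/dx = 2.0
z = c * c   # inactive result: tangent arithmetic skipped entirely
```

Static activity analysis moves this decision to compile time where it can be proven, falling back to the runtime flag only where the static answer is "maybe active".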
Control flow reversal for adjoint code generation
 In Proceedings of the Fourth IEEE International Workshop on Source Code Analysis and Manipulation (SCAM 2004)
, 2004
"... We describe an approach to the reversal of the control flow of structured programs. It is used to automatically generate adjoint code for numerical programs by semantic source transformation. After a short introduction to applications and the implementation tool set, we describe the building blocks ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
(Show Context)
We describe an approach to the reversal of the control flow of structured programs. It is used to automatically generate adjoint code for numerical programs by semantic source transformation. After a short introduction to applications and the implementation tool set, we describe the building blocks using a simple example. We then illustrate the code reversal within basic blocks. The main part of the paper covers the reversal of structured control flow graphs. We show the algorithmic steps for simple branches and loops and give a detailed algorithm for the reversal of arbitrary combinations of loops and branches in a general control flow graph.
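The record-and-replay scheme for structured control flow can be sketched on a scalar toy program (hypothetical `forward`/`reverse` pair: the forward sweep pushes loop trip counts and branch outcomes, and the adjoint sweep pops them in reverse order):

```python
def forward(x, trace):
    """Forward sweep: run the program and record control-flow decisions."""
    count = 0
    while x > 1.0:          # loop: record the trip count
        x = x / 2.0
        count += 1
    trace.append(count)
    if x > 0.5:             # branch: record which arm was taken
        trace.append(True)
        x = x * 3.0
    else:
        trace.append(False)
        x = x + 1.0
    return x

def reverse(xbar, trace):
    """Adjoint sweep: pop decisions in reverse order, replaying the
    adjoint of each construct the recorded number of times."""
    taken = trace.pop()
    xbar = xbar * 3.0 if taken else xbar   # adjoint of the arm actually taken
    count = trace.pop()
    for _ in range(count):                 # adjoint loop runs 'count' times
        xbar = xbar / 2.0
    return xbar

trace = []
out = forward(5.0, trace)    # three halvings, then the x*3 branch
grad = reverse(1.0, trace)   # d(out)/dx = 3 * (1/2)^3
```

The paper's algorithm generalizes this: each structured construct in the control flow graph gets a reversed counterpart driven by the recorded decisions.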
Adjoint code by source transformation with OpenAD/F
, 2006
"... This document reports on recent advances in the development of the adjoint code generator OpenAD/F. We give an overview of the software design, and we discuss case studies that illustrate the feasibility of adjoint code generation. Our main target application is the MIT General Circulation Model — ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
This document reports on recent advances in the development of the adjoint code generator OpenAD/F. We give an overview of the software design, and we discuss case studies that illustrate the feasibility of adjoint code generation. Our main target application is the MIT General Circulation Model — a numerical model designed for study of the atmosphere, ocean, and climate.
Linearity Analysis for Automatic Differentiation
"... Linearity analysis determines which variables depend on which other variables and whether the dependence is linear or nonlinear. One of the many applications of this analysis is determining whether a loop involves only linear loopcarried dependences and therefore the adjoint of the loop may be rev ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Linearity analysis determines which variables depend on which other variables and whether the dependence is linear or nonlinear. One of the many applications of this analysis is determining whether a loop involves only linear loop-carried dependences, in which case the adjoint of the loop may be reversed and fused with the computation of the original function. This paper specifies the data-flow equations that compute linearity analysis. In addition, the paper describes using linearity analysis with array dependence analysis to determine whether a loop-carried dependence is linear or nonlinear.
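A sketch of the lattice such an analysis might operate over, with illustrative transfer rules for `+` and `*` (the rules and level names here are simplified assumptions, not the paper's full data-flow equations):

```python
# Three-level linearity lattice: "none" < "linear" < "nonlinear".
LEVELS = {"none": 0, "linear": 1, "nonlinear": 2}

def join(a, b):
    """Lattice join: keep the stronger (less precise) classification."""
    return a if LEVELS[a] >= LEVELS[b] else b

def propagate(op, dep_a, dep_b):
    """Transfer function for a statement c = a op b.

    Illustrative rules: addition preserves linearity; a product
    becomes nonlinear as soon as both operands already depend on
    the independent inputs."""
    if op == "+":
        return join(dep_a, dep_b)
    if op == "*":
        if dep_a != "none" and dep_b != "none":
            return "nonlinear"
        return join(dep_a, dep_b)
    raise ValueError(f"unknown op: {op}")
```

Iterating such transfer functions to a fixed point over a loop body is what lets the analysis decide whether every loop-carried dependence stays linear.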