Results 1–10 of 55
A Fortran 90 Environment for Research and Prototyping of Enclosure Algorithms for Nonlinear Equations and Global Optimization
"... An environment for general research into and prototyping of algorithms for reliable constrained and unconstrained global nonlinear optimization and reliable enclosure of all roots of nonlinear systems of equations, with or without inequality constraints, is being developed. This environment should b ..."
Abstract

Cited by 40 (19 self)
 Add to MetaCart
An environment for general research into and prototyping of algorithms for reliable constrained and unconstrained global nonlinear optimization and reliable enclosure of all roots of nonlinear systems of equations, with or without inequality constraints, is being developed. This environment should be portable, easy to learn, use, and maintain, and sufficiently fast for some production work. The motivation, design principles, uses, and capabilities for this environment are outlined. The environment includes an interval data type, a symbolic form of automatic differentiation to obtain an internal representation for functions, a special technique to allow conditional branches with operator overloading and interval computations, and generic routines to give interval and noninterval function and derivative information. Some of these generic routines use a special version of the backward mode of automatic differentiation. The package also includes dynamic data structures for exhaustive sear...
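The interval data type with operator overloading that the environment builds on can be sketched in a few lines. The Python class below is a hypothetical stand-in for the paper's Fortran 90 type (the class and variable names are invented), showing how overloaded `+` and `*` propagate guaranteed enclosures, including the usual interval overestimation:

```python
class Interval:
    """Closed interval [lo, hi] with outward-rounding ignored for brevity."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Evaluating f(x) = x*x + 2*x over x in [-1, 1] yields an enclosure of the
# range: [-3, 3], which contains (but overestimates) the true range [-1, 3].
x = Interval(-1.0, 1.0)
enc = x * x + 2 * x
```

The overestimation arises because interval arithmetic treats the two occurrences of `x` as independent; enclosure algorithms shrink it by subdividing the search box.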
Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems
, 2002
"... ..."
Parallel calculation of sensitivity derivatives for aircraft design using automatic differentiation
 AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA 94-4261
, 1994
"... Sensitivity derivative (SD) calculation via automatic differentiation typical of that required for the aerodynamic design of a transporttype aircraft is considered. Two ways of computing SDs via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicabilit ..."
Abstract

Cited by 24 (16 self)
 Add to MetaCart
Sensitivity derivative (SD) calculation via automatic differentiation, typical of that required for the aerodynamic design of a transport-type aircraft, is considered. Two ways of computing SDs via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SDs are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60, with coupling between a wing grid generation program and a state-of-the-art 3D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
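The derivative-augmented code that ADIFOR generates follows the forward mode of automatic differentiation: every value carries a directional derivative alongside it. The minimal Python sketch below (invented names, not ADIFOR output) shows the idea, and why cost scales with the number of design variables: one seeded sweep is needed per variable.

```python
class Dual:
    """Forward-mode AD: a value paired with its directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

# Hypothetical "design function" f(x1, x2) = x1*x2 + x1.
# Seed x1 (dot=1) to get df/dx1; a second sweep with x2 seeded would
# be needed for df/dx2 -- hence cost grows with the design-variable count.
x1 = Dual(3.0, 1.0)
x2 = Dual(2.0, 0.0)
f = x1 * x2 + x1        # f.val = 9.0, f.dot = df/dx1 = x2 + 1 = 3.0
```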
Unifying Bit-Width Optimisation for Fixed-Point and Floating-Point Designs
 In 12th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM'04)
, 2004
"... This paper presents a method that offers a uniform treatment for bitwidth optimisation of both fixedpoint and floatingpoint designs. Our work utilises automatic differentiation to compute the sensitivities of outputs to the bitwidth of the various operands in the design. This sensitivity analysis ..."
Abstract

Cited by 19 (9 self)
 Add to MetaCart
This paper presents a method that offers a uniform treatment for bit-width optimisation of both fixed-point and floating-point designs. Our work utilises automatic differentiation to compute the sensitivities of outputs to the bit-width of the various operands in the design. This sensitivity analysis enables us to explore and compare fixed-point and floating-point implementations for a particular design. As a result we can automate the selection of the optimal number representation for each variable in a design to optimise area and performance. We implement our method in the BitSize tool targeting reconfigurable architectures, which takes user-defined constraints to direct the optimisation procedure. We illustrate our approach using applications such as ray tracing and function approximation.
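The core of such a sensitivity analysis can be sketched as follows: to first order, quantising operand i to b fractional bits perturbs the output by roughly |∂f/∂xᵢ| · 2⁻ᵇ, so a more sensitive operand needs more bits to meet the same error budget. The sketch below is hypothetical (it is not the BitSize tool; `f` and all names are made up) and uses a central-difference derivative in place of the paper's AD machinery:

```python
import math

def f(x, y):
    # Made-up datapath: one multiply and one sine.
    return x * y + math.sin(x)

def sensitivity(f, args, i, h=1e-6):
    """Central-difference estimate of |df/dargs[i]| (AD stand-in)."""
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    return abs((f(*up) - f(*dn)) / (2 * h))

def output_error(f, args, i, b):
    """First-order output error from a b-bit fraction on operand i:
    quantisation step 2**-b scaled by that operand's sensitivity."""
    return sensitivity(f, args, i) * 2.0 ** (-b)

args = (1.5, 2.0)
# df/dx = y + cos(x) ~ 2.07 > df/dy = x = 1.5, so under a common error
# budget, x would be assigned a wider fraction than y.
err_y_8bit = output_error(f, args, 1, 8)
```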
Small-Signal Circuit Analysis and Sensitivity Computations with the PVL Algorithm
 IEEE Trans. Circuits and Systems-II: Analog and Digital Signal Processing
, 1996
"... . We describe the application of the PVL algorithm to the smallsignal analysis of circuits, including sensitivity computations. The PVL algorithm is based on the efficient computation of the Pad'e approximation of the network transfer function via the Lanczos process. The numerical stability of the ..."
Abstract

Cited by 16 (6 self)
 Add to MetaCart
We describe the application of the PVL algorithm to the small-signal analysis of circuits, including sensitivity computations. The PVL algorithm is based on the efficient computation of the Padé approximation of the network transfer function via the Lanczos process. The numerical stability of the algorithm permits the computation of the Padé approximation to any accuracy over a certain frequency range. We extend the algorithm to compute sensitivities of network transfer functions, their poles, and their zeros, with respect to arbitrary circuit parameters, with minimal additional computational cost. We demonstrate the implementation of our algorithm on circuit examples.

1 Introduction

The process of analyzing analog circuits with full accounting of parasitic elements, interconnect analysis at the board or chip level, and numerous other circuit-simulation tasks often require the analysis of large linear networks. These networks can become extremely large, especially when circuits ar...
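What the Padé approximant matches are the moments of the transfer function H(s) = cᵀ(G + sC)⁻¹b. The sketch below computes the first few moments explicitly for a made-up toy network (all matrices are random data, not from the paper); explicit moment computation like this is numerically unstable at high order, which is precisely the problem the Lanczos process in PVL avoids, so this only illustrates the quantities being matched:

```python
import numpy as np

# Toy linear network: H(s) = c^T (G + s C)^{-1} b.
rng = np.random.default_rng(0)
n = 5
G = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # conductance-like matrix
C = np.diag(np.arange(1.0, n + 1.0))                # capacitance-like matrix
b = rng.standard_normal(n)                          # input vector
c = rng.standard_normal(n)                          # output vector

A = np.linalg.solve(G, C)    # G^{-1} C
r = np.linalg.solve(G, b)    # G^{-1} b

# Moments of H around s = 0: m_k = (-1)^k c^T (G^{-1} C)^k G^{-1} b,
# i.e. the Taylor coefficients H(s) = sum_k m_k s^k.
moments = []
v = r
for k in range(4):
    moments.append((-1) ** k * (c @ v))
    v = A @ v

# Sanity anchor: m_0 = H(0) = c^T G^{-1} b.
H0 = c @ np.linalg.solve(G, b)
```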
Diagrammatic Derivation of Gradient Algorithms for Neural Networks
 in Neural Computation
, 1994
"... Deriving gradient algorithms for timedependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple b ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms including backpropagation and backpropagation-through-time without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach.

1 Introduction

Deriving the appropriate gradient descent algorithm for a new network architecture or system configuration normally involves brute force derivative calculations. For example, the celebrated backpropagation algorithm for training feedforward neural networks was derived by repeatedly applying chain rule expansions backward through the ne...
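The reverse-order chain-rule propagation that such diagrammatic derivations systematise can be sketched as a tiny reverse-mode AD tape (a generic illustration with invented names, not the paper's block-diagram calculus): each node records its parents and local partials, and one backward pass applies the chain rule for every architecture alike.

```python
class Node:
    """A recorded operation: value, parent nodes with local partials, gradient."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, list(parents), 0.0

    def __add__(self, o):
        return Node(self.val + o.val, [(self, 1.0), (o, 1.0)])

    def __mul__(self, o):
        return Node(self.val * o.val, [(self, o.val), (o, self.val)])

def backward(out):
    """Accumulate gradients in reverse topological order (the chain rule)."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p, _ in n.parents:
                topo(p)
            order.append(n)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, partial in node.parents:
            parent.grad += partial * node.grad

x, y = Node(3.0), Node(2.0)
z = x * y + x           # z = xy + x
backward(z)             # dz/dx = y + 1 = 3, dz/dy = x = 3
```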
Jacobian code generated by source transformation and vertex elimination can be as efficient as hand-coding
 ACM Transactions on Mathematical Software
, 2004
"... This paper presents the first extended set of results from EliAD, a sourcetransformation implementation of the vertexelimination Automatic Differentiation approach to calculating the Jacobians of functions defined by Fortran code (Griewank and Reese, Automatic Differentiation of Algorithms: Theory ..."
Abstract

Cited by 13 (3 self)
 Add to MetaCart
This paper presents the first extended set of results from EliAD, a source-transformation implementation of the vertex-elimination Automatic Differentiation approach to calculating the Jacobians of functions defined by Fortran code (Griewank and Reese, Automatic Differentiation of Algorithms: Theory, Implementation, and Application, 1991, pp. 126–135). We introduce the necessary theory in terms of well-known algorithms of numerical linear algebra applied to the linear, extended Jacobian system that prescribes the relationship between the derivatives of all variables in the function code. Using an example, we highlight the potential for numerical instability in vertex-elimination. We describe the source-transformation implementation of our tool EliAD and present results from 5 test cases, 4 of which are taken from the MINPACK-2 collection (Averick et al., Report ANL/MCS-TM-150, 1992) and for which hand-coded Jacobian codes are available. On 5 computer/compiler platforms, we show that the Jacobian code obtained by EliAD is as efficient as hand-coded Jacobian code. It is also between 2 and 20 times more efficient than that produced by current, state-of-the-art Automatic Differentiation tools, even when such tools make use of sophisticated techniques such as sparse Jacobian compression. We demonstrate the effectiveness of reverse-ordered pre-elimination from the (successively updated) extended Jacobian
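Vertex elimination can be illustrated on a toy linearised computational graph: eliminating an intermediate vertex multiplies each in-edge partial by each out-edge partial and accumulates the products, exactly as Gaussian elimination acts on the extended Jacobian. The example function and all names below are invented, not taken from EliAD:

```python
import math

# Function y = sin(x1 * x2), linearised at (x1, x2) = (1.5, 2.0).
# One intermediate vertex v = x1 * x2; edges hold local partial derivatives.
x1, x2 = 1.5, 2.0
edges = {
    ("x1", "v"): x2,                   # d(x1*x2)/dx1
    ("x2", "v"): x1,                   # d(x1*x2)/dx2
    ("v", "y"): math.cos(x1 * x2),     # d sin(v)/dv
}

def eliminate(edges, vertex):
    """Remove `vertex`, fusing every in-edge with every out-edge."""
    ins  = {e: w for e, w in edges.items() if e[1] == vertex}
    outs = {e: w for e, w in edges.items() if e[0] == vertex}
    rest = {e: w for e, w in edges.items() if e not in ins and e not in outs}
    for (src, _), w_in in ins.items():
        for (_, dst), w_out in outs.items():
            rest[(src, dst)] = rest.get((src, dst), 0.0) + w_in * w_out
    return rest

jac = eliminate(edges, "v")   # direct input->output edges = Jacobian entries
```

The order in which vertices are eliminated changes the operation count (and, as the paper notes, the numerical stability), which is what an elimination-ordering heuristic optimises.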
Computing Hopf Bifurcations I
, 1993
"... This paper addresses the problems of detecting Hopf bifurcations in systems of ordinary differential equations and following curves of Hopf points in two parameter families of vector fields. The established approach to this problem relies upon augmenting the equilibrium condition so that a Hopf bifu ..."
Abstract

Cited by 13 (2 self)
 Add to MetaCart
This paper addresses the problems of detecting Hopf bifurcations in systems of ordinary differential equations and following curves of Hopf points in two-parameter families of vector fields. The established approach to this problem relies upon augmenting the equilibrium condition so that a Hopf bifurcation occurs at an isolated, regular point of the extended system. We propose two new methods of this type, based on classical algebraic results regarding the roots of polynomial equations and properties of Kronecker products for matrices. In addition to their utility as augmented systems for use with standard Newton-type continuation methods, they are also particularly well-adapted for solution by computer algebra techniques for vector fields of small or moderate dimension.
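The underlying detection criterion is that a conjugate pair of eigenvalues of the equilibrium Jacobian crosses the imaginary axis. The sketch below is plain eigenvalue tracking with bisection, not the paper's augmented systems; the toy problem is the Hopf normal form, whose Jacobian at the origin is [[μ, -1], [1, μ]] with eigenvalues μ ± i, so the Hopf point is at μ = 0:

```python
import numpy as np

def jacobian(mu):
    """Equilibrium Jacobian of the Hopf normal form at the origin."""
    return np.array([[mu, -1.0], [1.0, mu]])

def max_real_part(mu):
    """Leading real part of the Jacobian spectrum; crosses 0 at the Hopf point."""
    return np.linalg.eigvals(jacobian(mu)).real.max()

# Bisect on the sign change of the leading real part.
lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if max_real_part(mid) < 0.0:
        lo = mid
    else:
        hi = mid
hopf_mu = 0.5 * (lo + hi)
```

Augmented systems improve on this by making the Hopf point a regular root of one nonlinear system, so Newton's method converges directly instead of bracketing a sign change.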
Automatic Differentiation And Spectral Projected Gradient Methods For Optimal Control Problems
, 1998
"... this paper is to show the application of these canonical formulas to optimal control processes being integrated by the RungeKutta family of numerical methods. There are many papers concerning numerical comparisions between automatic differentiation, finite differences and symbolic differentiation. ..."
Abstract

Cited by 11 (5 self)
 Add to MetaCart
this paper is to show the application of these canonical formulas to optimal control processes being integrated by the Runge-Kutta family of numerical methods. There are many papers concerning numerical comparisons between automatic differentiation, finite differences and symbolic differentiation. See, for example, [1, 2, 6, 7, 21] among others. Another objective is to test the behavior of the spectral projected gradient methods introduced in [5]. These methods combine the classical projected gradient with two recently developed ingredients in optimization: (i) the nonmonotone line search schemes of Grippo, Lampariello and Lucidi ([24]), and (ii) the spectral steplength (introduced by Barzilai and Borwein ([3]) and analyzed by Raydan ([30, 31])). This choice of the steplength requires little computational work and greatly speeds up the convergence of gradient methods. The numerical experiments presented in [5], showing the high performance of these fast and easily implementable methods, motivate us to combine the spectral projected gradient methods with automatic differentiation. Both tools are used in this work for the development of codes for numerical solution of optimal control problems. In Section 2 of this paper, we apply the canonical formulas to the discrete version of the optimal control problem. In Section 3, we give a concise survey about spectral projected gradient algorithms. Section 4 presents some numerical experiments. Some final remarks are presented in Section 5.

2 CANONICAL FORMULAS

The basic optimal control problem can be described as follows: Let a process governed by a system of ordinary differential equations be

dx(t)/dt = f(x(t), u(t), p),  T_0 <= t <= T_f,  (1)

where x : [T_0, T_f] -> R^(n_x), u : [T_0, T_f] -> U ⊆ R^(n_u), U compact, and p ∈ V ...
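The two SPG ingredients named above can be sketched on a box-constrained quadratic. The following Python sketch implements projection plus the Barzilai-Borwein spectral steplength, omitting the nonmonotone line search for brevity; the problem data, box bounds, and names are made up, not from the paper:

```python
import numpy as np

# Minimise 0.5 x^T A x - b^T x over the box [0, 1]^2 (toy problem data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
lo, hi = np.zeros(2), np.ones(2)

def grad(x):
    return A @ x - b

def proj(x):
    """Projection onto the box: the cheap SPG feasibility step."""
    return np.clip(x, lo, hi)

x = proj(np.array([1.0, 1.0]))
g = grad(x)
alpha = 1.0                       # initial steplength
for _ in range(100):
    x_new = proj(x - alpha * g)   # projected gradient step
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-12:
        # BB1 spectral steplength: a Rayleigh-quotient estimate of 1/curvature.
        alpha = (s @ s) / (s @ y)
    x, g = x_new, g_new
# Converges to the unconstrained minimiser A^{-1} b = (0.2, 0.4),
# which here lies inside the box.
```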
ADMIT-1: Automatic Differentiation and MATLAB Interface Toolbox
, 1997
"... This article provides an introduction to the design and usage of ADMIT1 ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
This article provides an introduction to the design and usage of ADMIT-1