Results 1–10 of 13
Achieving Logarithmic Growth of Temporal and Spatial Complexity in Reverse Automatic Differentiation
, 1991
A coarse grid three-dimensional global inverse model of the atmospheric transport, 1. Adjoint model and Jacobian matrix
, 1996
Abstract

Cited by 27 (5 self)
TM2 is a global three-dimensional model of the atmospheric transport of passive tracers. The adjoint of TM2 is a model that allows the efficient evaluation of derivatives of the simulated tracer concentration at observational locations with respect to the tracer's sources and sinks. We describe the generation of the adjoint model by applying the Tangent linear and Adjoint Model Compiler in the reverse mode of automatic differentiation to the code of TM2. Using CO2 as an example of a chemically inert tracer, the simulated concentration at observational locations is linear in the surface exchange fluxes, and thus the transport can be represented by the model's Jacobian matrix. In many current inverse modeling studies, such a matrix has been computed by multiple runs of a transport model for a set of prescribed surface flux patterns. The computational cost has been proportional to the number of patterns. In contrast, for differentiation in reverse mode, the cost is independ...
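The cost argument can be illustrated with a toy linear transport model (the matrix A, step count T, and observation index k below are illustrative, not TM2's): a single adjoint sweep with A^T recovers the same Jacobian row that otherwise takes one forward model run per prescribed flux pattern.

```python
import numpy as np

# Toy transport: the concentration evolves by a fixed linear step A applied
# T times to a source vector s (n flux components); one observation samples
# component k of the final state. All names here are illustrative.
rng = np.random.default_rng(0)
n, T, k = 6, 4, 2
A = rng.random((n, n)) / n

def forward(s):
    c = s.copy()
    for _ in range(T):
        c = A @ c
    return c[k]          # simulated concentration at one observation site

# Forward, pattern-by-pattern: n model runs, one per unit flux pattern
row_fwd = np.array([forward(np.eye(n)[j]) for j in range(n)])

# Reverse (adjoint): one backward sweep with A^T, independent of n patterns
adj = np.eye(n)[k]
for _ in range(T):
    adj = A.T @ adj
row_rev = adj

assert np.allclose(row_fwd, row_rev)
```

The adjoint sweep costs one pass per observation rather than one pass per flux pattern, which is the asymmetry the abstract exploits.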
Expansion and estimation of the range of nonlinear functions
 Math. Comp.
, 1996
Abstract

Cited by 10 (0 self)
Abstract. Many verification algorithms use an expansion f(x) ∈ f(x̃) + S · (x − x̃), f: R^n → R^n for x ∈ X, where the set of matrices S is usually computed as a gradient or by means of slopes. In the following, an expansion scheme is described which frequently yields sharper inclusions for S. This also allows the computation of sharper inclusions for the range of f over a domain. Roughly speaking, f has to be given by means of a computer program; the process of expanding f can then be fully automated. The function f need not be differentiable. For locally convex or concave functions, special improvements are described. Moreover, in contrast to other methods, x̃ ∩ X may be empty without implying large overestimations for S. This may be advantageous in practical applications.

0. Notation. We denote by IR the set of real intervals: X ∈ IR ⇒ X = [inf(X), sup(X)] = { x ∈ R | inf(X) ≤ x ≤ sup(X) }. By PT we denote the power set over a given set T, and we use the canonical embedding IR ⊆ PR. The set of n-dimensional interval vectors is denoted by IR^n, i.e., X ∈ IR^n ⇒ X = { (x_i) ∈ R^n | x_i ∈ X_i } with X_i ∈ IR, 1 ≤ i ≤ n. Interval vectors are compact. Interval operations and power set operations are defined in the usual way; details can be found in standard books on interval analysis, among others [10, 2, 11]. If not explicitly noted otherwise, all operations are power set operations.

1. Expansion of nonlinear functions. A differentiable function f: D ⊆ R^n → R can be locally expanded by its gradient. For x̃ ∈ D, X ⊆ D, and [g] ∈ IR^n with ∇f(x̃ ∪ X) ⊆ [g], there holds

    ∀ x ∈ X ∃ g ∈ [g] : f(x) − f(x̃) = g^T · (x − x̃).   (1.0)

Here, ∪ denotes the convex hull, and ∇f(x̃ ∪ X) denotes the range of ∇f over x̃ ∪ X. The gradient, for real and for interval arguments, can be computed using automatic differentiation [4, 12]; this process is fully automated. This approach has three disadvantages: (1) f needs to be differentiable,
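A one-dimensional sketch of why slope expansions can beat gradient expansions, using f(x) = x², x̃ = 1, and X = [0, 2] (numbers chosen purely for illustration, not from the paper): the identity f(x) − f(x̃) = (x + x̃)(x − x̃) holds exactly, so the slope enclosure X + x̃ is often narrower than the gradient enclosure 2 · hull(x̃, X).

```python
# Gradient form: f(x) - f(xt) ∈ ∇f(hull(xt, X)) * (x - xt), with ∇f(x) = 2x.
# Slope form:    f(x) - f(xt) = (x + xt) * (x - xt) exactly for f(x) = x^2,
# so S = X + xt encloses the slope set; intervals are (lo, hi) pairs.
xt = 1.0
X = (0.0, 2.0)                                     # the interval [0, 2]
grad_S = (2 * min(X[0], xt), 2 * max(X[1], xt))    # 2 * hull(xt, X) = [0, 4]
slope_S = (X[0] + xt, X[1] + xt)                   # X + xt = [1, 3]

def width(I):
    return I[1] - I[0]

assert width(slope_S) < width(grad_S)              # slope enclosure is sharper
```

For this example the slope enclosure [1, 3] halves the width of the gradient enclosure [0, 4], which is the kind of sharpening the abstract refers to.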
GRADIENT: Algorithmic Differentiation in Maple
, 1993
Abstract

Cited by 2 (0 self)
Many scientific applications require computation of the derivatives of a function f as well as the function values of f itself. All computer algebra systems can differentiate functions represented by formulae. But not all functions can be described by formulae, and formulae are not always the most effective means for representing functions and derivatives.
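The point that AD operates on programs rather than formulae can be sketched with a minimal forward-mode dual-number class (an illustration only, not the GRADIENT package's interface): the function below is defined by a loop, yet its derivative falls out of the overloaded arithmetic.

```python
# Minimal forward-mode AD via dual numbers: each value carries (val, dot),
# and multiplication propagates the product rule through the program.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

    __rmul__ = __mul__

def power_by_loop(x, n):
    # f(x) = x^n computed by repeated multiplication, not a symbolic formula
    y = Dual(1.0)
    for _ in range(n):
        y = y * x
    return y

r = power_by_loop(Dual(2.0, 1.0), 5)     # seed dx/dx = 1
assert r.val == 32.0 and r.dot == 80.0   # d/dx x^5 = 5x^4 = 80 at x = 2
```

A computer algebra system would need the closed form x^n to differentiate symbolically; the dual-number program needs only the loop.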
Bayesian model updating using Hybrid Monte Carlo simulation with application to structural dynamics models with many uncertain parameters
 Journal of Engineering Mechanics
Abstract

Cited by 2 (0 self)
Abstract: In recent years, Bayesian model updating techniques based on measured data have been applied to system identification of structures and to structural health monitoring. A fully probabilistic Bayesian model updating approach provides a robust and rigorous framework for these applications due to its ability to characterize modeling uncertainties associated with the underlying structural system and to its exclusive foundation on the probability axioms. The plausibility of each structural model within a set of possible models, given the measured data, is quantified by the joint posterior probability density function of the model parameters. This Bayesian approach requires the evaluation of multidimensional integrals, and this usually cannot be done analytically. Recently, some Markov chain Monte Carlo simulation methods have been developed to solve the Bayesian model updating problem. However, in general, the efficiency of these proposed approaches is adversely affected by the dimension of the model parameter space. In this paper, the Hybrid Monte Carlo method (also known as the Hamiltonian Markov chain method) is investigated, and we show how it can be used to solve higher-dimensional Bayesian model updating problems. Practical issues for the feasibility of the Hybrid Monte Carlo method for such problems are addressed, and improvements are proposed to make it more effective and efficient for solving such model updating problems. New formulae for Markov chain convergence assessment are derived. The effectiveness of the proposed approach for Bayesian model updating of structural dynamic models with many uncertain parameters is illustrated with a simulated data example involving a ten-story building that has 31 model parameters to be updated.
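The core of Hybrid (Hamiltonian) Monte Carlo can be sketched in a few lines: sample a momentum, integrate Hamiltonian dynamics with a leapfrog scheme, and accept or reject with a Metropolis step. The Gaussian target, step size, and trajectory length below are illustrative, not the paper's structural-dynamics setup.

```python
import numpy as np

def hmc(grad_neg_logp, neg_logp, theta0, n_samples=2000, eps=0.1, L=20, seed=1):
    """Minimal HMC sketch: leapfrog integration plus Metropolis correction."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(theta.shape)      # fresh Gaussian momentum
        th, mom = theta.copy(), p.copy()
        mom -= 0.5 * eps * grad_neg_logp(th)      # leapfrog: half momentum step
        for _ in range(L - 1):
            th += eps * mom                       # full position steps
            mom -= eps * grad_neg_logp(th)        # full momentum steps
        th += eps * mom
        mom -= 0.5 * eps * grad_neg_logp(th)      # closing half momentum step
        h_old = neg_logp(theta) + 0.5 * p @ p     # Hamiltonian before/after
        h_new = neg_logp(th) + 0.5 * mom @ mom
        if rng.random() < np.exp(h_old - h_new):  # Metropolis accept/reject
            theta = th
        samples.append(theta)
    return np.array(samples)

# Example target: 5-dimensional standard normal "posterior"
nlp = lambda x: 0.5 * x @ x
gnlp = lambda x: x
draws = hmc(gnlp, nlp, np.zeros(5))
```

Because each proposal follows the gradient of the log posterior for a whole trajectory, HMC can take large, high-acceptance moves even in many dimensions, which is the property the abstract exploits for the 31-parameter example.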
Automatic Differentiation and Bisection
 MapleTech, The Maple Technical Newsletter
, 1997
Abstract

Cited by 1 (1 self)
Automatic differentiation is a technique for generating computer programs which compute the value of the derivative of a function given by an original computer program. This article shows that automatic differentiation may "fail" if it is applied to an iterative equation solver based on bisection; in particular, it fails to reproduce the user's expectations. A careful investigation of this behavior gives insight into the technique of automatic differentiation. Maple is of great help in exploring this problem. Introduction. Let f : IR^n → IR be a differentiable function and P a program which for given arguments computes the function value. By automatic differentiation, P is transformed into a program P′ which computes both the partial derivatives and the function value of f. The basic idea behind automatic differentiation is rather simple: the computation of the derivatives is obtained by applying the elementary rules of differentiation to each statement in the computation sche...
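The failure mode can be reproduced with a few lines of forward-mode AD (dual numbers; the setup below, solving x² − a = 0 by bisection, is an illustration rather than the article's Maple code): every bisection iterate is built only from interval endpoints chosen by comparisons, so the computed root carries a zero derivative with respect to the parameter, even though the true sensitivity is d√a/da = 1/(2√a).

```python
class Dual:
    """Forward-mode dual number: (val, dot) with the product/difference rules."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

    def __sub__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val - o.val, self.dot - o.dot)

def bisect_sqrt(a, lo=0.0, hi=2.0, iters=50):
    # Solve x*x - a = 0 by bisection; lo, hi, mid stay plain floats, so the
    # parameter a influences the result only through the branch decisions.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = Dual(mid) * Dual(mid) - a
        if fmid.val > 0:
            hi = mid
        else:
            lo = mid
    return Dual(0.5 * (lo + hi))     # carries no derivative information

root = bisect_sqrt(Dual(1.0, 1.0))   # seed da/da = 1
assert abs(root.val - 1.0) < 1e-12   # the value is right: sqrt(1) = 1
assert root.dot == 0.0               # but AD reports 0, not 1/(2*sqrt(a)) = 0.5
```

The value converges correctly while the derivative is identically zero: differentiating the program statement by statement is not the same as differentiating the function the iteration converges to.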
MO Mathematical Optimization
Abstract
this paper. For example, output from testing Rosenbrock's function for 12 variables consists of the following:

20
X0 VECTOR: -1.20 1.00 -1.20 1.00 -1.20 1.00 -1.20 1.00 -1.20 1.00 -1.20 1.00
Y VECTOR: 1.09 0.77 -0.88 0.64 0.71 0.58 0.94 -0.90 0.62 0.77 -0.90 -0.98
ENTERING TESTGH ROUTINE:
THE FUNCTION VALUE AT X = 1.45200000E+02
THE FIRST-ORDER TAYLOR TERM, (G, Y) = 3.19353760E+02
THE SECOND-ORDER TAYLOR TERM, (Y, HY) = 5.39772665E+03
EPSMIN = 1.42108547E-14

EPS          F               TAYLOR          DIFF            RATIO
5.0000E-01   1.09854129E+03  9.79592712E+02  1.18948574E+02
2.5000E-01   4.07080835E+02  3.93717398E+02  1.33634374E+01  8.90104621E+00
1.2500E-01   2.28865318E+02  2.27288959E+02  1.57635878E+00  8.47740855E+00
6.2500E-02   1.75893210E+02  1.75702045E+02  1.91165417E-01  8.24604580E+00
3.1250E-02   1.57838942E+02  1.57815414E+02  2.35282126E-02  8.12494428E+00
1.5625E-02   1.50851723E+02  1.50848805E+02  2.91806005E-03  8.06296382E+00
7.8125E-03   1.47860040E+02  1.47859677E+02  3.63322099E-04  8.03160629E+00
3.9063E-03   1.46488702E+02  1.46488657E+02  4.53255493E-05  8.01583443E+00
1.9531E-03   1.45834039E+02  1.45834033E+02  5.66008660E-06  8.00792506E+00
9.7656E-04   1.45514443E+02  1.45514443E+02  7.07160371E-07  8.00396463E+00
4.8828E-04   1.45356578E+02  1.45356578E+02  8.83731524E-08  8.00198196E+00
DIFF IS SMALL (LESS THAN 2.97291798E-08 IN ABSOLUTE VALUE)

Note that the RATIO is larger than eight when EPS is larger and then decreases steadily toward eight. A small error in the code would produce much different values. We encourage the student to try this testing routine on several subroutines that compute objective functions and their derivatives; errors should be introduced into the derivative codes systematically to examine the ability of TESTGH to detect them and provide the right diagnosis, as outlined above. Methods for Unconstrained Continuous...
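The behavior of the DIFF and RATIO columns can be mimicked with a small Taylor-test sketch (the fixed direction y and the finite-difference curvature estimate below are assumptions for illustration, not TESTGH's implementation): with a correct gradient, the second-order Taylor remainder F(x + eps·y) − [F(x) + eps·(g, y) + eps²/2·(y, Hy)] is O(eps³), so each halving of eps should divide DIFF by roughly 8.

```python
import numpy as np

def rosenbrock(x):
    # Standard chained Rosenbrock on consecutive pairs (x1,x2), (x3,x4), ...
    return np.sum(100 * (x[1::2] - x[::2]**2)**2 + (1 - x[::2])**2)

def grad(x):
    g = np.zeros_like(x)
    g[::2] = -400 * x[::2] * (x[1::2] - x[::2]**2) - 2 * (1 - x[::2])
    g[1::2] = 200 * (x[1::2] - x[::2]**2)
    return g

x = np.array([-1.2, 1.0] * 6)     # the standard 12-variable starting point
y = np.full(12, 0.3)              # a fixed perturbation direction (assumed)
gy = grad(x) @ y
# curvature term (y, Hy) estimated by central differences of the gradient
yHy = y @ (grad(x + 1e-6 * y) - grad(x - 1e-6 * y)) / 2e-6

diffs = []
for k in range(10):
    eps = 0.5 / 2**k
    taylor = rosenbrock(x) + eps * gy + 0.5 * eps**2 * yHy
    diffs.append(abs(rosenbrock(x + eps * y) - taylor))

ratios = [a / b for a, b in zip(diffs, diffs[1:])]   # should approach 8
```

A sign error or wrong term in the gradient would destroy the cubic decay, so the ratios would drift away from 8; this is exactly the diagnostic the TESTGH output table encodes.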
Automatic Differentiation Applied to Unsaturated Flow: ADOL-C
, 1992
Abstract
We have experimented with many variants of the code dual.c for two-dimensional unsaturated flow in a porous medium. The goal has been to speed up the evaluation of derivatives required for a Newton iteration. We have primarily investigated the use of ADOL-C, a C++ tool for automatic differentiation, and have come to the following conclusions: three colors suffice for computing the nonlinear portion of the Jacobian. That speeds up the Jacobian evaluation in the original code by a factor of two.
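The "three colors suffice" remark reflects standard sparse-Jacobian compression: columns whose sparsity patterns do not overlap share a color, and one directional derivative (or one finite-difference evaluation) per color recovers the whole Jacobian. A sketch on a hypothetical tridiagonal system (the function F below is illustrative, not dual.c):

```python
import numpy as np

n = 9

def F(x):
    # Tridiagonal coupling: F_i depends only on x_{i-1}, x_i, x_{i+1}
    y = 2 * x**2
    y[:-1] += x[1:]
    y[1:] += x[:-1]
    return y

def jac_colored(x, h=1e-7):
    # Columns j, j+3, j+6, ... have disjoint row supports, so 3 colors suffice
    colors = [np.arange(c, n, 3) for c in range(3)]
    J = np.zeros((n, n))
    for cols in colors:
        d = np.zeros(n)
        d[cols] = 1.0                              # seed one color at a time
        col_mix = (F(x + h * d) - F(x)) / h        # compressed FD evaluation
        for j in cols:                             # unpack: rows j-1 .. j+1
            rows = np.arange(max(j - 1, 0), min(j + 2, n))
            J[rows, j] = col_mix[rows]
    return J

x = np.linspace(0.0, 1.0, n)
J = jac_colored(x)                                  # 3 evaluations, not 9
J_exact = np.diag(4 * x) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
assert np.allclose(J, J_exact, atol=1e-5)
```

Three compressed evaluations replace nine column-by-column ones here; the same idea explains the factor-of-two speedup the note reports for the nonlinear part of the Jacobian.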