Results 1–10 of 27
Complete search in continuous global optimization and constraint satisfaction, Acta Numerica 13, 2004
"... A chapter for ..."
The ADIFOR 2.0 System for the Automatic Differentiation of Fortran 77 Programs
RICE UNIVERSITY, 1994
"... Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for firstorder derivatives. The ADIFOR 2.0 s ..."
Abstract

Cited by 55 (17 self)
 Add to MetaCart
Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for first-order derivatives. The ADIFOR 2.0 system consists of three main components: the ADIFOR 2.0 preprocessor, the ADIntrinsics Fortran 77 exception-handling system, and the SparsLinC library. The combination of these tools provides the ability to deal with arbitrary Fortran 77 syntax, to handle codes containing single- and double-precision real or complex-valued data, to fully support and easily customize the translation of Fortran 77 intrinsics, and to transparently exploit sparsity in derivative computations. ADIFOR 2.0 has been successfully applied to a 60,000-line code, which we believe to be a new record in automatic differentiation.
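The chain-rule augmentation the abstract describes can be sketched in a few lines. This is a minimal forward-mode example with a dict-based sparse gradient, loosely echoing the sparsity exploitation of SparsLinC; the class and method names here are illustrative, not ADIFOR's actual interface.

```python
class ADVar:
    """A value paired with a sparse gradient {variable index: partial}."""
    def __init__(self, value, grad=None):
        self.value = value
        self.grad = grad or {}

    def __add__(self, other):
        other = other if isinstance(other, ADVar) else ADVar(other)
        g = dict(self.grad)
        for i, d in other.grad.items():
            g[i] = g.get(i, 0.0) + d        # sum rule
        return ADVar(self.value + other.value, g)

    def __mul__(self, other):
        other = other if isinstance(other, ADVar) else ADVar(other)
        g = {i: d * other.value for i, d in self.grad.items()}
        for i, d in other.grad.items():
            g[i] = g.get(i, 0.0) + d * self.value   # product rule
        return ADVar(self.value * other.value, g)

# f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
x = ADVar(3.0, {0: 1.0})
y = ADVar(4.0, {1: 1.0})
f = x * y + x
print(f.value, f.grad)  # 15.0 {0: 5.0, 1: 3.0}
```

Only variables that actually influence a value appear in its gradient dict, which is the essence of exploiting sparsity in derivative computations.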
Interval Analysis on Directed Acyclic Graphs for Global Optimization
J. Global Optimization, 2004
"... A directed acyclic graph (DAG) representation of optimization problems represents each variable, each operation, and each constraint in the problem formulation by a node of the DAG, with edges representing the ow of the computation. ..."
Abstract

Cited by 40 (8 self)
 Add to MetaCart
A directed acyclic graph (DAG) representation of optimization problems represents each variable, each operation, and each constraint in the problem formulation by a node of the DAG, with edges representing the flow of the computation.
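As a rough illustration of such a representation (not the paper's implementation), here is a toy DAG in which each variable and operation is a node, edges are operand links, and shared subexpressions appear as a single node that is evaluated only once.

```python
class Node:
    """One DAG node: op is "var", "+", or "*"; children are operand nodes."""
    def __init__(self, op, children=(), value=None):
        self.op, self.children, self.value = op, tuple(children), value

def evaluate(node, env, cache=None):
    """Evaluate the DAG bottom-up, computing each shared node only once."""
    cache = {} if cache is None else cache
    if id(node) in cache:
        return cache[id(node)]
    if node.op == "var":
        result = env[node.value]
    elif node.op == "+":
        result = sum(evaluate(c, env, cache) for c in node.children)
    elif node.op == "*":
        a, b = (evaluate(c, env, cache) for c in node.children)
        result = a * b
    cache[id(node)] = result
    return result

x = Node("var", value="x")                 # a single node for x ...
expr = Node("+", [Node("*", [x, x]), x])   # ... shared by x*x and + x
print(evaluate(expr, {"x": 3.0}))  # 12.0
```

The cache keyed by node identity is what distinguishes a DAG evaluation from a tree evaluation: the shared node `x` is looked up, not recomputed.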
Recent Progress in Unconstrained Nonlinear Optimization Without Derivatives
MATHEMATICAL PROGRAMMING, 1997
"... We present an introduction to a new class of derivative free methods for unconstrained optimization. We start by discussing the motivation for such methods and why they are in high demand by practitioners. We then review the past developments in this field, before introducing the features that ch ..."
Abstract

Cited by 37 (2 self)
 Add to MetaCart
We present an introduction to a new class of derivative-free methods for unconstrained optimization. We start by discussing the motivation for such methods and why they are in high demand by practitioners. We then review the past developments in this field, before introducing the features that characterize the newer algorithms. In the context of a trust region framework, we focus on techniques that ensure a suitable "geometric quality" of the considered models. We then outline the class of algorithms based on these techniques, as well as their respective merits. We finally conclude the paper with a discussion of open questions and perspectives.
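The methods surveyed build interpolation models inside a trust region; as a much simpler illustration of optimizing without derivatives, here is a compass (coordinate) search sketch that uses only function values, with a shrinking step length playing a role loosely analogous to a shrinking trust region. This is a stand-in example, not an algorithm from the paper.

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f using only function evaluations (no derivatives)."""
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0  # shrink the stencil when no direction improves
    return x, fx

# Minimize (x - 1)^2 + (y + 2)^2 without any derivative information.
sol, val = compass_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                          [0.0, 0.0])
print(sol, val)
```

Direct-search methods like this are robust but slow; the point of the model-based methods in the paper is to recover faster convergence while still never evaluating a derivative.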
Computation and Application of Taylor Polynomials with Interval Remainder Bounds
Reliable Computing, 1998
"... . The expansion of complicated functions of many variables in Taylor polynomials is an important problem for many applications, and in practice can be performed rather conveniently (even to high orders) using polynomial algebras. An important application of these methods is the field of beam physics ..."
Abstract

Cited by 34 (2 self)
 Add to MetaCart
The expansion of complicated functions of many variables in Taylor polynomials is an important problem for many applications, and in practice can be performed rather conveniently (even to high orders) using polynomial algebras. An important application of these methods is the field of beam physics, where often expansions in about six variables to orders between five and ten are used. However, often it is necessary to also know bounds for the remainder term of the Taylor formula if the arguments lie within certain intervals. In principle such bounds can be obtained by interval bounding of the (n+1)-st derivative, which in turn can be obtained with polynomial algebra; but in practice the method is rather inefficient and susceptible to blowup because of the need of repeated interval evaluations of the derivative. Here we present a new method that allows the computation of sharp remainder intervals in parallel with the accumulation of derivatives up to order n. The method is useful for a...
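The classical (n+1)-st-derivative bound that the abstract contrasts its method with can be sketched for a one-variable example. This is an assumed illustration (the function, degree, and interval are chosen here), not the paper's algorithm.

```python
import math

def exp_taylor_with_remainder(x0, n, lo, hi):
    """Degree-n Taylor coefficients of exp about x0, plus an interval
    enclosing the Lagrange remainder for arguments in [lo, hi]."""
    # Every derivative of exp is exp, so the k-th coefficient is exp(x0)/k!.
    coeffs = [math.exp(x0) / math.factorial(k) for k in range(n + 1)]
    # Bound the (n+1)-st derivative over the interval (exp is increasing).
    deriv_hi = math.exp(hi)
    width = max(hi - x0, x0 - lo)
    r = deriv_hi * width ** (n + 1) / math.factorial(n + 1)
    return coeffs, (-r, r)

coeffs, (rlo, rhi) = exp_taylor_with_remainder(0.0, 3, -0.5, 0.5)
# Check the enclosure at x = 0.5: the true error must lie in [rlo, rhi].
poly = sum(c * 0.5 ** k for k, c in enumerate(coeffs))
assert rlo <= math.exp(0.5) - poly <= rhi
```

For one function of one variable this is cheap; the inefficiency the abstract mentions arises when the (n+1)-st derivative of a complicated many-variable code must itself be evaluated repeatedly in interval arithmetic.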
Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language
In Advances in Probabilistic and Other Parsing, 2005
"... Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives general agendabased algorithms for computing weights and gradients; briefly discusses DynatoDyna program transformati ..."
Abstract

Cited by 13 (8 self)
 Add to MetaCart
Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives general agenda-based algorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code.
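One familiar instance of agenda-based weight computation is shortest distances in the min-plus semiring: items carry weights, and popping an item propagates improved weights to its consequents. This sketch shows the agenda pattern only; it does not reproduce Dyna's language or implementation.

```python
import heapq

def shortest_distances(edges, source):
    """edges: dict node -> list of (neighbor, weight).
    Agenda items are (weight, node) pairs, processed best-first."""
    dist = {source: 0.0}
    agenda = [(0.0, source)]
    while agenda:
        d, u = heapq.heappop(agenda)
        if d > dist.get(u, float("inf")):
            continue  # stale agenda item; a better weight already landed
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w              # aggregation by min
                heapq.heappush(agenda, (d + w, v))
    return dist

edges = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0)]}
print(shortest_distances(edges, "s"))  # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```

Swapping min/+ for other semiring operations (sum/product for inside probabilities, max/+ for Viterbi) reuses the same agenda loop, which is the generality the weighted-deduction view buys.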
Time-Parallel Computation of Pseudo-Adjoints for a Leapfrog Scheme
 Preprint ANL/MCSP6390197, Mathematics and Computer Science Division, Argonne National Laboratory, 1997
"... The leapfrog scheme is a commonly used secondorder difference scheme for solving differential equations. If Z(t) denotes the state of the system at time t, the leapfrog scheme computes the state at the next time step as Z(t + 1) = H(Z(t); Z(t \Gamma 1); W ), where H is the nonlinear timestepping ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
The leapfrog scheme is a commonly used second-order difference scheme for solving differential equations. If Z(t) denotes the state of the system at time t, the leapfrog scheme computes the state at the next time step as Z(t + 1) = H(Z(t), Z(t − 1), W), where H is the nonlinear time-stepping operator and W are parameters that are not time dependent. In this article, we show how the associativity of the chain rule of differential calculus can be used to compute a so-called adjoint x^T · (dZ(t)/d[Z(0), W]) efficiently in a parallel fashion. To this end, we (1) employ the reverse mode of automatic differentiation at the outermost level, (2) use a sparsity-exploiting incarnation of the forward mode of automatic differentiation to compute derivatives of H at every time step, and (3) exploit chain rule associativity to compute derivatives at individual time steps in parallel. We report on experimental results with a 2D shallow-water equation model problem on an IBM SP parallel...
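The associativity being exploited can be seen in a toy form: the adjoint x^T · (dZ(T)/dZ(0)) is the product x^T · J_T · ... · J_1 of per-step Jacobians, so each J_t can be formed independently (in parallel, by forward mode), after which the outer reverse-mode sweep is a cheap sequence of vector-matrix products. The 2x2 Jacobians below are made up for illustration; the leapfrog operator itself is not modeled.

```python
def matvec_T(J, x):
    """Row vector times matrix: returns x^T J as a plain list."""
    n = len(J[0])
    return [sum(x[i] * J[i][j] for i in range(len(J))) for j in range(n)]

# Per-step Jacobians J_1, J_2, J_3 (in a real code each would come from
# forward-mode AD of the time-stepping operator H at that step).
jacobians = [[[1.0, 0.5], [0.0, 1.0]],
             [[1.0, 0.0], [0.3, 1.0]],
             [[0.9, 0.1], [0.2, 0.8]]]

x = [1.0, 2.0]
adj = x
for J in reversed(jacobians):  # reverse mode at the outermost level
    adj = matvec_T(J, adj)     # always a vector-matrix product, never
print(adj)                     # a matrix-matrix product
```

Because the sweep keeps a vector rather than an accumulated matrix, its cost per step is O(n^2) instead of O(n^3), while the expensive per-step Jacobian construction parallelizes over time steps.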
Cube Summing, Approximate Inference with Non-Local Features, and Dynamic Programming without Semirings
"... We introduce cube summing, a technique that permits dynamic programming algorithms for summing over structures (like the forward and inside algorithms) to be extended with nonlocal features that violate the classical structural independence assumptions. It is inspired by cube pruning (Chiang, 2007; ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
We introduce cube summing, a technique that permits dynamic programming algorithms for summing over structures (like the forward and inside algorithms) to be extended with non-local features that violate the classical structural independence assumptions. It is inspired by cube pruning (Chiang, 2007; Huang and Chiang, 2007) in its computation of non-local features dynamically using scored k-best lists, but also maintains additional residual quantities used in calculating approximate marginals. When restricted to local features, cube summing reduces to a novel semiring (k-best+residual) that generalizes many of the semirings of Goodman (1999). When non-local features are included, cube summing does not reduce to any semiring, but is compatible with generic techniques for solving dynamic programming equations.
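A plain k-best semiring (without the residual bookkeeping that distinguishes k-best+residual) can be sketched as follows, with scores as costs where smaller is better: plus merges two k-best lists and keeps the k best, times combines them pairwise. This is an assumed illustration of the semiring idea, not the paper's construction.

```python
K = 2  # list length k; chosen here for the example

def kplus(a, b):
    """Semiring plus: merge two k-best lists, keep the k best."""
    return sorted(a + b)[:K]

def ktimes(a, b):
    """Semiring times: combine every pair of costs, keep the k best."""
    return sorted(x + y for x in a for y in b)[:K]

a = [1.0, 3.0]
b = [2.0, 2.5]
print(kplus(a, b))   # [1.0, 2.0]
print(ktimes(a, b))  # [3.0, 3.5]
```

`ktimes` enumerates the full k×k grid of pairwise sums before truncating; cube pruning's contribution, which cube summing inherits, is to explore that grid best-first so most of it is never built.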
Automatic Differentiation as Nonarchimedean Analysis
, 1992
"... It is shown how the techniques of automatic differentiation can be viewed in a broader context as an application of analysis on a nonarchimedean field. The rings used in automatic differentiation can be ordered in a natural way and form finite dimensional real algebras which contain infinitesimals. ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
It is shown how the techniques of automatic differentiation can be viewed in a broader context as an application of analysis on a nonarchimedean field. The rings used in automatic differentiation can be ordered in a natural way and form finite dimensional real algebras which contain infinitesimals. Some of these algebras can be extended to become a Cauchy-complete real-closed nonarchimedean field, which forms an infinite dimensional real vector space and is denoted by L. On this field, a calculus is developed. Rules of differentiation and certain fundamental theorems are discussed. A remarkable property of differentiation is that difference quotients with infinitely small differences yield the exact derivative up to an infinitely small error. This is of historical interest since it justifies the concept of derivatives as differential quotients. But it is also of practical relevance; it turns out that the algebraic operations used to compute derivatives in automatic differentiation are...
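The simplest algebra of this kind is the dual numbers, where a nilpotent infinitesimal eps satisfies eps^2 = 0, and f(x + eps) = f(x) + f'(x)·eps holds exactly for polynomial operations. This minimal sketch shows the connection to the difference-quotient remark in the abstract; it does not develop the field L itself.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real + o.real, self.eps + o.eps)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real * o.real,
                    self.real * o.eps + self.eps * o.real)  # eps^2 term drops

def f(x):
    return x * x * x + x  # f'(x) = 3x^2 + 1

# Evaluating at x + eps yields f(x) in the real part and f'(x) in the
# eps part: the infinitesimal difference quotient is exact here.
d = f(Dual(2.0, 1.0))
print(d.real, d.eps)  # 10.0 13.0
```

The eps coefficient is exactly f'(2) = 3·4 + 1 = 13, with no truncation error of the kind a finite difference quotient would incur.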