Results 1–8 of 8
More AD of Nonlinear AMPL Models: Computing Hessian Information and Exploiting Partial Separability
in Computational Differentiation: Applications, Techniques, and …, 1996
Abstract (Cited by 16, 10 self):
We describe computational experience with automatic differentiation of mathematical programming problems expressed in the modeling language AMPL. Nonlinear expressions are translated to loop-free code, which makes it easy to compute gradients and Jacobians by backward automatic differentiation. The nonlinear expressions may be interpreted or, to gain some evaluation speed at the cost of increased preparation time, converted to Fortran or C. We have extended the interpretive scheme to evaluate Hessian (of the Lagrangian) times vector products. Detecting partially separable structure (sums of terms, each depending, perhaps after a linear transformation, on only a few variables) is of independent interest, as some solvers exploit this structure. It can be detected automatically by suitable "tree walks". Exploiting this structure permits an AD computation of the entire Hessian matrix by accumulating Hessian-times-vector computations for each term, and can lead to a much faster computation…
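The partial-separability idea in this abstract can be sketched concretely. The following is an illustrative simplification (not AMPL's implementation): each term's small Hessian, involving only that term's few variables, is scattered into the full matrix. The function and variable names are invented for illustration, and the per-term Hessians are written by hand rather than produced by AD.

```python
# Illustrative sketch only (not AMPL's implementation): assembling the full
# Hessian of a partially separable f(x) = sum_i t_i(x) from each term's
# small Hessian, which involves only that term's few variables.

def assemble_hessian(terms, n):
    """terms: list of (indices, term_hessian) pairs; term_hessian[a][b] is
    the second partial of one term w.r.t. its a-th and b-th own variables."""
    H = [[0.0] * n for _ in range(n)]
    for idx, Ht in terms:
        for a, i in enumerate(idx):
            for b, j in enumerate(idx):
                H[i][j] += Ht[a][b]   # scatter the small block into H
    return H

# Example: f(x) = x0*x1 + x2**2 on R^3, two partially separable terms.
terms = [((0, 1), [[0.0, 1.0], [1.0, 0.0]]),   # Hessian of x0*x1
         ((2,),   [[2.0]])]                     # Hessian of x2**2
H = assemble_hessian(terms, 3)
# H == [[0,1,0],[1,0,0],[0,0,2]] (as floats)
```

In the scheme the abstract describes, each small block would come from a Hessian-times-vector computation on a single term; only the accumulation step is shown here.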
Extending an algebraic modeling language to support constraint programming
INFORMS Journal on Computing, 2001
Abstract (Cited by 8, 3 self):
Although algebraic modeling languages are widely used in linear and nonlinear programming applications, their use for combinatorial or discrete optimization has largely been limited to developing integer linear programming models for solution by general-purpose branch-and-bound procedures. Yet much of a modeling language's underlying structure for expressing integer programs is equally useful for describing more general combinatorial optimization constructs. Constraint programming solvers offer an alternative approach to solving combinatorial optimization problems, in which natural combinatorial constructs are addressed directly within the solution procedure. Hence the growing popularity of constraint programming motivates a variety of extensions to algebraic modeling languages for the purpose of describing combinatorial problems and conveying them to solvers. We examine some of these language extensions along with the significant changes in solver interface design that they require. In particular, we describe how several useful combinatorial features have been added to the AMPL modeling language and how AMPL's general-purpose solver interface has been adapted accordingly. As an illustration of a solver connection, we provide examples from an AMPL driver for ILOG Solver. This work has been supported in part by Bell Laboratories and by grants DMI-9414487 …
Semiautomatic Differentiation for Efficient Gradient Computations
Abstract (Cited by 6, 1 self):
This paper concerns ongoing work; it compares several implementations of backward AD, describes a simple operator-overloading implementation specialized for gradient computations, and compares the implementations on some mesh-optimization examples. Ideas from the specialized implementation could be used in fully general source-to-source translators for C and C++.
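As a rough illustration of the operator-overloading approach this abstract mentions, here is a minimal reverse-mode AD sketch in Python. It is not the paper's implementation (those are in C/C++), and the class and method names are invented for illustration.

```python
# Minimal operator-overloading reverse-mode AD sketch (illustrative only;
# names like Var and backward are invented, not from the paper's code).

class Var:
    def __init__(self, value, parents=()):
        self.value = value      # result of the forward evaluation
        self.parents = parents  # (input Var, local partial derivative) pairs
        self.grad = 0.0         # adjoint, filled in by the reverse sweep

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Push each adjoint contribution back along the recorded operations.
        stack = [(self, seed)]
        while stack:
            node, adj = stack.pop()
            node.grad += adj
            for parent, local in node.parents:
                stack.append((parent, adj * local))

x, y = Var(3.0), Var(4.0)
f = x * y + x * x           # f(x, y) = x*y + x^2
f.backward()
# x.grad == 10.0 (= y + 2x), y.grad == 3.0 (= x)
```

This sweep propagates one contribution per path through the expression graph, which is fine for a sketch but exponential on deeply shared graphs; an efficient implementation would instead visit each node once in reverse topological order.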
Symbolic-Algebraic Computations in a Modeling Language for Mathematical Programming
2000
Abstract (Cited by 2, 0 self):
This paper was written for the proceedings of a seminar on "Symbolic-algebraic …"
Automatically finding and exploiting partially separable structure in nonlinear programming problems
Bell Laboratories, 1996
Abstract (Cited by 2, 1 self):
Nonlinear programming problems often involve an objective and constraints that are partially separable — the sum of terms involving only a few variables (perhaps after a linear change of variables). This paper discusses finding and exploiting such structure in nonlinear programming problems expressed symbolically in the AMPL modeling language. For some computations, such as computing Hessians by backwards automatic differentiation, exploiting partial separability can give significant speedups.

Overview

To set the context for this paper, it is necessary to talk about various aspects of nonlinear programming problems and automatic differentiation. Accordingly, it is convenient to begin with brief overviews of Newton's method, nonlinear programming, and automatic differentiation. Since I report computational experience with problems expressed symbolically in the AMPL modeling language, a brief account of AMPL is also appropriate. In the initial overviews, I will omit most references.

Newton's Method for Nonlinear Equations

Newton's method is in some ways an ideal algorithm for solving systems of nonlinear equations. It is easily derived by a linearization argument, and it converges quickly when started close to a "strong" solution. As a simple example, Table 1 shows the sequence of residual errors for a square-root iteration; note how the residuals are approximately squared in successive iterations ("quadratic convergence"). In more detail, if f: R^n → R^n is a differentiable mapping of real n-space to itself, then f(y) ≈ f(x) + f′(x)(y − x), where f′(x) is the Jacobian matrix of f at x. So if f′(x) is nonsingular and y = x − f′(x)^{-1} f(x), then f(y) ≈ 0, which gives Newton's method:

(1)  x_{k+1} = x_k − f′(x_k)^{-1} f(x_k).

To carry out a step of Newton's method, it is of course not necessary to explicitly form f′(x_k)^{-1}; rather it suffices to solve …
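The square-root iteration behind Table 1 is easy to reproduce. The sketch below applies iteration (1) to f(x) = x^2 − a; the residual values it produces are illustrative, not the paper's Table 1 entries.

```python
# Newton's method (1) applied to f(x) = x^2 - a, i.e. a square-root
# iteration. The residuals |f(x_k)| shrink roughly quadratically:
# each one is about the square of the previous one.

def newton_sqrt(a, x0, steps):
    x, residuals = x0, []
    for _ in range(steps):
        r = x * x - a                # residual f(x_k)
        residuals.append(abs(r))
        x = x - r / (2.0 * x)        # x_{k+1} = x_k - f(x_k)/f'(x_k)
    return x, residuals

x, res = newton_sqrt(2.0, 1.5, 5)
# Starting from x0 = 1.5, five steps reach sqrt(2) to machine precision,
# and each residual is smaller than the square of its predecessor.
```

Note that only a linear solve with f′(x_k) is needed per step in the general n-dimensional case; the explicit division here is the one-dimensional special case.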
Conveying Problem Structure from an Algebraic Modeling Language to Optimization Algorithms
Abstract (Cited by 1, 0 self):
Optimization algorithms can exploit problem structures of various kinds, such as sparsity of derivatives, complementarity conditions, block structure, stochasticity, priorities for discrete variables, and information about piecewise-linear terms. Moreover, some algorithms deduce additional structural information that may help the modeler. We review and discuss some ways of conveying structure, with examples from our designs for the AMPL modeling language. We show in particular how "declared suffixes" provide a new and useful way to express structure and acquire solution information.

1. INTRODUCTION

A modeling language can provide a useful way to express the elaborate optimization problems that often arise in practice. Many of these problems have structure that an optimization algorithm can exploit, such as sparsity of first and second derivatives, complementarity conditions, block structure, time-dependent stochasticity, priorities for discrete variables, and information about piecewise-linear terms…
DESIGN PRINCIPLES AND NEW DEVELOPMENTS IN THE AMPL MODELING LANGUAGE
Abstract:
The design of the AMPL modeling language stresses naturalness of expressions, generality of iterating over sets, separation of model and data, ease of data manipulation, and automatic updating of derived values when fundamental values change. We show how such principles have guided the addition of database access, complementarity modeling, and other language features.
Numerical Issues and Influences in the Design of Algebraic Modeling Languages for Optimization
Abstract:
The idea of a modeling language is to describe mathematical problems symbolically in a way that is familiar to people but that allows for processing by computer systems. In particular the concept of an algebraic modeling language, based on objective and constraint expressions in terms of decision variables, has proved to be valuable for a broad range of optimization and related problems. One modeling language can work with numerous solvers, each of which implements one or more optimization algorithms. The separation of model specification from solver execution is thus a key tenet of modeling language design. Nevertheless, several issues in numerical analysis that are critical to solvers are also important in implementations of modeling languages. So-called presolve procedures, which tighten bounds with the aim of eliminating some variables and constraints, are numerical algorithms that require carefully chosen tolerances and can benefit from directed roundings. Correctly rounded binary-decimal conversion is valuable in portably conveying problem instances and in debugging. Further rounding options offer tradeoffs between accuracy, convenience, and readability in displaying…
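The bound-tightening step at the heart of a presolve procedure can be sketched as follows. This is a simplified, hypothetical illustration for a single linear constraint; it deliberately ignores the tolerance and directed-rounding issues the abstract says a real implementation must handle.

```python
# Simplified sketch of presolve bound tightening for one constraint
# sum_i a[i]*x[i] <= b with bounds lo[i] <= x[i] <= hi[i].
# A real presolve (as the abstract notes) must apply tolerances and
# directed rounding so tightened bounds never cut off feasible points.

def tighten_bounds(a, b, lo, hi):
    n = len(a)
    # Smallest achievable value of each term a[i]*x[i] under the bounds.
    term_min = [a[i] * (lo[i] if a[i] >= 0 else hi[i]) for i in range(n)]
    total_min = sum(term_min)
    lo, hi = lo[:], hi[:]
    for i in range(n):
        rest = total_min - term_min[i]    # minimum of the other terms
        if a[i] > 0:
            hi[i] = min(hi[i], (b - rest) / a[i])
        elif a[i] < 0:
            lo[i] = max(lo[i], (b - rest) / a[i])
    return lo, hi

# Example: x + y <= 4 with 0 <= x, y <= 10 tightens both uppers to 4.
lo, hi = tighten_bounds([1.0, 1.0], 4.0, [0.0, 0.0], [10.0, 10.0])
# lo == [0.0, 0.0], hi == [4.0, 4.0]
```

If a tightened interval becomes empty, the constraint is infeasible; if it shrinks to a point, the variable can be eliminated, which is the payoff the abstract describes.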