Results 1–10 of 33
CLP(Intervals) Revisited
, 1994
Abstract

Cited by 136 (19 self)
The design and implementation of constraint logic programming (CLP) languages over intervals is revisited. Instead of decomposing complex constraints into simple primitive constraints as in CLP(BNR), complex constraints are manipulated as a whole, enabling more sophisticated narrowing procedures to be applied in the solver. This idea is embodied in a new CLP language, Newton, whose operational semantics is based on the notion of box-consistency, an approximation of arc-consistency, and whose implementation uses the interval Newton method. Experimental results indicate that Newton outperforms existing languages by an order of magnitude and is competitive with some state-of-the-art tools on some standard benchmarks. Limitations of our current implementation and directions for further work are also identified.
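To give a feel for the bound narrowing that box-consistency is built on, the following is a much-simplified sketch (not the paper's algorithm): it shrinks an interval around a root of a single constraint f(x) = 0 by repeatedly discarding half-intervals whose interval evaluation excludes zero. The names `interval_eval`, `contains_zero`, and `narrow` are illustrative.

```python
# Minimal sketch of interval-based bound narrowing for one constraint
# f(x) = x*x - 2 = 0 on [0, 2]. Assumes f is evaluated with a simple
# interval extension; this is a toy, not Newton's solver.

def interval_eval(lo, hi):
    # Interval extension of f(x) = x*x - 2, valid for lo, hi >= 0.
    return (lo * lo - 2.0, hi * hi - 2.0)

def contains_zero(lo, hi):
    flo, fhi = interval_eval(lo, hi)
    return flo <= 0.0 <= fhi

def narrow(lo, hi, eps=1e-10):
    # Bisect: keep the leftmost half that may still contain a zero,
    # discard halves whose interval evaluation excludes zero.
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if contains_zero(lo, mid):
            hi = mid
        else:
            lo = mid
    return lo, hi

lo, hi = narrow(0.0, 2.0)
# The narrowed interval brackets sqrt(2) ≈ 1.41421356
```

Real box-consistency narrows each bound of each variable's domain against an interval extension of the whole constraint; this sketch only shows the pruning-by-interval-evaluation idea in one dimension.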
Solving Polynomial Systems Using a Branch and Prune Approach
 SIAM Journal on Numerical Analysis
, 1997
Abstract

Cited by 112 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists in enforcing, at each node of the search tree, a unique local consistency condition, called box-consistency, which approximates the notion of arc-consistency well known in artificial intelligence. Box-consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
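The branch & prune scheme can be sketched in a few lines for a single polynomial equation: prune a box whenever an interval evaluation of the constraint excludes zero, otherwise bisect, and report boxes once they are small enough. This toy version (the names `f_range` and `branch_and_prune` are illustrative) finds both roots of x² = 2 on [-2, 2]:

```python
def f_range(lo, hi):
    # Interval extension of f(x) = x*x - 2, handling intervals crossing 0.
    cands = [lo * lo, hi * hi]
    mn = 0.0 if lo <= 0.0 <= hi else min(cands)
    return (mn - 2.0, max(cands) - 2.0)

def branch_and_prune(lo, hi, eps=1e-8, out=None):
    if out is None:
        out = []
    flo, fhi = f_range(lo, hi)
    if flo > 0.0 or fhi < 0.0:   # prune: the box cannot contain a zero
        return out
    if hi - lo < eps:            # box small enough: report it
        out.append((lo, hi))
        return out
    mid = 0.5 * (lo + hi)        # branch: bisect and recurse on both halves
    branch_and_prune(lo, mid, eps, out)
    branch_and_prune(mid, hi, eps, out)
    return out

boxes = branch_and_prune(-2.0, 2.0)
# Some returned box contains -sqrt(2) and some contains +sqrt(2)
```

Newton's actual pruning enforces box-consistency per node rather than a single interval evaluation, which is what makes it effective far from a solution.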
Computing Large Sparse Jacobian Matrices Using Automatic Differentiation
 SIAM Journal on Scientific Computing
, 1993
Abstract

Cited by 57 (30 self)
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. We consider three approaches to computing such matrices: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation in Fortran) tool. We compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. Our conclusion is that automatic differentiation is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed. Authors: Brett M. Averick, Jorge J. Moré, Christian H. Bischof, Alan Carle, and Andreas Griewank.
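The core idea behind tools like ADIFOR, forward-mode automatic differentiation, can be sketched with dual numbers: each arithmetic operation propagates a derivative alongside its value via the chain rule, and one forward sweep per input direction yields one Jacobian column. This minimal sketch is illustrative (the `Dual` class and `jacobian` helper are not ADIFOR's machinery, which operates on Fortran source):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    # Value/derivative pair; each operation applies the chain rule.
    val: float
    dot: float
    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def jacobian(f, x):
    # One forward sweep per input direction yields one Jacobian column.
    n = len(x)
    cols = []
    for j in range(n):
        duals = [Dual(x[i], 1.0 if i == j else 0.0) for i in range(n)]
        cols.append([y.dot for y in f(duals)])
    # Transpose columns into rows: J[i][j] = d f_i / d x_j
    return [[cols[j][i] for j in range(n)] for i in range(len(cols[0]))]

def f(v):
    x, y = v
    return [x * y, x * x - y]

J = jacobian(f, [3.0, 2.0])
# J == [[2.0, 3.0], [6.0, -1.0]] at (x, y) = (3, 2)
```

For a sparse Jacobian, the point of the paper's approach is that structurally orthogonal columns can share a sweep, which is what makes AD competitive on large problems.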
Interval constraint logic programming
 Constraint Programming: Basics and Trends, volume 910 of LNCS
, 1995
Abstract

Cited by 48 (5 self)
In this paper, we present an overview on the use of interval arithmetic to process numerical constraints in Constraint Logic Programming. The main principle is to approximate n-ary relations over ℝ with Cartesian products of intervals whose bounds are taken in a finite subset of ℝ. Variables represent real values whose domains are intervals defined in the same manner. Narrowing operators are defined from approximations. These operators compute, from an interval and a relation, a set included in the initial interval. Sets of constraints are then processed by a local consistency algorithm which, at each step, prunes values from the initial intervals. This algorithm is shown to be correct and to terminate, on the basis of a certain number of properties of narrowing operators. We focus here on the description of the general framework based on approximations and on its application to interval constraint solving over continuous and discrete quantities; we establish a strong link between approximations and local consistency notions and show that arc-consistency is an instance of the approximation framework. We finally describe recent work on different variants of the initial algorithm, proposed by John Cleary and developed by W. Older and A. Vellino, which have been proposed in this context. These variants address four particular points: generalization of the constraint language, improvement of domain reductions, efficiency of the computation and, finally, cooperation with other solvers. Some open questions are also identified.
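The local consistency algorithm described here is a fixed-point loop over narrowing operators: apply each operator until no domain shrinks. The sketch below uses two hypothetical toy constraints (x + y = 4 and x = y, with hand-written narrowing operators `narrow_sum` and `narrow_eq`) purely to show the loop's shape; it also shows why local consistency alone may not isolate a solution:

```python
def propagate(domains, narrowers):
    # Fixed-point loop: reapply each narrowing operator until no
    # domain shrinks. Each operator must return a subset of its input.
    changed = True
    while changed:
        changed = False
        for narrow in narrowers:
            new = narrow(domains)
            if new != domains:
                domains = new
                changed = True
    return domains

def narrow_sum(d):
    # Constraint x + y = 4: intersect x with 4 - y and y with 4 - x.
    (xl, xh), (yl, yh) = d
    return ((max(xl, 4 - yh), min(xh, 4 - yl)),
            (max(yl, 4 - xh), min(yh, 4 - xl)))

def narrow_eq(d):
    # Constraint x = y: both domains shrink to their intersection.
    (xl, xh), (yl, yh) = d
    lo, hi = max(xl, yl), min(xh, yh)
    return ((lo, hi), (lo, hi))

box = propagate(((0.0, 10.0), (0.0, 3.0)), [narrow_sum, narrow_eq])
# Fixed point is ((1.0, 3.0), (1.0, 3.0)): the unique solution (2, 2)
# is enclosed, but local consistency alone does not isolate it;
# a solver would now bisect and propagate again.
```

This incompleteness is exactly why interval CLP systems pair propagation with splitting, as in the branch & prune entries above.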
Computing gradients in large-scale optimization using automatic differentiation
 INFORMS Journal on Computing
, 1997
Abstract

Cited by 31 (11 self)
The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, since these functions are ubiquitous in large-scale problems. We describe two approaches for computing gradients of partially separable functions via automatic differentiation. In our experiments we employ the ADIFOR (Automatic Differentiation of Fortran) tool and the SparsLinC (Sparse Linear Combination) library. We use applications from the MINPACK-2 test problem collection to compare the numerical reliability and computational efficiency of these approaches with hand-coded derivatives and approximations based on differences of function values. Our conclusion is that automatic differentiation is the method of choice, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
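Partial separability means f(x) = Σᵢ fᵢ(x), where each element function fᵢ touches only a few variables; the full gradient is assembled by scattering each small element gradient into the right slots. A minimal sketch of that accumulation (the function name and the element encoding are illustrative, not ADIFOR's interface):

```python
def gradient_partially_separable(elements, x):
    # f(x) = sum_i f_i over index sets S_i: accumulate each element's
    # small dense gradient into the full gradient at its indices.
    g = [0.0] * len(x)
    for idx, grad_fn in elements:
        local = grad_fn(*(x[i] for i in idx))
        for i, gi in zip(idx, local):
            g[i] += gi
    return g

# Example: f(x) = (x0 - x1)^2 + (x1 - x2)^2, two elements of two
# variables each; each element supplies its own gradient.
elements = [
    ((0, 1), lambda a, b: (2 * (a - b), -2 * (a - b))),
    ((1, 2), lambda a, b: (2 * (a - b), -2 * (a - b))),
]
g = gradient_partially_separable(elements, [1.0, 2.0, 4.0])
# g == [-2.0, -2.0, 4.0]
```

The paper's two AD approaches differ in where this sparsity is exploited: differentiating the elements and assembling, versus letting sparse derivative data structures (SparsLinC) do the accumulation.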
A software package for the numerical integration of ODEs by means of high-order Taylor methods
, 2001
Abstract

Cited by 29 (1 self)
This paper revisits the Taylor method for the numerical integration of initial value problems of Ordinary Differential Equations (ODEs). The main issue is to present a computer program that, given a set of ODEs, produces the corresponding Taylor numerical integrator. The step size control adaptively selects both order and step size to achieve a prescribed error while trying to minimize the global number of operations. The package provides support for several extended precision arithmetics, including user-defined types.
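The heart of a Taylor integrator is generating high-order Taylor coefficients of the solution directly from the ODE via recurrences, then evaluating the truncated series. A sketch for the single equation x′ = x², whose coefficients obey the Cauchy-product recurrence (k+1)·a₍k+1₎ = Σⱼ aⱼ·a₍k−j₎ (the function name `taylor_step` and fixed order are illustrative; the package derives such recurrences automatically and adapts the order):

```python
def taylor_step(x0, h, order=15):
    # Taylor coefficients a_k of the solution of x' = x**2, x(0) = x0,
    # from the Cauchy-product recurrence for the square of a series.
    a = [x0]
    for k in range(order):
        a.append(sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1))
    # Evaluate the truncated series at t = h by Horner's rule.
    s = 0.0
    for c in reversed(a):
        s = s * h + c
    return s

# The exact solution is x(t) = x0 / (1 - x0*t); one step of h = 0.1
# from x0 = 1 should give approximately 1/0.9 ≈ 1.1111
x = taylor_step(1.0, 0.1)
```

Because each extra order costs only one more recurrence term, very high orders with large steps become practical, which is the method's selling point.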
Automatic Differentiation Of Advanced CFD Codes For Multidisciplinary Design
 Journal on Computing Systems in Engineering
, 1992
Abstract

Cited by 23 (16 self)
This paper addresses one such synergism for computa ...
Surface Intersection Using Affine Arithmetic
 In Graphics Interface
, 1996
Abstract

Cited by 18 (7 self)
We describe a variant of a domain decomposition method proposed by Gleicher and Kass for intersecting and trimming parametric surfaces. Instead of using interval arithmetic to guide the decomposition, the variant described here uses affine arithmetic, a tool recently proposed for range analysis. Affine arithmetic is similar to standard interval arithmetic, but takes into account correlations between operands and subformulas, generally providing much tighter bounds for the computed quantities. As a consequence, the quadtree domain decompositions are much smaller and the intersection algorithm runs faster.
Keywords: surface intersection, trimming surfaces, range analysis, interval analysis, CAGD.
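The correlation-tracking that distinguishes affine arithmetic can be shown with the classic x − x example: an affine form x₀ + Σᵢ xᵢ·εᵢ (with each εᵢ ∈ [−1, 1]) shares noise symbols between correlated quantities, so subtraction cancels exactly where plain intervals give [−1, 1]. This minimal sketch (class name and methods are illustrative, covering only the linear operations) demonstrates the effect:

```python
class Affine:
    # Affine form c + sum_i t[i] * eps_i, eps_i in [-1, 1]. Correlated
    # quantities share noise symbols, so x - x cancels exactly.
    _next = [0]

    def __init__(self, center, terms=None):
        self.c = center
        self.t = dict(terms or {})

    @classmethod
    def from_interval(cls, lo, hi):
        # A fresh input interval gets its own noise symbol.
        cls._next[0] += 1
        return cls((lo + hi) / 2, {cls._next[0]: (hi - lo) / 2})

    def __add__(self, o):
        t = dict(self.t)
        for k, v in o.t.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.c + o.c, t)

    def __neg__(self):
        return Affine(-self.c, {k: -v for k, v in self.t.items()})

    def __sub__(self, o):
        return self + (-o)

    def range(self):
        # Enclosing interval: center plus/minus the total deviation.
        r = sum(abs(v) for v in self.t.values())
        return (self.c - r, self.c + r)

x = Affine.from_interval(0.0, 1.0)
d = x - x
# d.range() == (0.0, 0.0); interval arithmetic would give [-1, 1]
```

Nonlinear operations (multiplication, etc.) additionally introduce a fresh noise symbol bounding the approximation error; it is the cancellation shown here that shrinks the quadtree decompositions in the intersection algorithm.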
ADIC: An extensible automatic differentiation tool for ANSI-C
 Software: Practice and Experience
, 1997
Abstract

Cited by 18 (6 self)
In scientific computing, we often require the derivatives ∂f/∂x of a function f expressed as a program with respect to some input parameter(s) x, say. Automatic differentiation (AD) techniques augment the program with derivative computation by applying the chain rule of calculus to elementary operations in an automated fashion. This article introduces ADIC (Automatic Differentiation of C), a new AD tool for ANSI-C programs. ADIC is currently the only tool for ANSI-C that employs a source-to-source program transformation approach; that is, it takes a C code and produces a new C code that computes the original results as well as the derivatives. We first present ADIC "by example" to illustrate the functionality and ease of use of ADIC and then describe in detail the architecture of ADIC. ADIC incorporates a modular design that provides a foundation for both rapid prototyping of better AD algorithms and their sharing across AD tools for different languages. A component architecture called AIF (Automatic Differentiation Intermediate Form) separates core AD concepts from their language-specific implementation and allows the development of generic AD modules that can be reused directly in other AIF-based AD tools. The language-specific ADIC frontend and backend canonicalize C programs to make them fit for semantic augmentation and manage, for example, the ...
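The source-to-source idea is that every assignment in the program gets a companion statement propagating derivatives. ADIC emits augmented C; the sketch below only mimics the statement-level augmentation by interpreting a straight-line program and executing, alongside each assignment, the chain-rule statement a transformation tool would emit (the program encoding and function name are illustrative):

```python
def run_augmented(program, env, grads):
    # For each assignment t = a op b, also execute the derivative
    # statement a source transformation would generate for it.
    for t, op, a, b in program:
        x, y = env[a], env[b]
        dx, dy = grads[a], grads[b]
        if op == '+':
            env[t], grads[t] = x + y, dx + dy      # sum rule
        elif op == '*':
            env[t], grads[t] = x * y, x * dy + y * dx  # product rule
    return env, grads

# f(x) = x*x + x as a straight-line program; derivative at x = 3
# is 2*3 + 1 = 7.
prog = [('t1', '*', 'x', 'x'), ('y', '+', 't1', 'x')]
env, grads = run_augmented(prog, {'x': 3.0}, {'x': 1.0})
# env['y'] == 12.0, grads['y'] == 7.0
```

A real tool performs this augmentation on the program text (after canonicalizing control flow, calls, and aliasing), which is what the ADIC frontend/backend and the AIF intermediate form handle.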
Consistency Techniques in Ordinary Differential Equations
, 2000
Abstract

Cited by 17 (1 self)
This paper takes a fresh look at the application of interval analysis to ordinary differential equations and studies how consistency techniques can help address the accuracy problems typically exhibited by these methods, while trying to preserve their efficiency. It proposes to generalize interval techniques into a two-step process: a forward process that computes an enclosure and a backward process that reduces this enclosure. Consistency techniques apply naturally to the backward (pruning) step but can also be applied to the forward phase. The paper describes the framework, studies the various steps in detail, proposes a number of novel techniques, and gives some preliminary experimental results to indicate the potential of this new research avenue.