Results 1–10 of 39
Numerica: a Modeling Language for Global Optimization
, 1997
Abstract

Cited by 170 (11 self)
Introduction Many science and engineering applications require the user to find solutions to systems of nonlinear constraints over real numbers or to optimize a nonlinear function subject to nonlinear constraints. This includes applications such as the modeling of chemical engineering processes and of electrical circuits, robot kinematics, chemical equilibrium problems, and design problems (e.g., nuclear reactor design). The field of global optimization is the study of methods to find all solutions to systems of nonlinear constraints and all global optima of optimization problems. Nonlinear problems raise many issues from a computational standpoint. Deciding whether a set of polynomial constraints has a solution is NP-hard; in fact, Canny [Canny, 1988] and Renegar [Renegar, 1988] have shown that the problem is in PSPACE, and it is not known whether it lies in NP. Nonlinear programming problems can be so hard that some methods are designed only to solve probl...
Solving Polynomial Systems Using a Branch and Prune Approach
 SIAM Journal on Numerical Analysis
, 1997
Abstract

Cited by 101 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists of enforcing, at each node of the search tree, a unique local consistency condition, called box consistency, which approximates the notion of arc consistency well known in artificial intelligence. Box consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
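The general branch & prune scheme the abstract describes can be sketched as follows. This is a hedged, minimal illustration of the search strategy only, not the paper's box-consistency pruning operator: a box is discarded when the interval evaluation of the constraint excludes zero, and bisected otherwise. The constraint f(x) = x^2 - 2 = 0 and the helper `f_interval` are illustrative choices, not from the paper.

```python
def f_interval(lo, hi):
    # Interval extension of f(x) = x**2 - 2 on [lo, hi].
    candidates = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(candidates)
    return (sq_lo - 2.0, max(candidates) - 2.0)

def branch_and_prune(lo, hi, tol=1e-9):
    """Return small boxes that may contain roots of f on [lo, hi]."""
    boxes, results = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = f_interval(a, b)
        if flo > 0.0 or fhi < 0.0:   # prune: 0 not in f([a, b])
            continue
        if b - a < tol:              # box small enough: report it
            results.append((a, b))
        else:                        # branch: bisect the box
            m = 0.5 * (a + b)
            boxes.extend([(a, m), (m, b)])
    return results

roots = branch_and_prune(-3.0, 3.0)   # tiny boxes around -sqrt(2) and +sqrt(2)
```

Because the interval evaluation is conservative, no root can ever be discarded; the cost of that soundness is that surviving boxes must be split until they are narrow enough to report.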
Interval arithmetic: From principles to implementation
 J. ACM
Abstract

Cited by 76 (12 self)
We start with a mathematical definition of a real interval as a closed, connected set of reals. Interval arithmetic operations (addition, subtraction, multiplication and division) are likewise defined mathematically, and we provide algorithms for computing these operations assuming exact real arithmetic. Next, we define interval arithmetic operations on intervals with IEEE 754 floating point endpoints to be sound and optimal approximations of the real interval operations, and we show that the IEEE standard's specification of operations involving the signed infinities, signed zeros, and the exact/inexact flag are such as to make a correct and optimal implementation more efficient. From the resulting theorems we derive data that are sufficiently detailed to convert directly to a program for efficiently implementing the interval operations. Finally, we extend these results to the case of general intervals, which are defined as connected sets of reals that are not necessarily closed.
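A rough sketch of two of the interval operations the abstract defines, with intervals as (lo, hi) pairs. The paper obtains soundness via IEEE 754 directed rounding; as an assumption for this illustration, outward rounding is approximated here by widening each endpoint one ulp with `math.nextafter`, which is sound but not optimal:

```python
import math

def round_out(lo, hi):
    # Widen each endpoint by one ulp so the result encloses the exact set.
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(x, y):
    # [a,b] + [c,d] = [a+c, b+d], rounded outward.
    return round_out(x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # Product interval spans the min and max of the four endpoint products.
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return round_out(min(products), max(products))

a, b = (1.0, 2.0), (-3.0, 4.0)
s = iadd(a, b)   # encloses [-2, 6]
p = imul(a, b)   # encloses [-6, 8]
```

A production implementation would instead switch the hardware rounding mode per endpoint, which is what makes the IEEE 754 specification discussed in the paper relevant.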
Subdivision Direction Selection In Interval Methods For Global Optimization
 SIAM J. Numer. Anal
, 1997
Abstract

Cited by 46 (18 self)
The role of the interval subdivision selection rule is investigated in branch-and-bound algorithms for global optimization. The class of rules that allow convergence for the model algorithm is characterized, and it is shown that the four rules investigated satisfy the conditions of convergence. A numerical study with a wide spectrum of test problems indicates that there are substantial differences between the rules in terms of the required CPU time, the number of function and derivative evaluations, and space complexity, and that two rules can provide substantial improvements in efficiency. Key words: global optimization, interval arithmetic, interval subdivision. AMS subject classifications: 65K05, 90C30. Abbreviated title: Subdivision directions in interval methods. 1. Introduction. Interval subdivision methods for global optimization [7, 21] aim at providing reliable solutions to global optimization problems min_{x in X} f(x) (1), where the objective function f : R^n -> R is continuous...
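Two representative direction-selection rules of the kind compared in such studies can be sketched as below: the classical widest-component rule and a smear-style rule that weights each width by a derivative magnitude. As a simplification, `grad` here is a point gradient; the rules in the literature use interval bounds on the derivatives.

```python
def widest_direction(box):
    # box: list of (lo, hi) pairs; pick the coordinate of maximal width.
    return max(range(len(box)), key=lambda i: box[i][1] - box[i][0])

def smear_direction(box, grad):
    # Weight each width by |df/dx_i|, favoring directions where f varies most.
    return max(range(len(box)),
               key=lambda i: abs(grad[i]) * (box[i][1] - box[i][0]))

box = [(0.0, 4.0), (0.0, 1.0)]
grad = (0.1, 10.0)               # f varies much faster along x_1
widest_direction(box)            # -> 0 (widest component)
smear_direction(box, grad)       # -> 1 (largest weighted width)
```

The example shows why the choice matters: the two rules can pick different coordinates on the same box, which changes the shape of the whole search tree.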
On the Selection of Subdivision Directions in Interval Branch-and-Bound Methods for Global Optimization
 J. Global Optimization
, 1995
Abstract

Cited by 30 (13 self)
This paper investigates the influence of the interval subdivision selection rule on the convergence of interval branch-and-bound algorithms for global optimization. For the class of rules that allows convergence, we study the effects of the rules on a model algorithm with special list ordering. Four different rules are investigated in theory and in practice. A wide spectrum of test problems is used for numerical tests, indicating that there are substantial differences between the rules with respect to the required CPU time, the number of function and derivative evaluations, and the necessary storage space. Two rules can provide considerable improvements in efficiency for our model algorithm. Keywords: global optimization, interval arithmetic, branch-and-bound, interval subdivision. 1. Introduction. The investigated class of interval branch-and-bound methods for global optimization [7], [8], [19] addresses the problem of finding guaranteed and reliable solutions of global optimization...
A Review Of Techniques In The Verified Solution Of Constrained Global Optimization Problems
, 1996
Abstract

Cited by 25 (6 self)
Elements and techniques of state-of-the-art automatically verified constrained global optimization algorithms are reviewed, including a description of ways of rigorously verifying feasibility for equality constraints and a careful consideration of the role of active inequality constraints. Previously developed algorithms and general work on the subject are also listed. Limitations of present knowledge are mentioned, and advice is given on which techniques to use in various contexts. Applications are discussed. 1 INTRODUCTION, BASIC IDEAS AND LITERATURE We consider the constrained global optimization problem: minimize φ(X) subject to c_i(X) = 0, i = 1, ..., m, (1.1) and a_{i_j} ≤ x_{i_j} ≤ b_{i_j}, j = 1, ..., q, where X = (x_1, ..., x_n)^T. A general constrained optimization problem, including inequality constraints g(X) ≤ 0, can be put into this form by introducing slack variables s, replacing g(X) ≤ 0 by s + g(X) = 0, and appending the bound constraint 0 ≤ s < ∞; see §2.2.
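The slack-variable rewriting the abstract mentions can be stated explicitly; this just restates the transformation in standard notation:

```latex
\min \varphi(X)\ \text{s.t.}\ g(X) \le 0
\quad\Longleftrightarrow\quad
\min \varphi(X)\ \text{s.t.}\ g(X) + s = 0,\ 0 \le s < \infty,
```

so the new variable s is appended to X and the problem takes the equality-constrained form (1.1).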
Analytic Constraint Solving and Interval Arithmetic
 In POPL'00 ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages
, 1999
Abstract

Cited by 18 (7 self)
In this paper we describe the syntax, semantics, and implementation of the constraint logic programming language CLP(F), and we prove that the implementation is sound. This language is an example of a new approach to scientific programming which we call analytic constraint logic programming (ACLP). The idea behind ACLP is that it provides an interval-based constraint language in which higher-order mathematical objects (e.g. ODEs, PDEs, function transforms, etc.) can be used to define scientifically interesting constraints on real numbers. All real numbers are associated to intervals (initially [-∞, ∞]), and the goal of an ACLP constraint solver is to narrow those intervals without removing any solutions to the specified ACLP constraints. After describing the syntax and semantics of the constraint language for CLP(F) and giving several examples, we show how to convert these analytic constraints into second-order interval arithmetic constraints. We then present an algorithm for solving...
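The narrowing idea the abstract describes, shrinking intervals without losing solutions, can be illustrated with a minimal projection operator for a single constraint x + y = z. This is a generic interval-constraint sketch, not the CLP(F) implementation: each variable's interval is intersected with the range implied by the other two.

```python
def narrow_sum(x, y, z):
    """Narrow intervals x, y, z (as (lo, hi) pairs) subject to x + y = z."""
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))   # z ⊆ x + y
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))   # x ⊆ z - y
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))   # y ⊆ z - x
    return x, y, z

x, y, z = narrow_sum((0.0, 10.0), (0.0, 10.0), (3.0, 4.0))
# x and y shrink to [0, 4]; z stays [3, 4]
```

Every point solution of x + y = z inside the original boxes survives each intersection, which is exactly the soundness property the solver's narrowing must preserve.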
Numerical Validation of Solutions of Linear Complementarity Problems
 Numer. Math
, 1997
Abstract

Cited by 14 (8 self)
This paper proposes a validation method for solutions of linear complementarity problems. The validation procedure consists of two sufficient conditions that can be tested on a digital computer. If the first condition is satisfied, then a given multidimensional interval centered at an approximate solution of the problem is guaranteed to contain an exact solution. If the second condition is satisfied, then the multidimensional interval is guaranteed to contain no exact solution. This study is based on the mean value theorem for absolutely continuous functions and the reformulation of linear complementarity problems as nonsmooth nonlinear systems of equations. 1 Introduction Linear Complementarity Problems (LCP) model many important problems in engineering, management and economics. Furthermore, linear and quadratic programming problems can be written as LCP. Several algorithms have been developed for solving LCP [11, 21, 22, 25, 26, 31], but few validation methods have been studied to give...
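The nonsmooth reformulation mentioned in the abstract can be made concrete: z solves LCP(M, q), i.e. z ≥ 0, Mz + q ≥ 0, zᵀ(Mz + q) = 0, exactly when min(z, Mz + q) = 0 componentwise. The floating point residual check below is only a sketch of that reformulation; the paper's actual validation uses rigorous interval conditions, not this plain float test.

```python
def lcp_residual(M, q, z):
    """Max-norm of min(z, Mz + q); zero exactly at an LCP solution."""
    n = len(q)
    w = [sum(M[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    return max(abs(min(z[i], w[i])) for i in range(n))

# Example: M = I, q = (-1, 2); the exact solution is z = (1, 0), w = (0, 2).
M = [[1.0, 0.0], [0.0, 1.0]]
q = [-1.0, 2.0]
r = lcp_residual(M, q, [1.0, 0.0])   # residual 0.0 at the exact solution
```

Because min(z, Mz + q) is a nonsmooth function of z, validating a root of it requires tools like the mean value theorem for absolutely continuous functions that the paper builds on.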
Variable-Precision, Interval Arithmetic Processors
Abstract

Cited by 12 (1 self)
This chapter presents the design and analysis of variable-precision, interval arithmetic processors. The processors give the user the ability to specify the precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. The processors support a wide variety of arithmetic operations on variable-precision floating point numbers and intervals. Efficient hardware algorithms and specially designed functional units increase the speed, accuracy, and reliability of numerical computations. Area and delay estimates indicate that the processors can be implemented with areas and cycle times that are comparable to conventional IEEE double-precision floating point coprocessors. Execution time estimates indicate that the processors are two to three orders of magnitude faster than a conventional software package for variable-precision, interval arithmetic. 1.1 INTRODUCTION Floating point arithmetic provides a high-speed method for performing...