Results 1–10 of 19
Numerica: a Modeling Language for Global Optimization
, 1997
Abstract

Cited by 170 (11 self)
Introduction Many science and engineering applications require the user to find solutions to systems of nonlinear constraints over real numbers or to optimize a nonlinear function subject to nonlinear constraints. This includes applications such as the modeling of chemical engineering processes and of electrical circuits, robot kinematics, chemical equilibrium problems, and design problems (e.g., nuclear reactor design). The field of global optimization is the study of methods to find all solutions to systems of nonlinear constraints and all global optima to optimization problems. Nonlinear problems raise many issues from a computational standpoint. On the one hand, deciding if a set of polynomial constraints has a solution is NP-hard. In fact, Canny [Canny, 1988] and Renegar [Renegar, 1988] have shown that the problem is in PSPACE and it is not known whether the problem lies in NP. Nonlinear programming problems can be so hard that some methods are designed only to solve probl...
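The pruning principle behind interval-based systems of this kind can be sketched in a few lines. The `Interval` class below is a hypothetical toy, not Numerica's modeling language: if the interval image of a constraint function over a box provably excludes zero, that box can contain no solution and may be discarded.

```python
# Minimal sketch of interval evaluation used to discard boxes that
# cannot contain a solution of f(x) = 0 (illustrative only).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def f(x):
    # f(x) = x*x + x - 2, with real roots at x = 1 and x = -2
    return x * x + x + Interval(-2.0, -2.0)

# The box [3, 4] cannot contain a root: the interval image excludes 0.
print(f(Interval(3.0, 4.0)).contains_zero())   # False -> box discarded
# The box [0, 2] may contain a root, so a solver would split it further.
print(f(Interval(0.0, 2.0)).contains_zero())   # True
```

Note that interval evaluation can only prove absence, not presence, of a solution; a `True` result merely means the box must be examined further.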
GLOPT - A Program for Constrained Global Optimization
 Developments in Global Optimization
, 1996
Abstract

Cited by 15 (7 self)
GLOPT is a Fortran 77 program for global minimization of a block-separable objective function subject to bound constraints and block-separable constraints. It finds a nearly globally optimal point that is near a true local minimizer. Unless there are several local minimizers that are nearly global, we thus find a good approximation to the global minimizer. GLOPT uses a branch and bound technique to split the problem recursively into subproblems that are either eliminated or reduced in size. This is done by extensive use of the block-separable structure of the optimization problem. In this paper we discuss a new reduction technique for boxes and new ways for generating feasible points of constrained nonlinear programs. These are implemented as the first stage of our GLOPT project. The current implementation of GLOPT uses neither derivatives nor simultaneous information about several constraints. Numerical results are already encouraging. Work on an extension using curvature inf...
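The recursive split-and-eliminate scheme described above can be sketched generically. The code below is an illustrative one-dimensional branch and bound using a Lipschitz lower bound to eliminate subproblems; it is not GLOPT's Fortran 77 implementation and does not exploit block-separable structure.

```python
# Hedged sketch of branch-and-bound global minimization: subproblems whose
# lower bound cannot beat the incumbent are eliminated; the rest are split.
import math

def branch_and_bound(f, a, b, lipschitz, tol=1e-6):
    """Globally minimize f on [a, b], given a valid Lipschitz constant."""
    best_x, best_val = a, f(a)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        val = f(mid)
        if val < best_val:
            best_x, best_val = mid, val
        # Piyavskii-style lower bound on f over [lo, hi]:
        # f(x) >= max(f(lo) - L(x - lo), f(hi) - L(hi - x)) for all x.
        lower = 0.5 * (f(lo) + f(hi)) - lipschitz * (hi - lo) / 2.0
        # Eliminate the subproblem if it cannot improve the incumbent,
        # or stop splitting once the box is small enough.
        if lower >= best_val - tol or hi - lo < tol:
            continue
        stack.append((lo, mid))   # split: recurse on both halves
        stack.append((mid, hi))
    return best_x, best_val

# |d/dx [(x-1)^2 + sin(5x)]| <= 2*3 + 5 on [-2, 3], so L = 12 is valid.
x, v = branch_and_bound(lambda t: (t - 1.0) ** 2 + math.sin(5 * t),
                        -2.0, 3.0, lipschitz=12.0)
print(x, v)   # a point close to the global minimizer and its value
```

The elimination test is the essential ingredient: without a rigorous lower bound, boxes containing no improvement could never be discarded safely.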
Global Optimization For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 12 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find ...
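The joint descent/ascent mechanism of CSA can be sketched on a toy one-variable problem. This is an illustrative reading of the scheme only, with made-up neighborhood, schedule, and acceptance rules, not the thesis's algorithm: the search performs Metropolis moves on the discrete Lagrangian, probabilistically descending in the variable space and ascending in the multiplier space.

```python
# Hedged sketch of a CSA-style search on L(x, lam) = f(x) + lam * |h(x)|.
import math, random

def csa(f, h, x0, steps=20000, seed=0):
    rng = random.Random(seed)
    x, lam = x0, 0.0
    best = None                               # best feasible point seen
    for t in range(1, steps + 1):
        temp = 1.0 / math.log(t + 1)          # logarithmic cooling schedule
        L = f(x) + lam * abs(h(x))            # discrete Lagrangian value
        if rng.random() < 0.5:
            # Trial move in the variable space (neighboring integer point):
            # descend on L, accepting uphill moves with Metropolis probability.
            xn = x + rng.choice([-1, 1])
            Ln = f(xn) + lam * abs(h(xn))
            if Ln <= L or rng.random() < math.exp((L - Ln) / temp):
                x = xn
        else:
            # Trial move in the multiplier space: ascend on L, so the
            # penalty on constraint violation grows while x is infeasible.
            ln = max(0.0, lam + rng.choice([-0.1, 0.1]))
            Ln = f(x) + ln * abs(h(x))
            if Ln >= L or rng.random() < math.exp((Ln - L) / temp):
                lam = ln
        if h(x) == 0 and (best is None or f(x) < f(best)):
            best = x
    return best

# Minimize f(x) = x**2 subject to h(x) = x - 3 = 0 over the integers.
print(csa(lambda x: x * x, lambda x: x - 3, x0=10))   # 3
```

Ascending in the multipliers is what distinguishes this from penalty-method annealing: the penalty weight adapts during the search instead of being fixed in advance.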
Variable-Precision, Interval Arithmetic Processors
Abstract

Cited by 12 (1 self)
This chapter presents the design and analysis of variable-precision, interval arithmetic processors. The processors give the user the ability to specify the precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. The processors support a wide variety of arithmetic operations on variable-precision floating point numbers and intervals. Efficient hardware algorithms and specially designed functional units increase the speed, accuracy, and reliability of numerical computations. Area and delay estimates indicate that the processors can be implemented with areas and cycle times that are comparable to conventional IEEE double-precision floating point coprocessors. Execution time estimates indicate that the processors are two to three orders of magnitude faster than a conventional software package for variable-precision, interval arithmetic. 1.1 INTRODUCTION Floating point arithmetic provides a high-speed method for perform...
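The recompute-at-higher-precision idea can be illustrated in software with Python's `decimal` module (the chapter itself describes hardware processors; this is only a software analogy): a sum that suffers catastrophic cancellation at one precision is recomputed correctly when more digits are carried.

```python
# Software illustration of variable precision: raising the working
# precision recovers a result destroyed by cancellation.
from decimal import Decimal, getcontext

def ill_conditioned_sum(precision):
    getcontext().prec = precision              # digits carried in arithmetic
    # Catastrophic cancellation: (1e20 + 3.14) - 1e20.
    return (Decimal(10) ** 20 + Decimal("3.14")) - Decimal(10) ** 20

print(ill_conditioned_sum(15) == 0)            # True: 3.14 was rounded away
print(ill_conditioned_sum(30))                 # 3.14: enough digits carried
```

A variable-precision processor provides the same capability in hardware, with the accuracy check deciding automatically when a recomputation is needed.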
Exclusion Regions for Systems of Equations
 SIAM J. Numer. Anal.
, 2003
Abstract

Cited by 11 (7 self)
Branch and bound methods for finding all zeros of a nonlinear system of equations in a box frequently have the difficulty that subboxes containing no solution cannot be easily eliminated if there is a nearby zero outside the box. This has the effect that near each zero, many small boxes are created by repeated splitting, whose processing may dominate the total work spent on the global search. This paper discusses the reasons for the occurrence of this so-called cluster effect, and how to reduce the cluster effect by defining exclusion regions around each zero found, that are guaranteed to contain no other zero and hence can safely be discarded. Such exclusion regions are traditionally constructed using uniqueness tests based on the Krawczyk operator or the Kantorovich theorem. These results are reviewed; moreover, refinements are proved that significantly enlarge the size of the exclusion region. Existence and uniqueness tests are also given.
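A one-dimensional sketch of the Krawczyk test mentioned above (the paper treats the n-dimensional case; the helper below is hypothetical): the Krawczyk image K(X) = m - f(m)/c + (1 - f'(X)/c)(X - m), with m the midpoint and c = f'(m), verifies a unique zero when K(X) lands inside X, and excludes the box when K(X) is disjoint from X.

```python
# Hedged sketch of the 1-D Krawczyk operator for uniqueness/exclusion tests.

def krawczyk(f, df, lo, hi):
    """Return the Krawczyk image K(X) of X = [lo, hi] as (klo, khi)."""
    m = 0.5 * (lo + hi)
    c = df(m)                      # slope at the midpoint (preconditioner)
    # Enclose f'(X) by sampling endpoints and midpoint; this is valid here
    # only because f' below is monotone on the test boxes (a real solver
    # would use rigorous interval evaluation of the derivative).
    dvals = [df(lo), df(m), df(hi)]
    dlo, dhi = min(dvals), max(dvals)
    # r = 1 - f'(X)/c as an interval
    rlo, rhi = min(1 - dlo / c, 1 - dhi / c), max(1 - dlo / c, 1 - dhi / c)
    # (X - m) = [-h, h] is symmetric, so r * (X - m) = [-rad, rad]
    h = hi - m
    rad = h * max(abs(rlo), abs(rhi))
    center = m - f(m) / c
    return center - rad, center + rad

f = lambda x: x * x - 2.0          # zeros at +-sqrt(2)
df = lambda x: 2.0 * x

# Box around sqrt(2): K(X) lands inside X -> a unique zero lies in X.
klo, khi = krawczyk(f, df, 1.3, 1.5)
print(1.3 <= klo and khi <= 1.5)   # True: uniqueness verified

# Box away from both zeros: K(X) is disjoint from X -> X can be discarded.
klo, khi = krawczyk(f, df, 2.0, 2.2)
print(khi < 2.0 or klo > 2.2)      # True: exclusion verified
```

The exclusion regions of the paper enlarge the second case: once a zero is verified, a surrounding region provably free of other zeros can be discarded wholesale, which is what suppresses the cluster effect.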
Test Results for an Interval Branch and Bound Algorithm for Equality-Constrained Optimization
 In: Computational Methods and Applications, Kluwer
, 1995
Abstract

Cited by 8 (1 self)
Various techniques have been proposed for incorporating constraints in interval branch and bound algorithms for global optimization. However, few reports of practical experience with these techniques have appeared to date. Such experimental results appear here. The underlying implementation includes use of an approximate optimizer combined with a careful tessellation process and rigorous verification of feasibility. The experiments include comparison of methods of handling bound constraints and comparison of two methods for normalizing Lagrange multipliers. Selected test problems from the Floudas/Pardalos monograph are used, as well as selected unconstrained test problems appearing in reports of interval branch and bound methods for unconstrained global optimization. Keywords: constrained global optimization, verified computations, interval computations, bound constraints, experimental results 1. Introduction We consider the constrained global optimization problem minimize φ(X) s...
Where to Bisect a Box? A Theoretical Explanation of the Experimental Results
 Interval Computations and its Applications to Reasoning Under Uncertainty, Knowledge Representation, and Control Theory. Proceedings of MEXICON'98, Workshop on Interval Computations, 4th World Congress on Expert Systems
, 1997
Abstract

Cited by 8 (6 self)
In this paper, we show that (under certain reasonable assumptions) natural conditions really lead to Ratz's bisection.
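The flavor of derivative-weighted bisection rules can be sketched as follows. This is a generic smear-style rule with hypothetical helper names, not Ratz's exact merit function (the paper derives that precisely): split the box along the coordinate where width times derivative magnitude is largest, since that is where the function can vary most.

```python
# Hedged sketch of a smear-style box bisection rule.

def bisect_box(box, grad_mags):
    """box: list of (lo, hi) per coordinate; grad_mags: |df/dx_i| bounds."""
    # Merit d_i = w(X_i) * |g_i| estimates how much f can vary along axis i.
    merits = [(hi - lo) * g for (lo, hi), g in zip(box, grad_mags)]
    i = max(range(len(box)), key=lambda k: merits[k])
    lo, hi = box[i]
    mid = 0.5 * (lo + hi)
    left = box[:i] + [(lo, mid)] + box[i + 1:]
    right = box[:i] + [(mid, hi)] + box[i + 1:]
    return i, left, right

# A box wide in x0, but with f varying fastest along x1:
axis, left, right = bisect_box([(0.0, 4.0), (0.0, 1.0)], [0.5, 10.0])
print(axis)   # 1: merit 1.0 * 10.0 beats 4.0 * 0.5
```

Bisecting the geometrically widest coordinate (axis 0 here) would ignore that the objective is nearly flat along it, which is exactly the situation such rules are designed to handle.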
On Proving Existence of Feasible Points in Equality Constrained Optimization Problems
 Mathematical Programming
, 1995
Abstract

Cited by 8 (5 self)
Various algorithms can compute approximate feasible points or approximate solutions to equality and bound constrained optimization problems. In exhaustive search algorithms for global optimizers and other contexts, it is of interest to construct bounds around such approximate feasible points, then to verify (computationally but rigorously) that an actual feasible point exists within these bounds. Hansen and others have proposed techniques for proving the existence of feasible points within given bounds, but practical implementations have not, to our knowledge, previously been described. Various alternatives are possible in such an implementation, and details must be carefully considered. Also, in addition to Hansen's technique for handling the underdetermined case, it is important to handle the overdetermined case, when the approximate feasible point corresponds to a point with many active bound constraints. The basic ideas, along with experimental results from an actual implementation...
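The verification idea can be sketched in one variable (the `verify_feasible` helper is hypothetical; Hansen's technique handles the underdetermined multivariate case): build a small box around the approximate feasible point, then use an interval Newton step to prove a true point with h(x) = 0 lies inside it.

```python
# Hedged sketch: rigorous existence of a feasible point via interval Newton.

def verify_feasible(h, dh, x_approx, radius):
    lo, hi = x_approx - radius, x_approx + radius
    m = 0.5 * (lo + hi)
    # Enclose h'(X) by sampling; valid here only because dh is monotone on
    # the box (a real implementation uses interval derivative evaluation).
    dvals = [dh(lo), dh(m), dh(hi)]
    dlo, dhi = min(dvals), max(dvals)
    if dlo <= 0.0 <= dhi:
        return False               # derivative enclosure contains 0: no proof
    # Interval Newton image N(X) = m - h(m) / h'(X).
    quots = [h(m) / dlo, h(m) / dhi]
    nlo, nhi = m - max(quots), m - min(quots)
    # N(X) contained in X proves an exact feasible point exists in X.
    return lo <= nlo and nhi <= hi

# Constraint h(x) = x**3 - 2 = 0; approximate feasible point from a solver:
h = lambda x: x ** 3 - 2.0
dh = lambda x: 3.0 * x * x
print(verify_feasible(h, dh, 1.26, 0.01))   # True: feasibility proven
```

The point of the construction is that the proof is rigorous even though the input point was only approximately feasible: the tolerance box, not the point itself, carries the guarantee.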
Software And Hardware Techniques For Accurate, Self-Validating Arithmetic
, 1996
Abstract

Cited by 7 (1 self)
The need for accurate and reliable numerical applications has led to the development of several software tools and hardware designs for accurate, self-validating arithmetic. Software tools include variable-precision software packages, interval arithmetic libraries, scientific programming languages, computer algebra systems, and numerical problem solving environments. Hardware designs include coprocessors that support the directed rounding modes and exact dot products, variable-precision integer and floating point processors, and coprocessors for variable-precision, interval arithmetic. In this survey, we examine various software and hardware techniques for accurate, self-validating arithmetic and discuss their strengths and limitations. We also discuss numerical applications that employ these tools to produce accurate and reliable results. 1 INTRODUCTION Advances in VLSI technology, parallel processing, and computer architecture have led to increasingly faster digital computers. Duri...
On Verifying Feasibility in Equality Constrained Optimization Problems
, 1996
Abstract

Cited by 7 (3 self)
Techniques for verifying feasibility of equality constraints are presented. The underlying verification procedures are similar to a proposed algorithm of Hansen, but various possibilities, as well as additional procedures for handling bound constraints, are investigated. The overall scheme differs from some algorithms in that it rigorously verifies exact (rather than approximate) feasibility. The scheme starts with an approximate feasible point, then constructs a box (i.e. a set of tolerances) about this point within which it is rigorously verified that a feasible point exists. Alternate ways of proceeding are compared, and numerical results on a set of test problems appear.