Results 1–10 of 138
Solving Polynomial Systems Using a Branch and Prune Approach
SIAM Journal on Numerical Analysis, 1997
Cited by 102 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists in enforcing at each node of the search tree a unique local consistency condition, called box consistency, which approximates the notion of arc consistency well known in artificial intelligence. Box consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
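The branch & prune idea in this abstract can be sketched in a few lines: an interval extension of the function prunes boxes whose range enclosure excludes zero, and surviving boxes are bisected. This is a one-variable toy under our own naming, not the Newton system of the paper; the pruning here is plain range exclusion rather than box consistency.

```python
# Toy branch & prune for roots of a univariate polynomial.
# All names are ours; a real system prunes with consistency operators.

def poly_range(coeffs, lo, hi):
    """Crude interval extension of a polynomial on [lo, hi] via interval
    Horner evaluation (each step keeps a [min, max] pair)."""
    rlo, rhi = 0.0, 0.0
    for c in coeffs:  # coefficients, highest degree first
        products = [rlo * lo, rlo * hi, rhi * lo, rhi * hi]
        rlo, rhi = min(products) + c, max(products) + c
    return rlo, rhi

def branch_and_prune(coeffs, lo, hi, tol=1e-8):
    """Return small intervals that may contain roots of the polynomial."""
    boxes, results = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = poly_range(coeffs, a, b)
        if flo > 0 or fhi < 0:      # prune: 0 is not in the enclosure
            continue
        if b - a < tol:             # small enough: report as candidate
            results.append((a, b))
            continue
        m = 0.5 * (a + b)           # branch: bisect the interval
        boxes += [(a, m), (m, b)]
    return results

# roots of x^2 - 2 on [-3, 3] lie near +/- sqrt(2)
cands = branch_and_prune([1.0, 0.0, -2.0], -3.0, 3.0)
```

Interval Horner evaluation overestimates the true range, so pruning is conservative: a discarded box provably contains no root, while a kept box merely might.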
Complete search in continuous global optimization and constraint satisfaction
Acta Numerica 13, 2004
"... A chapter for ..."
A Global Optimization Algorithm (GOP) for Certain Classes of Nonconvex NLPs: II. Application of Theory and Test Problems
Engng, 1990
Cited by 54 (21 self)
In Part I (Floudas and Visweswaran, 1990), a deterministic global optimization approach was proposed for solving certain classes of nonconvex optimization problems. An algorithm, GOP, was presented for the rigorous solution of the problem through a series of primal and relaxed dual problems until the upper and lower bounds from these problems converged to an ε-global optimum. In this paper, theoretical results are presented for several classes of mathematical programming problems that include: (i) the general quadratic programming problem, (ii) quadratic programming problems with quadratic constraints, (iii) pooling and blending problems, and (iv) unconstrained and constrained optimization problems with polynomial terms in the objective function and/or constraints. For each class, a few examples are presented illustrating the approach. Keywords: Global Optimization, Quadratic Programming, Quadratic Constraints, Polynomial Functions, Pooling and Blending Problems.
Subdivision Direction Selection In Interval Methods For Global Optimization
SIAM J. Numer. Anal., 1997
Cited by 47 (18 self)
The role of the interval subdivision selection rule is investigated in branch-and-bound algorithms for global optimization. The class of rules that allow convergence for the model algorithm is characterized, and it is shown that the four rules investigated satisfy the conditions of convergence. A numerical study with a wide spectrum of test problems indicates that there are substantial differences between the rules in terms of the required CPU time, the number of function and derivative evaluations, and space complexity, and that two rules can provide substantial improvements in efficiency. Key words: global optimization, interval arithmetic, interval subdivision. AMS subject classifications: 65K05, 90C30. Abbreviated title: Subdivision directions in interval methods. 1. Introduction. Interval subdivision methods for global optimization [7, 21] aim at providing reliable solutions to global optimization problems min_{x ∈ X} f(x) (1), where the objective function f : R^n → R is continuo...
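Two commonly cited direction-selection rules can be sketched directly (we are guessing that they are among the four the paper compares; the names and signatures below are ours): split the coordinate with the widest side, or split the coordinate with the largest "smear" |gradient component| × width, which favors the direction along which the function varies most over the box.

```python
# Two direction-selection rules for interval branch-and-bound (our naming).

def widest_direction(box):
    """box: list of (lo, hi) per coordinate; index of the widest side."""
    return max(range(len(box)), key=lambda i: box[i][1] - box[i][0])

def smear_direction(box, grad_mid):
    """grad_mid: gradient at the box midpoint, a cheap stand-in for the
    interval gradient used in the literature."""
    return max(range(len(box)),
               key=lambda i: abs(grad_mid[i]) * (box[i][1] - box[i][0]))

box = [(0.0, 4.0), (0.0, 1.0)]
widest_direction(box)               # -> 0 (side of width 4 beats width 1)
smear_direction(box, [0.1, 10.0])   # -> 1 (the large gradient dominates)
```

The two rules can disagree, as above, which is exactly why the choice affects CPU time and evaluation counts in the reported experiments.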
A Primal-Relaxed Dual Global Optimization Approach
1993
Cited by 42 (19 self)
A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables, if necessary, converts the original problem into primal and relaxed dual subproblems that provide valid upper and lower bounds, respectively, on the global optimum. Theoretical properties are presented which allow for a rigorous solution of the relaxed dual problem. Proofs of ε-finite convergence and ε-global optimality are provided. The approach is shown to be particularly suited to (a) quadratic programming problems, (b) quadratically constrained problems, and (c) unconstrained and constrained optimization of polynomial and rational polynomial functions. The theoretical approach is illustrated through a few example problems. Finally, some further developments in the approach are briefly discussed.
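At the control-flow level, the primal/relaxed-dual alternation described above is a converging-bounds loop: each primal solve yields a valid upper bound, each relaxed dual solve a valid lower bound, and the loop stops at an ε-gap. The sketch below shows only that skeleton; the solver callables are placeholders of ours, not the GOP subproblems.

```python
# Skeleton of an eps-convergent bounding loop (placeholders, not GOP).

def bounding_loop(solve_primal, solve_relaxed_dual, eps=1e-6, max_iter=100):
    upper, lower = float("inf"), float("-inf")
    for k in range(max_iter):
        upper = min(upper, solve_primal(k))        # feasible -> upper bound
        lower = max(lower, solve_relaxed_dual(k))  # relaxation -> lower bound
        if upper - lower <= eps:                   # eps-global optimality
            return upper, lower, k
    return upper, lower, max_iter

# toy stand-ins whose gap halves each iteration:
up, lo, k = bounding_loop(lambda k: 1 + 2.0 ** -k, lambda k: 1 - 2.0 ** -k)
```

The `min`/`max` updates keep the bounds monotone even if an individual subproblem solve is looser than an earlier one.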
A Fortran 90 Environment for Research and Prototyping of Enclosure Algorithms for Nonlinear Equations and Global Optimization
Cited by 40 (19 self)
An environment for general research into and prototyping of algorithms for reliable constrained and unconstrained global nonlinear optimization, and for reliable enclosure of all roots of nonlinear systems of equations with or without inequality constraints, is being developed. This environment should be portable; easy to learn, use, and maintain; and sufficiently fast for some production work. The motivation, design principles, uses, and capabilities of this environment are outlined. The environment includes an interval data type, a symbolic form of automatic differentiation to obtain an internal representation for functions, a special technique to allow conditional branches with operator overloading and interval computations, and generic routines to give interval and non-interval function and derivative information. Some of these generic routines use a special version of the backward mode of automatic differentiation. The package also includes dynamic data structures for exhaustive sear...
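The operator-overloading technique the abstract mentions for its Fortran 90 interval type carries over to any language with overloadable operators. Here is a toy Python analogue of ours; a rigorous implementation would also round endpoints outward, which this sketch omits.

```python
# Toy interval type via operator overloading (no outward rounding).

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# code written against numbers now encloses ranges of values:
x = Interval(-1.0, 2.0)
y = x * x + Interval(3.0)   # [1.0, 7.0]: x*x overestimates the true
                            # range [3, 7] of x**2 + 3 (dependency effect)
```

The overestimate in the last line is the classic dependency problem the literature's sharper interval extensions are designed to reduce.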
On the Selection of Subdivision Directions in Interval Branch-and-Bound Methods for Global Optimization
J. Global Optimization, 1995
Cited by 31 (13 self)
This paper investigates the influence of the interval subdivision selection rule on the convergence of interval branch-and-bound algorithms for global optimization. For the class of rules that allows convergence, we study the effects of the rules on a model algorithm with special list ordering. Four different rules are investigated in theory and in practice. A wide spectrum of test problems is used for numerical tests, indicating that there are substantial differences between the rules with respect to the required CPU time, the number of function and derivative evaluations, and the necessary storage space. Two rules can provide considerable improvements in efficiency for our model algorithm. Keywords: Global optimization, interval arithmetic, branch-and-bound, interval subdivision. 1. Introduction. The investigated class of interval branch-and-bound methods for global optimization [7], [8], [19] addresses the problem of finding guaranteed and reliable solutions of global optimization...
A Review Of Techniques In The Verified Solution Of Constrained Global Optimization Problems
1996
Cited by 25 (6 self)
Elements and techniques of state-of-the-art automatically verified constrained global optimization algorithms are reviewed, including a description of ways of rigorously verifying feasibility for equality constraints and a careful consideration of the role of active inequality constraints. Previously developed algorithms and general work on the subject are also listed. Limitations of present knowledge are mentioned, and advice is given on which techniques to use in various contexts. Applications are discussed. 1 INTRODUCTION, BASIC IDEAS AND LITERATURE We consider the constrained global optimization problem: minimize φ(X) subject to c_i(X) = 0, i = 1, ..., m, (1.1) a_{i_j} ≤ x_{i_j} ≤ b_{i_j}, j = 1, ..., q, where X = (x_1, ..., x_n)^T. A general constrained optimization problem, including inequality constraints g(X) ≤ 0, can be put into this form by introducing slack variables s, replacing g(X) ≤ 0 by s + g(X) = 0, and appending the bound constraint 0 ≤ s < ∞; see §2.2. ...
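The slack-variable transformation quoted at the end of this abstract is easy to check on a concrete instance (the constraint below is our example, not one from the paper): the inequality g(X) ≤ 0 and the equality s + g(X) = 0 with s ≥ 0 admit exactly the same feasible points.

```python
# Slack-variable transform on a concrete (made-up) constraint:
# x1 + x2 - 1 <= 0  becomes  s + (x1 + x2 - 1) = 0 with s >= 0.

def g(x1, x2):
    return x1 + x2 - 1.0

def feasible_inequality(x1, x2):
    return g(x1, x2) <= 0.0

def feasible_equality(x1, x2):
    s = -g(x1, x2)     # the unique slack making s + g(X) = 0 hold
    return s >= 0.0    # feasibility now means the slack is nonnegative

# the two formulations agree at every point:
assert feasible_inequality(0.2, 0.3) == feasible_equality(0.2, 0.3)
assert feasible_inequality(0.9, 0.9) == feasible_equality(0.9, 0.9)
```

The point of the transform in the verified setting is that equality constraints, unlike inequalities, can be handled uniformly by the feasibility-verification machinery the review describes.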
A linear-time algorithm that locates local extrema of a function of one variable from interval measurement results
Interval Computations, 1993
Cited by 23 (18 self)
The problem of locating local maxima and minima of a function from approximate measurement results is vital for many physical applications: in spectral analysis, chemical species are identified by locating local maxima of the spectra; in radioastronomy, sources of celestial radio emission, and their subcomponents, are identified by locating local maxima of the measured brightness of the radio sky; elementary particles are identified by locating local maxima of the experimental curves. In mathematical terms, we know n numbers x_1 < ... < x_n, n values y_1, ..., y_n, and a value ε > 0, and we know that the values f̄(x_i) of the unknown function f̄(x) at the points x_i belong to the intervals I_i = [y_i^-, y_i^+], i = 1, ..., n, where y_i^- = y_i − ε and y_i^+ = y_i + ε. The set F of all the functions f(x) that satisfy this property can be considered as a function interval (this definition was, in essence, first proposed by R. Moore). We say that an interval I locates a local maximum if all functions f ∈ F attain a local maximum at some point from I. So, the problem is to generate intervals I_1, ..., I_k that locate local maxima. Evidently, if I locates a local maximum, then any bigger interval J ⊃ I also locates this maximum. We want to find the smallest possible location I. We propose an algorithm that finds the smallest possible locations in linear time (i.e., in time that is ≤ Cn for some constant C).
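The locating criterion itself can be coded directly: if some sample j is provably higher than samples i < j and k > j even under the worst-case ±ε errors, then every continuous f in F attains its maximum over [x_i, x_k] at an interior point, i.e. has a local maximum there, so (x_i, x_k) locates one. The scan below (ours) illustrates only this sufficient test; it does not reproduce the paper's linear-time computation of the smallest locations.

```python
# Detect intervals that provably contain a local maximum of every
# function passing through the bands [y[i] - eps, y[i] + eps].
# Sufficient test: y[j] - eps > y[i] + eps and y[j] - eps > y[k] + eps
# for some i < j < k forces a local maximum inside (x[i], x[k]).

def locate_maxima(xs, ys, eps):
    """Simple scan; the paper finds the smallest locations in O(n)."""
    n = len(xs)
    locations = []
    i = 0
    while i < n:
        # a sample j that provably exceeds sample i
        j = next((j for j in range(i + 1, n)
                  if ys[j] - eps > ys[i] + eps), None)
        if j is None:
            break
        # a sample k past j that j also provably exceeds
        k = next((k for k in range(j + 1, n)
                  if ys[j] - eps > ys[k] + eps), None)
        if k is None:
            break
        locations.append((xs[i], xs[k]))
        i = k                 # continue scanning past this location
    return locations

locate_maxima([0, 1, 2, 3, 4], [0, 0, 5, 0, 0], 1.0)  # -> [(0, 3)]
```

With ε = 1, the central sample's guaranteed value 4 exceeds the neighbors' worst-case value 1 on both sides, so the peak near x = 2 is certified; a noisier bump whose bands overlap would (correctly) produce no location.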