Results 1–10 of 28
Global minimization using an Augmented Lagrangian method with variable lower-level constraints, 2007
Cited by 21 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
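For orientation, the outer loop the abstract describes can be sketched on a toy problem. This is an illustrative assumption-laden sketch, not the paper's algorithm: the 1-D problem, the grid-search stand-in for the εk-global subproblem solver, and the update constants are all invented for demonstration.

```python
# Sketch of an augmented Lagrangian outer loop on the toy problem
#   minimize f(x) = x^2  subject to  h(x) = x - 1 = 0,  x in [-5, 5].
# A grid search stands in for the eps_k-global subproblem method.

def f(x):
    return x * x

def h(x):
    return x - 1.0

def aug_lag(x, lam, rho):
    # Augmented Lagrangian L(x; lam, rho) = f + lam*h + (rho/2)*h^2
    return f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2

def solve_subproblem(lam, rho, eps):
    # Stand-in for eps_k-global minimization over the simple bounds:
    # a grid search whose resolution tightens as eps_k shrinks.
    n = max(10, int(10.0 / eps))
    xs = [-5.0 + 10.0 * i / n for i in range(n + 1)]
    return min(xs, key=lambda x: aug_lag(x, lam, rho))

def augmented_lagrangian(outer_iters=15):
    lam, rho, eps = 0.0, 10.0, 1.0
    x = 0.0
    for _ in range(outer_iters):
        x = solve_subproblem(lam, rho, eps)
        lam += rho * h(x)            # first-order multiplier update
        rho *= 1.5                   # tighten the penalty
        eps = max(0.5 * eps, 1e-3)   # eps_k -> eps, as in the abstract
    return x, lam

x_star, lam_star = augmented_lagrangian()
# x_star approaches the constrained minimizer x = 1.
```

The key structural point mirrored from the abstract is that each outer iteration only needs an approximate (εk-accurate) global minimizer of the subproblem, with the tolerance driven down across iterations.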
GloptLab, a configurable framework for the rigorous global solution of quadratic constraint satisfaction problems
Cited by 8 (6 self)
Validated linear relaxations and preprocessing: Some experiments. Accepted for publication in SIAM J. Optim., 2003
Cited by 5 (3 self)
Abstract. Based on work originating in the early 1970s, a number of recent global optimization algorithms have relied on replacing an original nonconvex nonlinear program by convex or linear relaxations. Such linear relaxations can be generated automatically through an automatic differentiation process. This process decomposes the objective and constraints (if any) into convex and nonconvex unary and binary operations. The convex operations can be approximated arbitrarily well by appending additional constraints, while the domain must somehow be subdivided (in an overall branch-and-bound process or in some other local process) to handle nonconvex constraints. In general, a problem can be hard if even a single nonconvex term appears. However, certain nonconvex terms lead to easier-to-solve problems than others. Recently, Neumaier, Lebbah, Michel, ourselves, and others have paved the way to utilizing such techniques in a validated context. In this paper, we present a symbolic preprocessing step that provides a measure of the intrinsic difficulty of a problem. Based on this step, one of two methods can be chosen to relax nonconvex terms. This preprocessing step is similar to a method previously proposed by Epperly and Pistikopoulos [J. Global Optim., 11 (1997), pp. 287–311] for determining subspaces in which to branch, but we present it from a different point of view that is amenable to simplification of the problem presented to the linear programming solver, and within a validated context. Besides an illustrative example, we have implemented general relaxations in a validated context, as well as the preprocessing technique, and we present experiments on a standard test set. Finally, we present conclusions.
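As one concrete instance of the linear relaxations built from binary operations that the abstract mentions, here is a small sketch of the classic McCormick envelope for a bilinear term w = x·y over a box. This is the standard textbook construction, not the paper's specific preprocessing; the box and sample points are arbitrary illustrations.

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    # McCormick envelope for w = x*y on the box [xl, xu] x [yl, yu]:
    # two valid linear under-estimators and two linear over-estimators.
    lo = max(xl * y + x * yl - xl * yl,
             xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl,
             xl * y + x * yu - xl * yu)
    return lo, hi

# The envelope encloses the true product everywhere on the box:
ok = all(
    mccormick_bounds(x, y, -1.0, 2.0, 0.0, 3.0)[0] <= x * y <=
    mccormick_bounds(x, y, -1.0, 2.0, 0.0, 3.0)[1]
    for x in [-1.0, -0.5, 0.0, 1.0, 2.0]
    for y in [0.0, 0.5, 1.5, 3.0]
)
```

Validity follows from expanding, e.g., x·y − (xl·y + x·yl − xl·yl) = (x − xl)(y − yl) ≥ 0 on the box; the envelope is exact at the box corners.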
Transposition theorems and qualification-free optimality conditions. SIAM J. Optimization
Cited by 4 (2 self)
Abstract. New theorems of the alternative for polynomial constraints (based on the Positivstellensatz from real algebraic geometry) and for linear constraints (generalizing the transposition theorems of Motzkin and Tucker) are proved. Based on these, two Karush-John optimality conditions – holding without any constraint qualification – are proved for single- or multi-objective constrained optimization problems. The first condition applies to polynomial optimization problems only, and gives for the first time necessary and sufficient global optimality conditions for polynomial problems. The second condition applies to smooth local optimization problems and strengthens known local conditions. If some linear or concave constraints are present, the new version reduces the number of constraints for which a constraint qualification is needed to get the Kuhn-Tucker conditions.
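For orientation, the classical Fritz John (Karush-John) first-order conditions that such results refine can be stated as follows. This is the standard textbook form, not the paper's strengthened version:

```latex
% Classical Fritz John conditions for:  min f(x)  s.t.  g(x) <= 0,  h(x) = 0.
% There exist multipliers, not all zero, with
\kappa \,\nabla f(x^*)
  + \sum_i \mu_i \,\nabla g_i(x^*)
  + \sum_j \lambda_j \,\nabla h_j(x^*) = 0,
\qquad \kappa \ge 0,\quad \mu \ge 0,\quad \mu_i\, g_i(x^*) = 0.
% The Kuhn-Tucker conditions are the special case kappa = 1, which in
% general requires a constraint qualification; the Karush-John conditions
% above hold without one, at the price of allowing kappa = 0.
```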
The Optimization Test Environment
Cited by 3 (3 self)
Testing is a crucial part of software development in general, and hence also in mathematical programming. Unfortunately, it is often a time-consuming and unexciting activity. This naturally motivated us to increase the efficiency of testing solvers for optimization problems and to automate as much of the procedure as possible. Keywords: test environment, optimization, solver benchmarking, solver comparison. The testing procedure typically consists of three basic tasks: a) organize test problem sets, also called test libraries; b) solve selected test problems with selected solvers; c) analyze, check, and compare the results. The Test Environment is a graphical user interface (GUI) that enables the user to manage tasks a) and b) interactively, and task c) automatically. The Test Environment is particularly designed for users who seek to 1. adjust solver parameters, 2. compare solvers on single problems, or 3. evaluate solvers on suitable test sets.
Improved and simplified validation of feasible points: Inequality and equality constrained problems
Mathematical Programming, submitted, 2005
Cited by 3 (1 self)
Abstract. In validated branch and bound algorithms for global optimization, upper bounds on the global optimum are obtained by evaluating the objective at an approximate optimizer; the upper bounds are then used to eliminate subregions of the search space. For constrained optimization, in general, a small region must be constructed within which existence of a feasible point can be proven, and an upper bound on the objective over that region is obtained. We had previously proposed a perturbation technique for constructing such a region. In this work, we propose a much simplified and improved technique, based on an orthogonal decomposition of the normal space to the constraints. In purely inequality constrained problems, a point, rather than a region, can be used, and, for equality and inequality constrained problems, the region lies in a smaller-dimensional subspace, giving rise to sharper upper bounds. Numerical experiments on published test sets for global optimization provide evidence of the superiority of the new approach within our GlobSol environment.
Capabilities of Constraint Programming in Safe Global Optimization
Cited by 2 (0 self)
We investigate the capabilities of constraint programming techniques in rigorous global optimization methods. We introduce different constraint programming techniques to reduce the gap between efficient but unsafe systems like Baron, and safe but slow global optimization approaches. We show how constraint programming filtering techniques can be used to implement optimality-based reduction in a safe and efficient way, and thus to take advantage of the known bounds of the objective function to reduce the domain of the variables and to speed up the search for a global optimum. We describe an efficient strategy to compute very accurate approximations of feasible points. This strategy takes advantage of the Newton method for under-constrained systems of equalities and inequalities to compute a promising upper bound efficiently. Experiments on the COCONUT benchmarks demonstrate that these techniques drastically improve performance. This paper is an extended version of [17]; preliminary results have been published in [10] and [5].
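The idea behind a Newton step for an under-determined system can be sketched as follows: for a single equation in several unknowns, the least-norm solution of J·δ = −h gives δ = −h·Jᵀ/(J·Jᵀ). The toy circle constraint, the starting point, and the single-equation restriction are illustrative assumptions, not the paper's method (which also handles inequalities and larger systems).

```python
def newton_underconstrained(h, jac, x, iters=25):
    # Minimum-norm Newton iteration for one equation h(x) = 0 in n unknowns:
    # each step is the least-norm solution of the linearized system.
    for _ in range(iters):
        r = h(x)
        J = jac(x)
        jj = sum(v * v for v in J)   # J J^T, a scalar for one equation
        if jj == 0.0:
            break                    # degenerate Jacobian; give up
        step = -r / jj
        x = [xi + step * ji for xi, ji in zip(x, J)]
    return x

# Toy constraint: find a point on the unit circle, starting from (2, 1).
circle = lambda p: p[0] ** 2 + p[1] ** 2 - 1.0
circle_jac = lambda p: [2.0 * p[0], 2.0 * p[1]]
feasible = newton_underconstrained(circle, circle_jac, [2.0, 1.0])
# feasible lies (numerically) on the circle, near (0.894, 0.447).
```

Because each step is taken along the constraint normal, the iterate stays on the ray through the starting point and converges to the nearest feasible point, which is what makes such points good candidates for upper bounds.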
Improving interval enclosures
, 2009
Cited by 2 (0 self)
This paper serves as background information for the Vienna proposal for interval standardization, explaining what is needed in practice to make competent use of the interval arithmetic provided by an implementation of the standard-to-be. Discussed are methods to improve the quality of interval enclosures of the range of a function over a box, considerations of possible hardware support facilitating the implementation of such methods, and the results of a simple interval challenge that I had posed to the reliable computing mailing list on November 26, 2008. Also given is an example of a bound constrained global optimization problem in 4 variables that has a 2-dimensional continuum of global minimizers. This makes standard branch-and-bound codes extremely slow, and therefore may serve as a useful degenerate test problem.
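A minimal pure-Python illustration of why enclosure quality depends on how an expression is written (the "dependency problem" such improvement methods address). The toy interval operations below ignore directed rounding, which any real implementation of the standard would need; the example function is an arbitrary choice.

```python
def isub(a, b):
    # Interval subtraction: [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    # Interval multiplication: min/max over the four endpoint products
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

x = (0.0, 1.0)
# Naive evaluation of f(x) = x*x - x treats the two occurrences of x as
# independent and over-estimates the range:
naive = isub(imul(x, x), x)              # yields (-1.0, 1.0)
# Rewriting f as x*(x - 1) ties the occurrences together and tightens it:
factored = imul(x, isub(x, (1.0, 1.0)))  # yields (-1.0, 0.0)
# For comparison, the true range of x^2 - x on [0, 1] is [-0.25, 0].
```

Neither enclosure is exact, but both are rigorous; techniques like centered (mean-value) forms and refactoring are among the standard ways to narrow such enclosures.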
Global Nonlinear Programming with possible infeasibility and finite termination, 2012
Cited by 1 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results are related to practical computations will be given.
A review of the Global Optimization Toolbox, 2006
Global optimization is aimed at finding the best solution of a constrained nonlinear optimization problem by performing a complete search over the set of feasible solutions. In contrast with local optimization, a complete search exhaustively checks the entire feasible region. For a comprehensive, up-to-date archive of online information on global optimization, see [6]. As surveyed in [7], there are numerous mathematical and engineering problem classes for which a complete search is required. An example is the 300-year-old Kepler problem of finding the densest packing of equal spheres in 3-dimensional Euclidean space, for which a computer-assisted proof was proposed by T. C. Hales [4]. The proof consists of reducing the problem to several thousand linear programs and using interval calculations to ensure rigorous handling of rounding errors when establishing the correctness of inequalities. Many other famous difficult optimization problems, such as the traveling salesman problem and the protein folding problem, are global optimization problems. The Global Optimization Toolbox (GOT for short), first released in June 2004 with Maple 9.5, is part of the Maple Professional Toolbox series of add-on products that must be