Results 1 - 7 of 7
A comparison of the Sherali-Adams, Lovász-Schrijver and Lasserre relaxations for 0-1 programming
 Mathematics of Operations Research
, 2001
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
Abstract

Cited by 21 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
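The outer loop this abstract describes can be sketched in a few lines of Python (a minimal illustration, not the paper's implementation: the εk-global subproblem solver, αBB in the paper, is replaced by exhaustive grid search on a toy problem, and the penalty parameter is held fixed):

```python
# Augmented Lagrangian L_rho(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2
# for one equality constraint h(x) = 0. Toy problem (assumed example):
# minimize x0^2 + x1^2 subject to x0 + x1 = 1, solution (0.5, 0.5).
def f(x):
    return x[0]**2 + x[1]**2

def h(x):
    return x[0] + x[1] - 1.0

def aug_lag(x, lam, rho):
    return f(x) + lam * h(x) + 0.5 * rho * h(x)**2

def grid_global_min(obj, step=0.05, lo=-2.0, hi=2.0):
    # Stand-in for the eps_k-global subproblem solver (alphaBB in the
    # paper): exhaustive search over a grid on the box [lo, hi]^2.
    pts = [lo + k * step for k in range(int(round((hi - lo) / step)) + 1)]
    return min(((x0, x1) for x0 in pts for x1 in pts), key=obj)

lam, rho = 0.0, 10.0
for _ in range(10):                 # outer iterations
    x = grid_global_min(lambda x: aug_lag(x, lam, rho))
    lam += rho * h(x)               # first-order multiplier update
    # (a practical method would also increase rho when h(x) stalls)
print(x, lam)
```

Each outer iteration globally minimizes the Augmented Lagrangian over the box and then applies the first-order multiplier update; on this toy problem the iterates settle at the constrained minimizer (0.5, 0.5) with multiplier near -1.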
Global Nonlinear Programming with possible infeasibility and finite termination
, 2012
Abstract

Cited by 1 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results relate to practical computations will be given.
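The adaptive modification can be illustrated schematically (a hypothetical update rule conveying the idea, not the authors' formula): the subproblem tolerance is kept proportional to the current infeasibility and complementarity measures, floored at the target precision.

```python
def adaptive_subproblem_tolerance(eps_target, infeas, compl,
                                  factor=0.1, eps_max=1.0):
    # Hypothetical rule: solve loosely while the iterate is far from
    # feasible, tightening toward eps_target as infeasibility and
    # complementarity vanish. All parameter names are illustrative.
    return max(eps_target, min(eps_max, factor * max(infeas, compl)))

print(adaptive_subproblem_tolerance(1e-4, 0.5, 0.2))    # early: loose
print(adaptive_subproblem_tolerance(1e-4, 1e-6, 1e-6))  # late: eps_target
```

This is the sense in which the adaptive algorithm avoids unnecessarily high precision in intermediate subproblems.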
Reachability Analysis of Polynomial Systems using Linear Programming Relaxations
Abstract

Cited by 1 (0 self)
In this paper we propose a new method for reachability analysis of the class of discrete-time polynomial dynamical systems. Our work is based on the approach combining the use of template polyhedra and optimization [1, 2]. These problems are nonconvex and are therefore generally difficult to solve exactly. Using the Bernstein form of polynomials, we define a set of equivalent problems which can be relaxed to linear programs. Unlike the affine lower-bound functions used in [2], in this work we use piecewise affine lower-bound functions, which allow us to obtain more accurate approximations. In addition, we show that these bounds can be improved by artificially increasing the degree of the polynomials. This new method allows us to compute more accurate guaranteed over-approximations of the reachable sets of discrete-time polynomial dynamical systems. We also show different ways to choose suitable polyhedral templates. Finally, we show the merits of our approach on several examples.
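The Bernstein-form enclosure underlying these relaxations can be shown on a univariate example (a sketch of the general idea, not the paper's piecewise-affine construction): the Bernstein coefficients of a polynomial on [0, 1] bracket its range, and degree elevation, the "artificial" degree increase mentioned above, tightens the bracket.

```python
from math import comb

def bernstein_coeffs(a):
    # Bernstein coefficients on [0,1] of p(x) = sum_i a[i] * x**i:
    # b_j = sum_{i<=j} C(j,i)/C(n,i) * a[i]; min(b) <= min p <= max p <= max(b).
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def degree_elevate(b):
    # One degree-elevation step: same polynomial, tighter enclosure.
    n = len(b) - 1
    return [b[0]] + [(j / (n + 1)) * b[j - 1] + (1 - j / (n + 1)) * b[j]
                     for j in range(1, n + 1)] + [b[-1]]

# p(x) = x^2 - x, true minimum -0.25 on [0, 1]
b = bernstein_coeffs([0.0, -1.0, 1.0])
print(min(b))                  # -0.5, a valid lower bound
print(min(degree_elevate(b)))  # about -1/3, tighter after elevation
```

The minimum Bernstein coefficient is exactly the kind of cheap certified lower bound that the paper's linear programs exploit.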
Global Optimization of Nonconvex ...
 MATHEMATICAL PROGRAMMING
, 1998
"... The primary objective of this thesis is to develop and implement a global optimization algorithm to solve a class of nonconvex programming problems, and to test it using a collection of engineering design problem applications. The class of problems we consider involves the optimization of a general ..."
Abstract
The primary objective of this thesis is to develop and implement a global optimization algorithm to solve a class of nonconvex programming problems, and to test it using a collection of engineering design problem applications. The class of problems we consider involves the optimization of a general nonconvex factorable objective function over a feasible region that is restricted by a set of constraints, each of which is defined in terms of nonconvex factorable functions. Such problems find widespread applications in production planning, location and allocation, chemical process design and control, VLSI chip design, and numerous engineering design problems. This thesis offers a first comprehensive methodological development and implementation for determining a global optimal solution to such factorable programming problems. To solve this class of problems, we propose a branch-and-bound approach based on linear programming (LP) relaxations generated through various approximation schemes that utilize, for example, the Mean-Value Theorem and Chebyshev interpolation polynomials, coordinated with a Reformulation-Linearization Technique (RLT). The initial stage of the lower bounding step generates a tight, nonconvex polynomial programming relaxation for the given problem. Subsequently, an LP relaxation is constructed for the resulting polynomial program via a suitable RLT procedure. The underlying motivation for these two steps is to generate a tight outer approximation of the convex envelope of the objective function over the convex hull of the feasible region. The bounding step is then integrated into a general branch-and-bound framework. The construction of the bounding polynomials and the node partitioning schemes are specially designed so that the gaps resulting from these ...
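The branch-and-bound loop described above can be sketched with a deliberately crude lower bound (interval evaluation of each factor of a univariate factorable function, standing in for the thesis's RLT-based LP relaxations; the example function and tolerances are assumptions for illustration):

```python
import heapq

def f(x):
    # Assumed factorable test function; global minimum is about -3.51
    # near x = -1.30 on [-2, 2].
    return x**4 - 3*x**2 + x

def lower_bound(l, u):
    # Sum of interval lower bounds of the factors x^4, -3x^2, x on [l, u];
    # a crude stand-in for an LP relaxation, valid and tightening as the
    # box shrinks.
    lo4 = 0.0 if l <= 0.0 <= u else min(l**4, u**4)
    return lo4 - 3.0 * max(l * l, u * u) + l

def branch_and_bound(l, u, tol=1e-3, max_nodes=100000):
    best_x = 0.5 * (l + u)
    best_val = f(best_x)                    # incumbent from midpoint
    heap = [(lower_bound(l, u), l, u)]      # best-first on lower bound
    nodes = 0
    while heap and nodes < max_nodes:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_val - tol:            # global gap closed
            break
        nodes += 1
        m = 0.5 * (a + b)
        if f(m) < best_val:
            best_val, best_x = f(m), m
        for lo, hi in ((a, m), (m, b)):     # bisect the box
            blb = lower_bound(lo, hi)
            if blb < best_val - tol:        # prune hopeless boxes
                heapq.heappush(heap, (blb, lo, hi))
    return best_x, best_val

x_star, v = branch_and_bound(-2.0, 2.0)
print(x_star, v)
```

The essential structure matches the abstract: a relaxation supplies lower bounds, incumbents supply upper bounds, and partitioning shrinks the gap between them.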
Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming
, 2012
Abstract
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results relate to practical computations will be given.
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2006
Abstract
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration the method requires the ε-global minimization of the Augmented Lagrangian with simple constraints. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented. Key words: deterministic global optimization, Augmented Lagrangians, nonlinear programming, algorithms, numerical experiments.