Results 1–10 of 54
A comparison of the Sherali–Adams, Lovász–Schrijver and Lasserre relaxations for 0–1 programming
 Mathematics of Operations Research
, 2001
"... ..."
A Primal–Relaxed Dual Global Optimization Approach
, 1993
"... A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables, if necessary, convert the original problem into primal and relaxed dual subproblems that provide ..."
Abstract

Cited by 42 (19 self)
A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables if necessary, converts the original problem into primal and relaxed dual subproblems that provide valid upper and lower bounds, respectively, on the global optimum. Theoretical properties are presented which allow for a rigorous solution of the relaxed dual problem. Proofs of finite ε-convergence and ε-global optimality are provided. The approach is shown to be particularly suited to (a) quadratic programming problems, (b) quadratically constrained problems, and (c) unconstrained and constrained optimization of polynomial and rational polynomial functions. The theoretical approach is illustrated through a few example problems. Finally, some further developments in the approach are briefly discussed.
Generalized Lagrangian Duals and Sums of Squares Relaxations of Sparse Polynomial Optimization Problems
, 2004
"... Sequences of generalized Lagrangian duals and their SOS (sums of squares of polynomials) relaxations for a POP (polynomial optimization problem) are introduced. Sparsity of polynomials in the POP is used to reduce the sizes of the Lagrangian duals and their SOS relaxations. It is proved that the opt ..."
Abstract

Cited by 27 (18 self)
Sequences of generalized Lagrangian duals and their SOS (sums of squares of polynomials) relaxations for a POP (polynomial optimization problem) are introduced. Sparsity of polynomials in the POP is used to reduce the sizes of the Lagrangian duals and their SOS relaxations. It is proved that the optimal values of the Lagrangian duals in the sequence converge to the optimal value of the POP using a method from the penalty function approach. The sequence of SOS relaxations is transformed into a sequence of SDP (semidefinite program) relaxations of the POP, which correspond to duals of a modification and generalization of the SDP relaxations given by Lasserre for the POP.
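A toy illustration of the SOS mechanism underlying such relaxations (not the paper's sparse Lagrangian dual construction): whenever p(x) − λ can be written as a sum of squares, λ is a certified global lower bound on p. The polynomial below is invented for illustration.

```python
# Sum-of-squares (SOS) lower bound certificate, in miniature.
# For p(x) = x^4 - 4x^3 + 6x^2 - 4x + 2 we have the algebraic identity
#   p(x) = (x^2 - 2x + 1)^2 + 1,
# so p(x) - 1 is a (single) square and 1 is a certified global lower bound.

def p(x):
    return x**4 - 4*x**3 + 6*x**2 - 4*x + 2

def sos_form(x):
    return (x**2 - 2*x + 1)**2 + 1

# Check the identity on sample points, and that the bound holds.
for i in range(-50, 51):
    x = i / 10.0
    assert abs(p(x) - sos_form(x)) < 1e-9
    assert p(x) >= 1.0
print("SOS certificate verified: p(x) >= 1 for all sampled x")
```

Finding such a decomposition automatically is exactly what the SDP relaxations in the paper do; here it is supplied by hand.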
Discretization and Localization in Successive Convex Relaxation Methods for Nonconvex Quadratic Optimization
, 2000
"... . Based on the authors' previous work which established theoretical foundations of two, conceptual, successive convex relaxation methods, i.e., the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive SemiInfinite Linear Programming) Relaxation Method, this pa ..."
Abstract

Cited by 25 (14 self)
 Add to MetaCart
Based on the authors' previous work, which established theoretical foundations of two conceptual successive convex relaxation methods, the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming) Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems have a linear objective function c^T x to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, "discretization" and "localization," into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate the infinite number of semi-infinite SDPs (or semi-infinite LPs) that appear at each iteration of the original methods by a finite number of standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish: given any open convex ...
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Abstract

Cited by 21 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
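A minimal sketch of the outer loop this abstract describes, with a crude grid search standing in for the εk-global subproblem solver (the paper uses αBB; the toy problem, grid, and tolerances below are invented for illustration):

```python
# Augmented Lagrangian outer loop, sketched on a toy problem:
#   minimize f(x) = x^2  subject to  h(x) = x - 1 = 0   (solution x* = 1).
# A brute-force grid search stands in for the epsilon-global inner solver.

def f(x):
    return x * x

def h(x):
    return x - 1.0

def inner_global_min(lam, rho):
    """Grid-search 'global' minimizer of the Augmented Lagrangian."""
    best_x, best_val = None, float("inf")
    for i in range(-2000, 2001):           # grid over [-2, 2], step 1e-3
        x = i / 1000.0
        val = f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
        if val < best_val:
            best_x, best_val = x, val
    return best_x

lam, rho = 0.0, 10.0
for k in range(20):                        # outer iterations
    x = inner_global_min(lam, rho)
    lam += rho * h(x)                      # first-order multiplier update
print(f"x = {x:.4f}")                      # approaches x* = 1
```

The point of using a global inner solver, rather than a local one, is that the outer convergence theory then yields an ε-global rather than merely stationary limit.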
REFORMULATIONS IN MATHEMATICAL PROGRAMMING: DEFINITIONS AND SYSTEMATICS
, 2008
"... A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of the solution algorithms; furthermore, it is desirable that reformulations c ..."
Abstract

Cited by 19 (14 self)
A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of the solution algorithms; furthermore, it is desirable that reformulations can be carried out automatically. Reformulation techniques are very common in mathematical programming but, interestingly, they have never been studied under a common framework. This paper attempts to move some steps in this direction. We define a framework for storing and manipulating mathematical programming formulations and give several fundamental definitions categorizing reformulations into essentially four types (opt-reformulations, narrowings, relaxations and approximations). We establish some theoretical results and give reformulation examples for each type.
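A concrete instance of an exact reformulation in this spirit (an illustrative standard example, not taken from the paper): the nonlinear constraint z = xy over binary x, y can be replaced by linear inequalities, which an exhaustive check confirms.

```python
# Exact linearization of a binary product: for x, y in {0, 1},
# the nonlinear constraint z = x*y is equivalent to the linear system
#   z <= x,  z <= y,  z >= x + y - 1,  z >= 0.
# Exhaustive check over all binary assignments:

for x in (0, 1):
    for y in (0, 1):
        feasible_z = [z for z in (0, 1)
                      if z <= x and z <= y and z >= x + y - 1]
        assert feasible_z == [x * y]      # unique feasible z equals x*y
print("linearization of z = x*y verified on all binary points")
```

Because the feasible set is unchanged, this is an exact reformulation rather than a relaxation or approximation.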
A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones
 Journal of the Operations Research Society of Japan
, 2002
"... The class of POPs (polynomial optimization problems) over cones covers a wide range of optimization problems such as 01 integer linear and quadratic programs, nonconvex quadratic programs and bilinear matrix inequalities. This paper presents a new framework for convex relaxation of POPs over cones ..."
Abstract

Cited by 16 (8 self)
The class of POPs (polynomial optimization problems) over cones covers a wide range of optimization problems such as 0–1 integer linear and quadratic programs, nonconvex quadratic programs and bilinear matrix inequalities. This paper presents a new framework for convex relaxation of POPs over cones in terms of linear optimization problems over cones. It provides a unified treatment of many existing convex relaxation methods based on the lift-and-project linear programming procedure, the reformulation-linearization technique and the semidefinite programming relaxation for a variety of problems. It also extends the theory of convex relaxation methods, and thereby brings flexibility and richness to practical use of the theory.
Convex envelopes of multilinear functions over a unit hypercube and over special discrete sets
 ACTA MATHEMATICA VIETNAMICA
, 1997
"... In this paper, we present some general as well as explicit characterizations of the convex envelope of multilinear functions defined over a unit hypercube. A new approach is used to derive this characterization via a related convex hull representation obtained by applying the ReformulationLineariz ..."
Abstract

Cited by 14 (1 self)
In this paper, we present some general as well as explicit characterizations of the convex envelope of multilinear functions defined over a unit hypercube. A new approach is used to derive this characterization via a related convex hull representation obtained by applying the Reformulation-Linearization Technique (RLT) of Sherali and Adams (1990, 1994). For the special cases of multilinear functions having coefficients that are either all +1 or all −1, we develop explicit formulae for the corresponding convex envelopes. Extensions of these results are given for the case when the multilinear function is defined over discrete sets, including explicit formulae for the foregoing special cases when this discrete set is represented by generalized upper bounding (GUB) constraints in binary variables. For more general cases of multilinear functions, we also discuss how this construct can be used to generate suitable relaxations for solving nonconvex optimization problems that include such structures.
Efficient and safe global constraints for handling numerical constraint systems
 SIAM J. NUMER. ANAL
, 2005
"... Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems ..."
Abstract

Cited by 14 (3 self)
Numerical constraint systems are often handled by branch-and-prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver, a new branch-and-prune algorithm that combines Quad, local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers, and it compares well with state-of-the-art optimization solvers.
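A stripped-down branch-and-prune loop in the spirit the abstract describes, using naive interval evaluation to discard boxes (far simpler than Quad's safe linear relaxations; the equation and tolerances are an invented toy):

```python
# Minimal branch-and-prune on one variable: find x in [0, 2] with x^2 - 2 = 0.
# A box [lo, hi] is pruned when the interval evaluation of x^2 - 2 over it,
# here [lo^2 - 2, hi^2 - 2] (x^2 is monotone on [0, 2]), excludes zero;
# otherwise the box is split until it is narrower than the tolerance.

def branch_and_prune(a, b, tol=1e-9):
    boxes, solutions = [(a, b)], []
    while boxes:
        lo, hi = boxes.pop()
        f_lo, f_hi = lo * lo - 2.0, hi * hi - 2.0
        if f_lo > 0.0 or f_hi < 0.0:       # prune: 0 not in [f_lo, f_hi]
            continue
        if hi - lo < tol:                  # narrow enough: report the box
            solutions.append((lo, hi))
            continue
        mid = 0.5 * (lo + hi)              # branch: split the box
        boxes += [(lo, mid), (mid, hi)]
    return solutions

sols = branch_and_prune(0.0, 2.0)
print(sols)                                # a tiny box around sqrt(2)
```

Real solvers replace the naive interval test with the consistency filtering and safe linear relaxations discussed above, which prune far more aggressively.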