Results 1 - 10 of 20
A Global Optimization Method, αBB, for General Twice-Differentiable Constrained NLPs: I. Theoretical Advances
, 1997
Abstract

Cited by 52 (3 self)
In this paper, the deterministic global optimization algorithm αBB (α-based Branch and Bound) is presented. This algorithm offers mathematical guarantees for convergence to a point arbitrarily close to the global minimum for the large class of twice-differentiable NLPs. The key idea is the construction of a converging sequence of upper and lower bounds on the global minimum through the convex relaxation of the original problem. This relaxation is obtained by (i) replacing all nonconvex terms of special structure (i.e., bilinear, trilinear, fractional, fractional trilinear, univariate concave) with customized tight convex lower bounding functions and (ii) by utilizing some α parameters as defined by Maranas and Floudas (1994b) to generate valid convex underestimators for nonconvex terms of generic structure. In most cases, the calculation of appropriate values for the α parameters is a challenging task. A number of approaches are proposed, which rigorously generate a set of α par...
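The α-based underestimator mentioned in this abstract can be illustrated in one dimension. The following is a minimal sketch written for this listing, not code from the paper: it builds L(x) = f(x) - α(xU - x)(x - xL), where any α ≥ max(0, -½ min f'') over the box makes L convex while keeping it below f.

```python
import math

def alpha_bb_underestimator(f, alpha, x_lo, x_hi):
    # L(x) = f(x) - alpha * (x_hi - x) * (x - x_lo): the quadratic term is
    # nonnegative on [x_lo, x_hi], so L <= f there; choosing alpha with
    # f'' + 2*alpha >= 0 on the box makes L convex.
    return lambda x: f(x) - alpha * (x_hi - x) * (x - x_lo)

# Example: f(x) = sin(x) on [0, 2*pi]; f''(x) = -sin(x) >= -1 on the box,
# so alpha = 0.5 suffices (L'' = -sin(x) + 1 >= 0).
f = math.sin
x_lo, x_hi = 0.0, 2.0 * math.pi
alpha = 0.5
L = alpha_bb_underestimator(f, alpha, x_lo, x_hi)

xs = [x_lo + i * (x_hi - x_lo) / 200 for i in range(201)]
assert all(L(x) <= f(x) + 1e-12 for x in xs)      # valid lower bound on the box
assert L(x_lo) == f(x_lo) and L(x_hi) == f(x_hi)  # coincides at the endpoints
```

Minimizing the convex L over the box then yields the lower bound used in the branch-and-bound scheme; as the box shrinks under branching, L tightens toward f.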
A Primal-Relaxed Dual Global Optimization Approach
, 1993
Abstract

Cited by 42 (19 self)
A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables if necessary, converts the original problem into primal and relaxed dual subproblems that provide valid upper and lower bounds, respectively, on the global optimum. Theoretical properties are presented which allow for a rigorous solution of the relaxed dual problem. Proofs of ε-finite convergence and ε-global optimality are provided. The approach is shown to be particularly suited to (a) quadratic programming problems, (b) quadratically constrained problems, and (c) unconstrained and constrained optimization of polynomial and rational polynomial functions. The theoretical approach is illustrated through a few example problems. Finally, some further developments in the approach are briefly discussed.
A Branch and Cut Algorithm for Nonconvex Quadratically Constrained Quadratic Programming
, 1999
Abstract

Cited by 25 (5 self)
We present a branch and cut algorithm that yields, in finite time, a globally ε-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, not only within the subtree rooted at that node. In order to enhance computational speed, the structure created at any node of the tree is flexible enough to be used at other nodes. Computational results are reported. Some problems from the literature are solved for the first time with a proof of global optimality.
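The simplest RLT cuts of the kind this abstract refers to are the four "bound-factor" inequalities for a single bilinear term, known as the McCormick envelope. The sketch below is an illustration added for this listing (function name and example box are chosen here, not taken from the paper): multiplying pairs of nonnegative bound factors such as (x - xl)(y - yl) ≥ 0 and substituting w for x·y yields linear under- and overestimators of the product.

```python
def mccormick_envelope(x, y, xl, xu, yl, yu):
    # Four RLT bound-factor cuts for w = x*y with x in [xl, xu], y in [yl, yu].
    lower = max(xl * y + yl * x - xl * yl,   # from (x - xl)(y - yl) >= 0
                xu * y + yu * x - xu * yu)   # from (xu - x)(yu - y) >= 0
    upper = min(xl * y + yu * x - xl * yu,   # from (x - xl)(yu - y) >= 0
                xu * y + yl * x - xu * yl)   # from (xu - x)(y - yl) >= 0
    return lower, upper

# Sample check on the box [-1, 2] x [0, 3]: the envelope brackets x*y
# everywhere on the box.
for i in range(13):
    for j in range(13):
        x = -1.0 + 3.0 * i / 12
        y = 3.0 * j / 12
        lo, hi = mccormick_envelope(x, y, -1.0, 2.0, 0.0, 3.0)
        assert lo - 1e-9 <= x * y <= hi + 1e-9
```

The cuts are linear in x, y, and w, which is what allows a branch and cut scheme to tighten the relaxation of each quadratic term as the bounds shrink during branching.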
New Properties and Computational Improvement of the GOP Algorithm For Problems With Quadratic Objective Function and Constraints
 Journal of Global Optimization
, 1993
Abstract

Cited by 21 (10 self)
In Floudas and Visweswaran (1990, 1992), a deterministic global optimization approach was proposed for solving certain classes of nonconvex optimization problems. An algorithm, GOP, was presented for the solution of the problem through a series of primal and relaxed dual problems that provide valid upper and lower bounds, respectively, on the global solution. The algorithm was proved to have finite convergence to an ε-global optimum. In this paper, new theoretical properties are presented that help to enhance the computational performance of the GOP algorithm applied to problems of special structure. The effect of the new properties is illustrated through application of the GOP algorithm to a difficult indefinite quadratic problem, a multiperiod tankage quality problem that occurs frequently in the modeling of refinery processes, and a set of pooling/blending problems from the literature. In addition, extensive computational experience is reported for randomly generated concave and in...
Global Optimization in Parameter Estimation of Nonlinear Algebraic Models via the Error-in-Variables Approach
 Ind. Eng. Chem. Res
, 1998
Abstract

Cited by 17 (3 self)
The estimation of parameters in nonlinear algebraic models through the error-in-variables method has been widely studied from a computational standpoint. The method involves the minimization of a weighted sum of squared errors subject to the model equations. Due to the nonlinear nature of the models used, the resulting formulation is nonconvex and may contain several local minima in the region of interest. Current methods tailored for this formulation, although computationally efficient, can only attain convergence to a local solution. In this paper, a global optimization approach based on a branch and bound framework and convexification techniques for general twice differentiable nonlinear optimization problems is proposed for the parameter estimation of nonlinear algebraic models. The proposed convexification techniques exploit the mathematical properties of the formulation. Classical nonlinear estimation problems were solved and will be used to illustrate the various theoretical an...
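For context, the error-in-variables problem described in this abstract is commonly stated as the following constrained least-squares program (a generic formulation, with notation chosen here rather than taken from the paper):

```latex
\min_{\theta,\ \hat z_1,\dots,\hat z_n}\ \sum_{i=1}^{n} (z_i - \hat z_i)^{\top} W_i\, (z_i - \hat z_i)
\quad \text{subject to} \quad g(\hat z_i, \theta) = 0, \qquad i = 1,\dots,n,
```

where the $z_i$ are the measured data points, the $\hat z_i$ are the estimated "true" values of all measured variables, $\theta$ is the vector of model parameters, and the $W_i$ are weighting matrices. The nonconvexity that motivates the global approach enters through the nonlinear model constraints $g$.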
Deterministic Global Optimization In Design, Control, And Computational Chemistry
 IMA Volumes in Mathematics and its Applications : Large Scale Optimization with Applications, Part II
, 1997
Abstract

Cited by 10 (7 self)
This paper presents an overview of deterministic global optimization approaches and their applications in the areas of Process Design, Control, and Computational Chemistry. The focus is on (i) decomposition-based primal-dual methods, (ii) methods for generalized geometric programming problems, and (iii) global optimization methods for general nonlinear programming problems. The classes of mathematical problems addressed range from indefinite quadratic programs to concave programs, to quadratically constrained problems, to polynomials, to general twice continuously differentiable nonlinear optimization problems. For the majority of the presented methods, non-distributed global optimization approaches are discussed, with the exception of decomposition-based methods, where a distributed global optimization approach is presented. 1. Background. A significant effort has been expended in the last five decades toward theoretical and algorithmic studies of applications that arise...
Mixed-Integer Nonlinear Optimization in Process Synthesis
, 1998
Abstract

Cited by 7 (0 self)
The use of networks allows the representation of a variety of important engineering problems. The treatment of a particular class of network applications, the process synthesis problem, is presented in this paper. Process synthesis seeks to systematically develop process flowsheets that convert raw materials into desired products. In recent years, the optimization approach to process synthesis has shown promise in tackling this challenge. It requires the development of a network of interconnected units, the process superstructure, that represents the alternative process flowsheets. The mathematical modeling of the superstructure has a mixed set of binary and continuous variables and results in a mixed-integer optimization model. Due to the nonlinearity of chemical models, these problems are generally classified as Mixed-Integer Nonlinear Programming (MINLP) problems. A number of local optimization algorithms, developed for the solution of this class of problems, are presented in this pap...
D.C. Optimization Approach to Robust Control: Feasibility Problems
 J. of Control
, 1997
Abstract

Cited by 6 (0 self)
The feasibility problem for constant scaling in output feedback control is considered. This is an inherently difficult problem [20, 21] since the set of feasible solutions is nonconvex and may be disconnected. Nevertheless, we show that this problem can be reduced to the global maximization of a concave function over a convex set, or alternatively, to the global minimization of a convex program with an additional reverse convex constraint. Thus this feasibility problem belongs to the realm of d.c. optimization [14, 15, 32, 33], a new field which has recently emerged as an active, promising research direction in nonconvex global optimization. By exploiting the specific d.c. structure of the problem, several algorithms are proposed which at every iteration require solving only convex or linear subproblems. Analogous algorithms with new characterizations are proposed for the Bilinear Matrix Inequality (BMI) feasibility problem. 1 Introduction Consider the system given by Fig.1, ...
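As a brief illustration of why d.c. (difference-of-convex) structure is so broadly available (an illustration added for this listing, not taken from the paper): any twice-differentiable $f$ whose Hessian is bounded below, $\nabla^2 f(x) \succeq -\rho I$ on a compact convex set, admits the d.c. decomposition

```latex
f(x) \;=\; \underbrace{\Big(f(x) + \tfrac{\rho}{2}\lVert x\rVert^2\Big)}_{g(x)\ \text{convex}}
\;-\; \underbrace{\tfrac{\rho}{2}\lVert x\rVert^2}_{h(x)\ \text{convex}},
```

since adding $\tfrac{\rho}{2}\lVert x\rVert^2$ shifts the Hessian of the first term to $\nabla^2 f(x) + \rho I \succeq 0$. D.c. algorithms exploit such splittings by handling the convex part exactly and the concave part through linearization or cutting planes.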
Distributed Decomposition-based Approaches in Global Optimization
 In Proceedings of State of the Art in Global Optimization: Computational Methods and Applications (Eds. C.A. Floudas and P.M. Pardalos), Kluwer Academic Series on Nonconvex Optimization and Its Applications
, 1996
Abstract

Cited by 3 (2 self)
Recent advances in the theory of deterministic global optimization have resulted in the development of very efficient algorithmic procedures for identifying the global minimum of certain classes of nonconvex optimization problems. The advent of powerful multiprocessor machines, combined with such developments, makes it possible to tackle with substantial efficiency otherwise intractable global optimization problems. In this paper, we discuss implementation issues and computational results associated with the distributed implementation of the decomposition-based global optimization algorithm GOP [5], [6]. The NP-complete character of the global optimization problem, translated into extremely high computational requirements, had made it difficult to address problems of large size. The parallel implementation made it possible to successfully tackle the increased computational requirements in order to identify the global minimum in computationally realistic times. The key computationa...
Concavity Cuts for Disjoint Bilinear Programming
 MATHEMATICAL PROGRAMMING
, 2001
Abstract

Cited by 3 (1 self)
We pursue the study of concavity cuts for the disjoint bilinear programming problem. This optimization problem has two equivalent symmetric linear max-min reformulations, leading to two sets of concavity cuts. We first examine the depth of these cuts by considering the assumptions on the boundedness of the feasible regions of both the max-min and bilinear formulations. We next propose a branch and bound algorithm which makes use of concavity cuts. We also present a procedure that eliminates degenerate solutions. Extensive computational experience is reported. Sparse problems with up to 500 variables in each disjoint set and 100 constraints, and dense problems with up to 60 variables in each set and 60 constraints, are solved in reasonable computing times.
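To sketch where a linear max-min reformulation of this kind comes from (a generic derivation added for this listing, not reproduced from the paper): with disjoint polyhedra $X$ and $Y = \{y \ge 0 : Ay \le b\}$, the bilinear program can be rewritten by solving the inner problem in $y$ and applying LP duality,

```latex
\max_{x\in X,\ y\in Y}\ c^{\top}x + x^{\top}Q\,y + d^{\top}y
\;=\; \max_{x\in X}\Big[c^{\top}x + \max_{y\in Y}\,(d + Q^{\top}x)^{\top}y\Big]
\;=\; \max_{x\in X}\ \min_{\substack{u \ge 0 \\ A^{\top}u \,\ge\, d + Q^{\top}x}} \big(c^{\top}x + b^{\top}u\big),
```

which is a linear max-min problem; the symmetric reformulation is obtained by instead eliminating $x$. Concavity cuts can then be derived from the concave (min of linear functions) outer objective.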