Results 1–10 of 16
An Interior-Point Method for Semidefinite Programming
, 2005
"... We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as maxcut. Other appli ..."
Abstract

Cited by 207 (17 self)
 Add to MetaCart
We propose a new interior-point-based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
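The max-cut application mentioned above can be made concrete. As a sketch, the standard semidefinite relaxation of max-cut (the usual textbook formulation, not necessarily the exact one used in this paper) for a graph with Laplacian L reads:

```latex
\max_{X}\;\tfrac{1}{4}\,\langle L, X\rangle
\quad\text{s.t.}\quad \operatorname{diag}(X) = e,\; X \succeq 0,
```

where e is the all-ones vector. This matches the problem class the abstract describes: a linear objective in a matrix variable, linear equality constraints, and a positive semidefiniteness constraint.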
A trust region method based on interior point techniques for nonlinear programming
 Mathematical Programming
, 1996
"... Jorge Nocedal z An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direc ..."
Abstract

Cited by 103 (17 self)
 Add to MetaCart
Jorge Nocedal. An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second-order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
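The "sequence of barrier problems" idea can be illustrated on a toy one-dimensional problem (a minimal sketch; the problem, function name, and parameters are invented for illustration and the inner solve is done in closed form rather than by the paper's SQP/trust-region machinery):

```python
def barrier_minimize(mu0=1.0, shrink=0.1, tol=1e-8):
    """Toy log-barrier sequence for: minimize f(x) = x subject to x >= 1.

    Each barrier subproblem  min_x  x - mu*log(x - 1)  has the closed-form
    minimizer x = 1 + mu (set the derivative 1 - mu/(x - 1) to zero), so
    the inner solve is exact here; a real solver would use Newton or SQP
    steps inside a trust region instead.
    """
    mu = mu0
    x = 1.0 + mu
    while mu > tol:
        x = 1.0 + mu      # exact minimizer of the current barrier problem
        mu *= shrink      # drive the barrier parameter toward zero
    return x
```

As the barrier parameter shrinks, the iterates approach the constrained minimizer x = 1 from the strict interior, which is the essential mechanism behind barrier-based interior methods.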
Interior methods for nonlinear optimization
 SIAM Review
, 2002
"... Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interiorpoint techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their ..."
Abstract

Cited by 76 (4 self)
 Add to MetaCart
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
An interior point algorithm for large-scale nonlinear programming
 SIAM Journal on Optimization
, 1999
"... The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primaldual versions of t ..."
Abstract

Cited by 74 (17 self)
 Add to MetaCart
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
On a Homogeneous Algorithm for the Monotone Complementarity Problem
 Mathematical Programming
, 1995
"... We present a generalization of a homogeneous selfdual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need to use any "bigM" parameter or twophase method, and it generates either a solution converging towards feasibility and compleme ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
We present a generalization of a homogeneous self-dual linear programming (LP) algorithm to solving the monotone complementarity problem (MCP). The algorithm does not need any "big-M" parameter or two-phase method, and it generates either a solution converging towards feasibility and complementarity simultaneously or a certificate proving infeasibility. Moreover, if the MCP is polynomially solvable with an interior feasible starting point, then it can be polynomially solved without using or knowing such information at all. To our knowledge, this is the first interior-point, infeasible-starting algorithm for solving the MCP that possesses these desired features. Preliminary computational results are presented. Key words: monotone complementarity problem, homogeneous and self-dual, infeasible-starting algorithm.
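For intuition, the LP special case of the homogeneous self-dual embedding that this abstract generalizes can be sketched (a standard textbook form; details vary by formulation). The primal-dual pair min c^T x, Ax = b, x ≥ 0 is augmented with homogenizing scalars τ, κ ≥ 0:

```latex
Ax - b\tau = 0, \qquad
A^{\mathsf T} y + s - c\tau = 0, \qquad
b^{\mathsf T} y - c^{\mathsf T} x - \kappa = 0,
```

with x, s, τ, κ ≥ 0. In a strictly complementary solution, either τ > 0, in which case (x/τ, y/τ, s/τ) solves the primal-dual pair, or κ > 0, which certifies primal or dual infeasibility. This dichotomy is what allows a single-phase algorithm with no big-M parameter.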
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. OPTIM
, 2000
"... We propose a BFGS primaldual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists in computing approximate solutions of the optimality conditions perturbed by a sequence of posit ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
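The µ-perturbed optimality conditions can be illustrated on a one-dimensional example (a hedged sketch; the problem and function name are ours, not from the paper). For min x² subject to x ≥ 1, the perturbed KKT system replaces complementarity λ(x − 1) = 0 with λ(x − 1) = µ:

```python
import math

def perturbed_kkt_solution(mu):
    """Solve the mu-perturbed KKT system for: min x**2 s.t. x >= 1.

    Conditions:  2x - lam = 0   (stationarity)
                 lam*(x - 1) = mu   (perturbed complementarity)
    with x > 1, lam > 0. Substituting lam = 2x gives 2x**2 - 2x - mu = 0,
    whose positive root is returned in closed form.
    """
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0
```

As µ tends to 0, the perturbed solution tends to the true minimizer x = 1, tracing the central path that the algorithm follows approximately.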
A Computational Study of the Homogeneous Algorithm for Large-Scale Convex Optimization
, 1997
"... Recently the authors have proposed a homogeneous and selfdual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single phase interiorpoint type method, nevertheless it yields either an approximate optimal solution or detects a possible infeasibility of th ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single-phase interior-point method; nevertheless, it yields either an approximate optimal solution or detects a possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex optimization problems that also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and variables. The results indicate that the proposed algorithm is also practically efficient.
A Reduced Space Interior Point Strategy for Optimization of Differential Algebraic Systems
 Computers & Chemical Engineering
, 1999
"... A novel nonlinear programming (NLP) strategy is developed and applied to the optimization of differential algebraic equation (DAE) systems. Such problems, also referred to as dynamic optimization problems, are common in chemical process engineering and remain challenging applications of nonlinear pr ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
A novel nonlinear programming (NLP) strategy is developed and applied to the optimization of differential algebraic equation (DAE) systems. Such problems, also referred to as dynamic optimization problems, are common in chemical process engineering and remain challenging applications of nonlinear programming. These applications often consist of large, complex nonlinear models that result from discretizations of DAEs. Variables in the NLP model include state and control variables, with far fewer control variables than states. Moreover, all of these discretized variables have associated upper and lower bounds which can be potentially active. To deal with this large, highly constrained problem, an interior point NLP strategy is developed. Here a log barrier function is used to deal with the large number of bound constraints in order to transform the problem to an equality constrained NLP. A modified Newton method is then applied directly to this problem. In addition, this method uses an efficient decomposition of the discretized DAEs and the solution of the Newton step is performed in the reduced space of the independent variables. The resulting approach exploits many of the features of the DAE system and is performed element by element in a forward manner. Several large dynamic process optimization problems are considered to demonstrate the effectiveness of this approach; these include complex separation and reaction processes (including reactive distillation) with several hundred DAEs. NLP formulations with over 55,000 variables are considered. These problems are solved in 5 to 12 CPU minutes on small workstations. Key words: interior point; dynamic optimization; nonlinear programming
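The log-barrier transformation described in this abstract can be sketched in its generic form (the standard construction; the paper's exact formulation may differ in details). Bounds l ≤ z ≤ u on the discretized variables are folded into the objective:

```latex
\min_{z}\; f(z) \;-\; \mu \sum_i \log(z_i - l_i) \;-\; \mu \sum_i \log(u_i - z_i)
\quad\text{s.t.}\quad c(z) = 0,
```

leaving an equality-constrained NLP to which the modified Newton method can be applied directly, with the barrier parameter µ driven toward zero over successive solves.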
An Inexact Trust-Region Feasible-Point Algorithm for Nonlinear Systems of Equalities and Inequalities
 Department of Computational and Applied Mathematics, Rice University
, 1995
"... In this work we define a trustregion feasiblepoint algorithm for approximating solutions of the nonlinear system of equalities and inequalities F(x, y) = 0, y ≥ 0, where F: R^n × R^m → R^p is continuously differentiable. This formulation is quite general; the KarushKuhnTucker condi ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
In this work we define a trust-region feasible-point algorithm for approximating solutions of the nonlinear system of equalities and inequalities F(x, y) = 0, y ≥ 0, where F: R^n × R^m → R^p is continuously differentiable. This formulation is quite general; the Karush-Kuhn-Tucker conditions of a general nonlinear programming problem are an obvious example, and a set of equalities and inequalities can be transformed, using slack variables, into such a form. We will be concerned with the possibility that n, m, and p may be large and that the Jacobian matrix may be sparse and rank deficient. Exploiting the convex structure of the local model trust-region subproblem, we propose a globally convergent inexact trust-region feasible-point algorithm to minimize an arbitrary norm of the residual, say ‖F(x, y)‖_a, subject to the nonnegativity constraints. This algorithm uses a trust-region globalization strategy to determine a descent direction as an inexact solution of the local model trust-region subproblem and then uses line-search techniques to obtain an acceptable step length. We demonstrate that, under rather weak hypotheses, any accumulation point of the iteration sequence is a constrained stationary point for f = ‖F‖_a, and that the sequence of constrained residuals converges to zero.
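The feasible-point idea, driving F(x, y) toward zero while never violating y ≥ 0, can be illustrated on a toy system (a much simpler projected-gradient stand-in, not the paper's inexact trust-region method; the system F and all names are invented):

```python
def feasible_residual_descent(steps=500, lr=0.2):
    """Toy feasible-point residual minimization for
    F(x, y) = (x + y - 2, x - y) with the constraint y >= 0.

    We take gradient steps on f = 0.5*||F||^2 and project y back onto
    the feasible set y >= 0 after each step, so every iterate is
    feasible. The exact solution of F = 0 is x = y = 1.
    """
    x, y = 3.0, 0.0
    for _ in range(steps):
        f1, f2 = x + y - 2.0, x - y
        gx = f1 + f2              # df/dx via the chain rule
        gy = f1 - f2              # df/dy via the chain rule
        x -= lr * gx
        y = max(y - lr * gy, 0.0)  # projection keeps the iterate feasible
    return x, y
```

Every iterate satisfies y ≥ 0, and the residual norm decreases toward zero, which is the property the paper establishes (under much weaker hypotheses and with a far more capable trust-region subproblem solver).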