Results 1 – 9 of 9
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
 Computational Optimization and Applications
, 2002
Abstract

Cited by 84 (7 self)
Abstract. In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate problems and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
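The exactness of the ℓ1 penalty relaxation described above can be seen on a one-dimensional toy problem. The following is our own minimal sketch (not the paper's algorithm), using a crude grid search in place of the interior-point solver:

```python
def l1_penalty(f, gs, rho):
    """l1-penalized objective for inequality constraints g_i(x) <= 0."""
    return lambda x: f(x) + rho * sum(max(0.0, g(x)) for g in gs)

# Toy problem: min x  s.t.  x >= 1, written as g(x) = 1 - x <= 0; solution x = 1
f = lambda x: x
g = lambda x: 1.0 - x

P = l1_penalty(f, [g], rho=2.0)          # any rho > 1 is already "exact" here
grid = [i / 1000.0 for i in range(-3000, 3001)]
x_star = min(grid, key=P)
print(x_star)   # -> 1.0: a finite penalty recovers the constrained minimizer
```

This is the property that distinguishes the ℓ1 (exact) penalty from a quadratic one: a finite ρ larger than the optimal multiplier already yields the exact constrained solution, so ρ need not be driven to infinity.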
User's Guide for CFSQP Version 2.5: A C Code for Solving (Large Scale) Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality Constraints
, 1997
Abstract

Cited by 55 (1 self)
CFSQP is a set of C functions for the minimization of the maximum of a set of smooth objective functions (possibly a single one, or even none at all) subject to general smooth constraints (if there is no objective function, the goal is simply to find a point satisfying the constraints). If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, CFSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by CFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. When solving problems with many sequentially related constraints (or objectives), such as discretized semi-infinite programming (SIP) problems, CFSQP gives the user the option to use an algo...
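The minimax objective described above is commonly handled through the standard epigraph reformulation: min_x max_i f_i(x) is equivalent to min_{x,t} t subject to f_i(x) <= t, since for each x the smallest feasible t is exactly max_i f_i(x). A numeric sanity check of that equivalence on a toy pair of objectives (our own example, not CFSQP code):

```python
# Two smooth objectives whose pointwise maximum is minimized where they cross.
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2

# Minimizing max(f1, f2) directly is the same as minimizing t in the
# epigraph form, because the optimal t for fixed x is max(f1(x), f2(x)).
grid = [i / 1000.0 for i in range(-1000, 3001)]
x_star = min(grid, key=lambda x: max(f1(x), f2(x)))
print(x_star)   # -> 1.0, the crossing point of the two objectives
```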
Theory and implementation of numerical methods based on Runge-Kutta integration for solving optimal control problems
, 1996
User's Guide for FFSQP Version 3.7: A FORTRAN Code for Solving Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints
, 1997
Abstract

Cited by 15 (0 self)
FFSQP is a set of FORTRAN subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one, or even none at all) subject to general smooth constraints (if there is no objective function, the goal is to simply find a point satisfying the constraints). If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, FFSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by FFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. The user has the option of either requiring that the (modified) objective function decrease at each iteration after feasibility for nonlinear inequality and linear constraints has b...
Tits, A simple primal-dual feasible interior-point method for nonlinear programming with monotone descent
 Computational Optimization and Applications
, 2003
Abstract

Cited by 10 (2 self)
We propose and analyze a primal-dual interior-point method of the "feasible" type, with the additional property that the objective function decreases at each iteration. A distinctive feature of the method is the use of different barrier parameter values for each constraint, with the purpose of better steering the constructed sequence away from non-KKT stationary points. Assets of the proposed scheme include relative simplicity of the algorithm and of the convergence analysis, strong global and local convergence properties, and good performance in preliminary tests. In addition, the initial point is allowed to lie on the boundary of the feasible set.
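The per-constraint barrier parameters mentioned above generalize the classical log-barrier objective f(x) - Σ_i μ_i log(-g_i(x)). A minimal sketch of that objective (our own toy illustration, with equal parameters and a grid search rather than the paper's update rule):

```python
import math

def barrier(f, gs, mus):
    """f(x) - sum_i mu_i*log(-g_i(x)); defined for strictly feasible x (all g_i(x) < 0)."""
    return lambda x: f(x) - sum(m * math.log(-g(x)) for m, g in zip(mus, gs))

# Toy problem: min x  s.t.  0 <= x <= 2, as g1(x) = -x <= 0, g2(x) = x - 2 <= 0
f = lambda x: x
g1 = lambda x: -x
g2 = lambda x: x - 2.0

path = []
for mu in (1.0, 0.1, 0.01):
    B = barrier(f, [g1, g2], [mu, mu])   # equal mu_i here; the paper varies them per constraint
    grid = [i / 10000.0 for i in range(1, 20000)]   # strictly inside (0, 2)
    path.append(min(grid, key=B))
print(path)   # central-path minimizers decreasing toward the solution x = 0
```

Shrinking μ traces the central path toward the constrained solution; giving each constraint its own μ_i adds a per-constraint steering knob.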
A quasi-Newton penalty barrier method for convex minimization problems
, 2002
Abstract

Cited by 5 (0 self)
We describe an infeasible interior-point algorithm for convex minimization problems. The method uses quasi-Newton techniques for approximating the second derivatives and providing superlinear convergence. We propose a new feasibility control of the iterates by introducing shift variables and by penalizing them in the barrier problem. We prove global convergence under standard conditions on the problem data, without any assumption on the behavior of the algorithm.
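The shift-variable idea can be sketched on a toy problem: for a constraint g(x) <= 0 and an infeasible start, a shift s with s - g(x) > 0 makes the log barrier well defined, and penalizing s drives it back toward zero. This is our own minimal formulation for illustration, not the paper's algorithm:

```python
import math

def shifted_barrier(f, g, mu, rho):
    """f(x) + rho*s - mu*log(s - g(x)); well defined whenever s - g(x) > 0."""
    return lambda x, s: f(x) + rho * s - mu * math.log(s - g(x))

# Toy problem: min x^2  s.t.  x >= 1, i.e. g(x) = 1 - x <= 0; true solution x = 1
f = lambda x: x * x
g = lambda x: 1.0 - x

B = shifted_barrier(f, g, mu=0.01, rho=10.0)
print(B(0.0, 2.0))   # -> 20.0: finite even at the infeasible start x = 0,
                     # where the plain barrier -mu*log(-g(x)) is undefined

# Crude joint grid search over (x, s); the penalty rho*s pushes s toward 0.
xs = [i / 100.0 for i in range(-100, 201)]
ss = [j / 100.0 for j in range(1, 301)]
feasible = ((x, s) for x in xs for s in ss if s - g(x) > 0)
x_star, s_star = min(feasible, key=lambda p: B(*p))
print(x_star, s_star)   # close to the true solution x = 1, with a tiny shift
```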
Interior-Point ℓ2-Penalty Methods for Nonlinear Programming with Strong Global Convergence Properties
 Math. Programming
, 2004
Abstract

Cited by 4 (0 self)
We propose two line-search primal-dual interior-point methods that have a generic barrier-SQP outer structure and approximately solve a sequence of equality-constrained barrier subproblems. To enforce convergence for each subproblem, these methods use an ℓ2 exact penalty function, eliminating the need to drive the corresponding penalty parameter to infinity when finite multipliers exist. Instead of directly decreasing an equality-constraint infeasibility measure, these methods attain feasibility by forcing this measure to zero whenever the steps generated by the methods tend to zero. Our analysis shows that under standard assumptions, our methods have strong global convergence properties. Specifically, we show that if the penalty parameter remains bounded, any limit point of the iterate sequence is either a KKT point of the barrier subproblem, or a Fritz John (FJ) point of the original problem that fails to satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ); if the penalty parameter tends to infinity, there is a limit point that is either an infeasible FJ point of the inequality-constrained feasibility problem (an infeasible stationary point of the infeasibility measure if slack variables are added) or an FJ point of the original problem at which the MFCQ fails to hold. Numerical results are given that illustrate these outcomes.
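The ℓ2 exact penalty for equality constraints, φ(x) = f(x) + ν‖c(x)‖₂ with the (non-squared) norm, shares the exactness property cited above: a finite ν larger than the optimal multiplier norm suffices. Our own toy check, not the paper's code:

```python
def l2_penalty(f, c, nu):
    """phi(x) = f(x) + nu*||c(x)||_2; for a scalar residual the norm is abs()."""
    return lambda x: f(x) + nu * abs(c(x))

# Toy problem: min x^2  s.t.  x - 1 = 0.  The multiplier is lambda = -2,
# so exactness requires nu > |lambda| = 2.
f = lambda x: x * x
c = lambda x: x - 1.0

grid = [i / 1000.0 for i in range(-2000, 2001)]
minimizers = {nu: min(grid, key=l2_penalty(f, c, nu)) for nu in (1.0, 3.0)}
print(minimizers)   # nu=1.0 gives the infeasible point 0.5; nu=3.0 recovers x = 1.0
```

With ν below the threshold the penalized minimizer stays infeasible; once ν exceeds the multiplier norm, the exact constrained solution is recovered without ν going to infinity.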
Interactive design optimization of architectural layouts. Engineering Optimization (this issue)
, 2002
Abstract

Cited by 3 (0 self)
Many areas of design involve both quantifiable and subjective goals, preferences, and constraints. Aesthetic and other subjective aspects of design are typically ignored in optimization models because they are difficult to model with mathematics; however, they are extremely important in areas such as product design and architectural design. This article presents an interactive method for integrating mathematical optimization with human decision-making during conceptual design of architectural floorplan layouts. The optimization models and algorithms were presented in a previous article. Here, an object-oriented representation allows the designer to interact with physically relevant building objects during optimization. The designer’s interaction causes the program to dynamically change the optimization representation on-the-fly by adding, deleting, and modifying objectives, constraints, and structural units. This work presents mathematical optimization as a tool to assist the designer in refining ill-defined design problems during the early conceptual design phase. The designer can quickly explore design alternatives visually and computationally by taking advantage of computational algorithms to maintain feasibility and compute efficient solutions.
Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 477–484
Abstract
A modified multipoint shooting feasible-SQP method for optimal control of DAE systems