Results 1–10 of 10
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
 Computational Optimization and Applications
, 2002
Cited by 84 (7 self)
Abstract. In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb's strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to have all of the following properties simultaneously while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
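The ℓ1 penalty relaxation this abstract refers to is commonly written with elastic variables; the following is a standard form in our own notation (the paper's exact formulation may differ). For a problem min f(x) subject to g_i(x) ≤ 0:

```latex
\min_{x,\,s}\;\; f(x) + \rho \sum_{i} s_i
\qquad \text{subject to} \qquad g_i(x) \le s_i, \quad s_i \ge 0.
```

The relaxed problem is always strictly feasible, which provides the regularization, and at any stationary point the multipliers satisfy 0 ≤ λ_i ≤ ρ, which is how the penalty parameter ρ bounds the Lagrange multipliers.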
Multiple Centrality Corrections in a Primal-Dual Method for Linear Programming
 COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
, 1995
Cited by 48 (11 self)
A modification of the (infeasible) primal-dual interior point method is developed. The method uses multiple corrections to improve the centrality of the current iterate. The maximum number of corrections the algorithm is encouraged to make depends on the ratio of the efforts to solve and to factorize the KKT systems. For any LP problem, this ratio is determined right after preprocessing the KKT system and prior to the optimization process. The harder the factorization, the more advantageous the higher-order corrections might prove to be. The computational performance of the method is studied on the more difficult Netlib problems as well as on tougher and larger real-life LP models arising from applications. The use of multiple centrality corrections gives, on average, a 25% to 40% reduction in the number of iterations compared with the widely used second-order predictor-corrector method. This translates into 20% to 30% savings in CPU time.
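Gondzio-style multiple centrality corrections, which this abstract summarizes, try to push each complementarity product of the trial point into a target interval [β_min μ, β_max μ]; the standard description, in our notation rather than the paper's, is:

```latex
v_i = (x_i + \alpha_P \Delta x_i)(z_i + \alpha_D \Delta z_i), \qquad
t_i =
\begin{cases}
\beta_{\min}\mu - v_i, & v_i < \beta_{\min}\mu,\\
\beta_{\max}\mu - v_i, & v_i > \beta_{\max}\mu,\\
0, & \text{otherwise.}
\end{cases}
```

The corrector direction solves the same KKT system with right-hand side (0, 0, t), reusing the existing factorization, so each correction costs only one extra back-solve. That is why the solve-to-factorize effort ratio, measured once after preprocessing, determines how many corrections are worthwhile.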
Regularized Symmetric Indefinite Systems in Interior Point Methods for Linear and Quadratic Optimization
 Optimization Methods and Software
, 1998
Cited by 30 (11 self)
This paper presents linear algebra techniques used in the implementation of an interior point method for solving linear programs and convex quadratic programs with linear constraints. New regularization techniques for Newton systems, applicable to both symmetric positive definite and symmetric indefinite systems, are described. They transform the latter into quasi-definite systems, which are known to be strongly factorizable into a Cholesky-like factorization. Two different regularization techniques, primal and dual, are very well suited to the (infeasible) primal-dual interior point algorithm. This particular algorithm, with an extension of multiple centrality correctors, is implemented in our solver HOPDM. Computational results are given to illustrate the potential advantages of the approach when applied to the solution of very large linear and convex quadratic programs. Keywords: linear programming, convex quadratic programming, symmetric quasi-definite systems, primal-dual regularization, pri...
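The primal-dual regularization described here replaces the symmetric indefinite augmented system of the interior point method with a quasi-definite one; a common way to write it (our notation, not necessarily the paper's) is:

```latex
\begin{pmatrix}
-(Q + \Theta^{-1} + R_p) & A^{\mathsf T} \\
A & R_d
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r \\ h \end{pmatrix},
```

where Θ is the usual barrier scaling matrix and R_p, R_d are small positive diagonal perturbations. With R_p and R_d positive definite the matrix is symmetric quasi-definite, so an LDL^T factorization with diagonal D exists for every symmetric permutation; this is the Cholesky-like factorization the abstract mentions.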
Sensitivity Analysis in (Degenerate) Quadratic Programming
 DELFT UNIVERSITY OF TECHNOLOGY
, 1996
Cited by 7 (2 self)
In this paper we deal with sensitivity analysis in convex quadratic programming, without making assumptions on nondegeneracy, strict convexity of the objective function, or the existence of a strictly complementary solution. We show that the optimal value as a function of a right-hand-side element (or of an element of the linear part of the objective) is piecewise quadratic, where the pieces can be characterized by maximal complementary solutions and tripartitions. Further, we investigate the differentiability of this function. A new algorithm to compute the optimal value function is proposed. Finally, we discuss the advantages of this approach when applied to mean-variance portfolio models.
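The piecewise-quadratic behavior described in this abstract can be stated concretely (our notation): perturbing the right-hand side along a fixed direction Δb gives the optimal value function

```latex
\phi(\beta) \;=\; \min_{x}\;\bigl\{\, c^{\mathsf T}x + \tfrac{1}{2}\,x^{\mathsf T}Q x \;:\; Ax = b + \beta\,\Delta b,\; x \ge 0 \,\bigr\}.
```

Between two consecutive transition points the maximal complementary tripartition stays constant and φ restricted to that interval is a quadratic in β; differentiability can fail only at the transition points.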
The Optimal Set and Optimal Partition Approach to Linear and Quadratic Programming
 in Advances in Sensitivity Analysis and Parametric Programming
, 1996
Cited by 6 (3 self)
In this chapter we describe the optimal set approach to sensitivity analysis for LP. We show that optimal partitions and optimal sets remain constant between two consecutive transition points of the optimal value function. The advantage of using this approach instead of the classical approach (using optimal bases) is shown. Moreover, we present a new algorithm, based on primal and dual optimal solutions, to compute the partitions, optimal sets, and the optimal value function. We also extend some of the results to parametric quadratic programming, and discuss differences and resemblances with the linear programming case.
Basis and Tripartition Identification for Quadratic Programming and Linear Complementarity Problems - From an interior solution to an optimal basis and vice versa
, 1996
Cited by 3 (2 self)
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximal complementary solutions. Maximal complementary solutions can be characterized by optimal (tri)partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm generates a complementary basis, starting from any complementary solution. A tripartition identification algorithm generates a maximal complementary solution (and its corresponding tripartition), starting from any complementary solution. In linear programming, such algorithms were proposed by Megiddo in 1991 and by Balinski and Tucker in 1969, respectively. In this paper we present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal...
On Free Variables In Interior Point Methods
, 1997
Cited by 2 (0 self)
In this paper we have selected the primal-dual logarithmic barrier algorithm to present our ideas, because it and its modified versions are generally considered to be the most efficient in practice. The computational results presented in this paper were obtained using implementations of this algorithm. It is to be noted, however, that this choice has notational consequences only: practically any interior point method, even nonlinear ones, can be discussed in a similar linear algebra framework. Let us consider the linear programming problem
BPMPD user's manual Version 2.20
, 1997
Cited by 1 (0 self)
The purpose of this document is to describe a software package, called BPMPD, which implements the infeasible primal-dual interior point method for linear and quadratic programming problems. This manual describes how to prepare data for solution with the package, how to use BPMPD as a callable solver library, and which algorithmic options can be specified by the user. 1 Problem formulation BPMPD is a software package to solve linear (LP) and convex quadratic (QP) problems. For simplicity we introduce here the quadratic programming problem, which includes linear programming as a special case. Without loss of generality, the convex QP problem is assumed to be in the following form: min c^T x + (1/2) x^T Q x, subject to Ax = b, x >= 0, (1) where A ∈ R^{m×n} has full row rank, Q ∈ R^{n×n} is symmetric positive semidefinite, and c, x ∈ R^n, b ∈ R^m. It is to be noted that an explicit 1/2 factor appears in the quadratic term. The dual associated with this problem can be w...
Steplengths in Interior Point Algorithms of Quadratic Programming
An approach to determine primal and dual stepsizes in the infeasible interior-point primal-dual method for convex quadratic problems is presented. The approach reduces the primal and dual infeasibilities in each step and allows different stepsizes. The method is derived by investigating the efficient set of a multiobjective optimization problem. Computational results are also given. (This work was supported in part by EPSRC grant No. GR/J52655 and Hungarian Research Fund OTKA T016413.) Keywords: interior point methods, quadratic programming, steplength, efficient set 1 Introduction In the paper we will assume the convex quadratic problem (QP) in the form: min c^T x + (1/2) x^T Q x, subject to Ax = b, x >= 0, (1) where A ∈ R^{m×n} is of full row rank, Q ∈ R^{n×n} is symmetric positive semidefinite, and c, x ∈ R^n, b ∈ R^m. The dual of (1) in the Wolfe sense is defined as follows: max b^T y - (1/2) x^T Q x, ...
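The infeasible primal-dual framework used in several of the entries above can be sketched in a few lines. The following is an illustrative single Newton step for the QP (1), with separate primal and dual steplengths in the spirit of this abstract; the steplength rule and all names here are our own simplifications, not the paper's actual method.

```python
# Sketch: one Newton step of an infeasible primal-dual interior-point
# method for the QP  min c^T x + (1/2) x^T Q x  s.t.  Ax = b, x >= 0.
# Illustrative only; the steplength rule below is a textbook fraction-to-
# the-boundary heuristic, not the efficient-set rule of the paper.
import numpy as np

def newton_step(Q, A, b, c, x, y, z, sigma=0.1):
    """One primal-dual Newton step; returns updated (x, y, z)."""
    m, n = A.shape
    mu = x @ z / n                       # duality measure
    rd = Q @ x + c - A.T @ y - z         # dual residual
    rp = A @ x - b                       # primal residual
    rc = x * z - sigma * mu              # perturbed complementarity
    # Assemble the full Newton (KKT) system in (dx, dy, dz).
    K = np.block([
        [Q,          -A.T,              -np.eye(n)],
        [A,           np.zeros((m, m)),  np.zeros((m, n))],
        [np.diag(z),  np.zeros((n, m)),  np.diag(x)],
    ])
    d = np.linalg.solve(K, -np.concatenate([rd, rp, rc]))
    dx, dy, dz = d[:n], d[n:n + m], d[n + m:]
    # Separate primal and dual steplengths keeping x and z strictly positive.
    def max_step(v, dv):
        neg = dv < 0
        return min(1.0, 0.9995 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0
    alpha_p, alpha_d = max_step(x, dx), max_step(z, dz)
    return x + alpha_p * dx, y + alpha_d * dy, z + alpha_d * dz
```

Iterating this step from a strictly positive (x, z) drives the residuals and the duality measure μ toward zero; production codes would instead factorize a reduced (augmented or normal-equations) system rather than the full 3×3 block matrix.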
INTERIOR-POINT LINEAR PROGRAMMING SOLVERS
Abstract. We present an overview of available software for solving linear programming problems using interior-point methods. Some of the codes discussed include primal and dual simplex solvers as well, but we focus the discussion on the implementation of the interior-point solver. For each solver, we present the types of problems solved, available distribution modes, input formats and modeling languages, as well as algorithmic details, including problem formulation, use of higher-order corrections, presolve techniques, ordering heuristics for symbolic Cholesky factorization, and the specifics of numerical factorization. We consider both open-source and proprietary