An Interior-Point Method for Semidefinite Programming
, 2005
Cited by 207 (17 self)
Abstract: We propose a new interior-point method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
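As a hedged illustration of the barrier idea behind such methods (a generic sketch, not this paper's algorithm): to minimize ⟨C, X⟩ over X ⪰ 0 with no equality constraints, the log-det barrier subproblem min t⟨C, X⟩ − log det X has the closed-form minimizer X(t) = C⁻¹/t when C ≻ 0, so the objective value ⟨C, X(t)⟩ = n/t shrinks as the path parameter t grows. For a hypothetical diagonal 2×2 example this reduces to entrywise arithmetic:

```python
def central_path_point(c_diag, t):
    """Minimizer of t*<C,X> - log det X for diagonal C > 0:
    X(t) = C^{-1} / t, taken entrywise on the diagonal."""
    return [1.0 / (t * c) for c in c_diag]

def objective(c_diag, x_diag):
    """<C, X> = sum_i c_i * x_i for diagonal matrices."""
    return sum(c * x for c, x in zip(c_diag, x_diag))

c = [2.0, 5.0]  # a hypothetical positive definite diagonal C
for t in [1.0, 10.0, 100.0]:
    x = central_path_point(c, t)
    # objective equals n/t regardless of C, here n = 2
    print(t, objective(c, x))
```

Note that the objective along this path is n/t no matter what C is, which is the usual duality-gap interpretation of the barrier parameter.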
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
Cited by 87 (21 self)
Abstract: We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run to completion, which in practice can greatly reduce the computational effort per iteration.
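The conjugate-gradient algorithm mentioned in the abstract, in its textbook form for a symmetric positive definite system Ax = b (a generic sketch, not the authors' structure-exploiting variant), can be written as:

```python
import math

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A (dense row lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break                 # early termination is the point made above
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # a small SPD test matrix
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution: [1/11, 7/11]
```

The early-exit branch is where the paper's second observation applies: the iteration bound survives even if this loop stops before full convergence.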
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Cited by 76 (4 self)
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. OPTIM
, 2000
Cited by 13 (1 self)
Abstract: We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists in computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
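To illustrate the perturbed-optimality idea in its simplest generic form (a toy sketch, not this paper's BFGS algorithm): for min x² subject to x ≥ 0, the barrier subproblem min x² − µ log x has the stationarity condition 2x − µ/x = 0, whose solution x(µ) = √(µ/2) tends to the constrained minimizer x = 0 as µ → 0. Newton's method on the perturbed condition recovers this:

```python
def solve_perturbed(mu, x0=1.0, iters=50):
    """Newton's method on F(x) = 2x - mu/x = 0 (stationarity of
    x^2 - mu*log x for x > 0); the exact root is sqrt(mu/2)."""
    x = x0
    for _ in range(iters):
        f = 2.0 * x - mu / x
        fprime = 2.0 + mu / (x * x)
        x -= f / fprime
    return x

for mu in [1.0, 0.01, 1e-6]:
    print(mu, solve_perturbed(mu))   # approaches 0 as mu -> 0
```

Following the minimizers of these subproblems as µ decreases is the path-following pattern the abstract describes.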
An Interior Point Potential Reduction Method for Constrained Equations
, 1995
Cited by 11 (3 self)
Abstract: We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs and for linear and nonlinear complementarity problems. In general, constrained equations provide a unified formulation for many mathematical programming problems, including complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities and nonlinear programs. Combining ideas from the damped Newton and interior point methods, we present an iterative algorithm for solving a constrained system of equations and investigate its convergence properties. Specialization of the algorithm and its convergence analysis to complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities is discussed in detail. We also report the computational results of the implementation of the algo...
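The damped Newton ingredient, in its generic scalar form (a sketch with a simple residual-decrease backtracking rule, not the paper's combined method), looks like this:

```python
def damped_newton(F, dF, x0, tol=1e-12, max_iter=100):
    """Damped Newton for F(x) = 0: compute the Newton step, then halve
    it until the residual |F| decreases (simple backtracking)."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        step = -fx / dF(x)
        t = 1.0
        while abs(F(x + t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5              # damp the step until progress is made
        x += t * step
    return x

# Toy equation: F(x) = x^3 - 2, root 2**(1/3)
root = damped_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=2.0)
```

The interior-point component of the paper's method replaces this unconstrained update with steps that also respect the constraints, guided by a potential function.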
The Synchronization Problem
 in Protocol Testing and its Complexity", Inf. Proc. Letters, Vol. 40
, 1991
"... primal-dual potential reduction method for ..."
Interior-Point Methodology for Linear Programming: Duality, Sensitivity Analysis and Computational Aspects
 IN OPTIMIZATION IN PLANNING AND OPERATION OF ELECTRIC POWER SYSTEMS
, 1993
Cited by 3 (1 self)
Abstract: In this paper we use the interior point methodology to cover the main issues in linear programming: duality theory, parametric and sensitivity analysis, and algorithmic and computational aspects. The aim is to provide a global view of the subject matter.
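As a numerical illustration of the duality theory involved (a standard LP fact, not this paper's specific treatment): for a primal min cᵀx s.t. Ax = b, x ≥ 0 and its dual max bᵀy s.t. Aᵀy + s = c, s ≥ 0, the duality gap of any feasible pair equals the complementarity product, cᵀx − bᵀy = xᵀs. A tiny hand-built example, with all data hypothetical:

```python
# Hypothetical one-constraint LP: min c.x  s.t.  A x = b, x >= 0
A = [[1.0, 1.0]]
b = [1.0]
c = [1.0, 2.0]

x = [0.5, 0.5]   # primal feasible: x1 + x2 = 1, x >= 0
y = [1.0]        # dual feasible: s = c - A^T y = [0, 1] >= 0
s = [c[j] - sum(A[i][j] * y[i] for i in range(len(y)))
     for j in range(len(c))]

primal = sum(cj * xj for cj, xj in zip(c, x))   # c.x
dual = sum(bi * yi for bi, yi in zip(b, y))     # b.y
gap = primal - dual
comp = sum(xj * sj for xj, sj in zip(x, s))     # x.s, equals the gap
```

Interior-point methods drive exactly this complementarity product to zero along the central path, which is why the gap doubles as a natural stopping criterion.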
Computing Maximum Likelihood Estimators of Convex Density Functions
, 1995
Cited by 2 (0 self)
Abstract: We consider the problem of estimating a density function that is known in advance to be convex. The maximum likelihood estimator is then the solution of a linearly constrained convex minimization problem. This problem turns out to be numerically difficult. We show that interior point algorithms perform well on this class of optimization problems, though for large samples numerical difficulties are still encountered. To eliminate those difficulties, we propose a clustering scheme that is reasonable from a statistical point of view. We display results for problems with up to 40000 observations. We also give a typical picture of the estimated density: a piecewise linear function with very few pieces. Key words: interior-point method, convex estimation, maximum likelihood estimation, logarithmic-barrier method, primal-dual method.
Design Issues in Algorithms for Large Scale Nonlinear Programming
, 1999
Guanghui Liu (Ph.D. supervisor: Jorge Nocedal)
Abstract: This dissertation studies a wide range of issues in the design of interior point methods for large scale nonlinear programming. Strategies for computing high-quality steps and for rapidly decreasing barrier parameters are introduced. Preconditioning and scaling techniques are developed. To extend the range of applicability of interior methods, feasible variants are proposed, quasi-Newton updating methods are introduced, and strategies for the nonmonotone decrease of the merit function are explored. A product of this dissertation is a software package, called NITRO, that implements an interior point method using the new algorithmic features presented in the dissertation. The performance of this code is assessed by comparing it with established software for large scale optimization (SNOPT, filterSQP, and LOQO).
A Globally Convergent Interior Point Algorithm for General Non-Linear Programming Problems
, 1997
Abstract: This paper presents a primal-dual interior point algorithm for solving general constrained nonlinear programming problems. The initial problem is transformed into an equivalent equality-constrained problem, with inequality constraints incorporated into the objective function by means of a logarithmic barrier function. Satisfaction of the equality constraints is enforced through the incorporation of an adaptive quadratic penalty function into the objective. The penalty parameter is determined using a strategy that ensures a descent property for a merit function. It is shown that the adaptive penalty does not grow indefinitely. The algorithm applies Newton's method to solve the first-order optimality conditions of the equivalent equality problem. Global convergence of the algorithm is achieved through the monotonic decrease of a merit function. Locally, the algorithm is shown to be quadratically convergent. Key words: nonlinear programming, primal-dual interior point methods, adaptive pen...
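The barrier-plus-penalty construction described above can be sketched on a one-dimensional toy problem (all names and the problem itself are hypothetical, chosen only to show the shape of such a merit function): minimize (x − 2)² subject to the equality x − 1 = 0 and the bound x ≥ 0, giving the merit function φ(x) = (x − 2)² − µ log x + (ρ/2)(x − 1)².

```python
import math

def merit(x, mu, rho):
    """Log-barrier + quadratic-penalty merit for the toy problem
    min (x-2)^2  s.t.  x - 1 = 0,  x >= 0."""
    return (x - 2.0) ** 2 - mu * math.log(x) + 0.5 * rho * (x - 1.0) ** 2

def minimize_merit(mu, rho, x0=1.5, iters=100):
    """1-D Newton iteration on the merit function's stationarity condition."""
    x = x0
    for _ in range(iters):
        g = 2.0 * (x - 2.0) - mu / x + rho * (x - 1.0)   # gradient
        h = 2.0 + mu / (x * x) + rho                     # second derivative
        x -= g / h
    return x

# As mu -> 0 and rho grows, the minimizer approaches the solution x = 1.
print(minimize_merit(mu=1e-8, rho=1e6))
```

The paper's contribution includes choosing ρ adaptively so that it ensures descent yet provably does not grow without bound; this sketch simply fixes both parameters.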