Results 1–10 of 10
An Interior-Point Method for Semidefinite Programming
, 2005
Abstract

Cited by 207 (17 self)
We propose a new interior-point-based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
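For context, a hedged sketch of the standard primal SDP form this abstract refers to, together with the max-cut relaxation it mentions (the notation is ours, not taken from the paper):

```latex
% Linear objective in a matrix variable over the PSD cone:
\min_{X \succeq 0} \ \langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \quad i = 1, \dots, m.

% Max-cut relaxation with graph Laplacian L: the PSD constraint and
% unit diagonal relax the combinatorial condition x \in \{-1, 1\}^n.
\max_{X \succeq 0} \ \tfrac{1}{4} \langle L, X \rangle
\quad \text{s.t.} \quad \operatorname{diag}(X) = e.
```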
Semidefinite Programming Relaxations For The Quadratic Assignment Problem
, 1998
Abstract

Cited by 72 (25 self)
Semidefinite programming (SDP) relaxations for the quadratic assignment problem (QAP) are derived using the dual of the (homogenized) Lagrangian dual of appropriate equivalent representations of QAP. These relaxations result in the interesting, special case where only the dual problem of the SDP relaxation has strict interior, i.e., the Slater constraint qualification always fails for the primal problem. Although there is no duality gap in theory, this indicates that the relaxation cannot be solved in a numerically stable way. By exploring the geometrical structure of the relaxation, we are able to find projected SDP relaxations. These new relaxations, and their duals, satisfy the Slater constraint qualification, and so can be solved numerically using primal-dual interior-point methods. For one of our models, a preconditioned conjugate gradient method is used for solving the large linear systems which arise when finding the Newton direction. The preconditioner is found by exploiting th...
Solving Large-Scale Linear Programs by Interior-Point Methods Under the MATLAB Environment
 Optimization Methods and Software
, 1996
Abstract

Cited by 60 (2 self)
In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment. The resulting software is called LIPSOL: Linear-programming Interior-Point SOLvers. LIPSOL is designed to take advantage of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C. More importantly, our extensive computational results demonstrate that LIPSOL also attains an impressive performance comparable with that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in detail a technique for overcoming numerical instability in Cholesky factorization at the end stage of iterations in interior-point algorithms. Keywords: Linear programming, primal-dual infeasible-interior-p...
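The primal-dual infeasible-interior-point scheme this abstract describes can be sketched in a few dozen lines. This is a teaching sketch under our own assumptions, not LIPSOL: it uses dense linear algebra, a fixed centering parameter, and no Mehrotra predictor-corrector, whereas LIPSOL's performance comes precisely from sparse Cholesky factorizations.

```python
import numpy as np

def solve_lp(A, b, c, tol=1e-8, max_iter=100):
    """Minimize c^T x subject to Ax = b, x >= 0 with a basic
    infeasible primal-dual interior-point method (illustrative only)."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(max_iter):
        r_p = b - A @ x            # primal residual
        r_d = c - A.T @ y - s      # dual residual
        mu = x @ s / n             # complementarity measure
        if max(np.linalg.norm(r_p), np.linalg.norm(r_d), mu) < tol:
            break
        sigma = 0.1                # fixed centering parameter
        # Newton step on the perturbed KKT system:
        # [ 0  A^T  I ] [dx]   [ r_d              ]
        # [ A   0   0 ] [dy] = [ r_p              ]
        # [ S   0   X ] [ds]   [ sigma*mu*e - XSe ]
        KKT = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([r_d, r_p, sigma * mu * np.ones(n) - x * s])
        d = np.linalg.solve(KKT, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # fraction-to-boundary rule keeps x and s strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9995 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Example (our own): maximize x1 + 2*x2 s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6,
# in slack form; the optimum is x = (3, 1) with objective value -5.
A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
b = np.array([4., 6.])
c = np.array([-1., -2., 0., 0.])
x, y, s = solve_lp(A, b, c)
```

The 3x3 block system above is what production codes reduce to the "normal equations" A D^2 A^T dy = rhs, where the Cholesky factorization (and its end-stage instability, discussed in the paper) enters.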
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires N.-D. de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
SQP methods for large-scale nonlinear programming
, 1999
Abstract

Cited by 9 (0 self)
We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of large-scale nonlinear programming problems. Both linesearch and trust-region approaches are considered, as are the implications of interior-point and quadratic programming methods.
Mathematical Models for Transportation Demand Analysis
, 1996
Abstract

Cited by 4 (1 self)
In this paper, we will concentrate on the overspecification arising from the ASCs and will not consider other possible error sources.
Numerical Methods for Large-Scale Non-Convex Quadratic Programming
, 2001
Abstract

Cited by 2 (0 self)
We consider numerical methods for finding (weak) second-order critical points for large-scale non-convex quadratic programming problems. We describe two new methods. The first is of the active-set variety. Although convergent from any starting point, it is intended primarily for the case where a good estimate of the optimal active set can be predicted. The second is of interior-point trust-region type, and has proved capable of solving problems involving up to half a million unknowns and constraints. The solution of a key equality-constrained subproblem, common to both methods, is described. The results of comparative tests on a large set of convex and non-convex quadratic programming examples are given.
A Class of Trust Region Methods for Nonlinear Network Optimization Problems
, 1993
Abstract

Cited by 1 (0 self)
We describe the results of a series of tests upon a class of new methods of trust region type for solving the nonlinear network optimization problem. The trust region technique considered is characterized by the use of the infinity norm and of inexact projections on the network constraints. The results are encouraging and show that this approach is particularly useful in solving large-scale nonlinear network optimization problems, especially when many bound constraints are expected to be active at the solution. Key Words: Nonlinear optimization, nonlinear network optimization, trust region methods, truncated Newton methods, numerical results. 1. Introduction. We consider the problem: min_{x ∈ R^n} f(x) subject to Ax = b, l ≤ x ≤ u, (1.1) where f: R^n → R is a twice continuously differentiable partially separable function, A is an m × n node-arc incidence matrix, b ∈ R^m satisfies Σ_{i=1}^m b_i = 0, and l, u ∈ R^n. Many algorithms for solving the nonlinear network p...
Advances in Interior Point Methods for Large-Scale Linear Programming
, 2007
Abstract

Cited by 1 (0 self)
This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of symmetric neighbourhood as the driving tool for the analysis of the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity ...
An Asymptotical O(...)-Iteration Path-Following Linear Programming Algorithm That Uses Wide Neighborhoods
, 1994
Abstract
Path-following linear programming (LP) algorithms generate a sequence of points within certain neighborhoods of a central path C, which prevent iterates from prematurely getting too close to the boundary of the feasible region. Depending on the norm used, these neighborhoods include N_2(β), N_∞(β) and N_∞^-(β), where β ∈ (0, 1), and C ⊂ N_2(β) ⊂ N_∞(β) ⊂ N_∞^-(β) for each β ∈ (0, 1). A paradox is that among all existing (infeasible or feasible) path-following algorithms, the theoretical iteration complexity, O(√n L), of small-neighborhood (N_2) algorithms is significantly better than the complexity, O(nL), of wide-neighborhood (N_∞^-) algorithms, while in practice wide-neighborhood algorithms outperform small-neighborhood ones by a big margin. Here, n is the number of LP variables and L is the LP data length. In this paper, we present an O(n^{(n+1)/(2n)} L)-iteration (infeasible) primal-dual high-order algorithm that uses wide neighborhoods. Note that this iteration bound is asymptotically O(√n L), i.e., the best bound for small-neighborhood algorithms, as n increases.
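The neighborhoods named in this abstract have standard definitions in the path-following literature; a sketch in our own notation, with X = diag(x), S = diag(s), e the all-ones vector, and μ = xᵀs/n (these symbols are assumptions, not taken from the paper itself):

```latex
\mathcal{N}_2(\beta)        = \{(x, y, s) : \|XSe - \mu e\|_2 \le \beta \mu\},
\mathcal{N}_\infty(\beta)   = \{(x, y, s) : \|XSe - \mu e\|_\infty \le \beta \mu\},
\mathcal{N}_\infty^-(\beta) = \{(x, y, s) : x_i s_i \ge (1 - \beta)\mu \ \text{for all } i\}.
```

Since the infinity norm is bounded by the 2-norm, and the one-sided bound is weaker than the two-sided one, the nesting N_2(β) ⊂ N_∞(β) ⊂ N_∞^-(β) follows directly, which is why N_∞^- is the "wide" neighborhood.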