Results 1–8 of 8
A New Bound for the Quadratic Assignment Problem Based on Convex Quadratic Programming
 MATHEMATICAL PROGRAMMING
, 1999
Abstract

Cited by 34 (3 self)
We describe a new convex quadratic programming bound for the quadratic assignment problem (QAP). The construction of the bound uses a semidefinite programming representation of a basic eigenvalue bound for QAP. The new bound dominates the well-known projected eigenvalue bound, and appears to be competitive with existing bounds in the tradeoff between bound quality and computational effort.
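The basic eigenvalue bound that the construction above starts from can be illustrated numerically. The following sketch (illustrative random data, not the paper's method) uses the fact that for symmetric A and B, the minimal scalar product of their eigenvalues bounds tr(A X B Xᵀ) from below over all orthogonal, hence all permutation, matrices X:

```python
# Illustrative sketch of the basic eigenvalue bound for the QAP
#   min_X tr(A X B X^T)  over permutation matrices X,  A, B symmetric.
# Pairing the ascending eigenvalues of A with the descending eigenvalues of B
# gives a lower bound, since permutation matrices are orthogonal.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A + A.T   # random symmetric instance
B = rng.standard_normal((n, n)); B = B + B.T

lam_A = np.sort(np.linalg.eigvalsh(A))          # ascending
lam_B = np.sort(np.linalg.eigvalsh(B))[::-1]    # descending
bound = float(lam_A @ lam_B)                    # minimal scalar product

# Brute-force optimum over all n! permutation matrices (feasible for n = 4).
eye = np.eye(n)
best = min(np.trace(A @ eye[list(p)] @ B @ eye[list(p)].T)
           for p in permutations(range(n)))
assert bound <= best + 1e-9
```

The projected eigenvalue bound discussed in the abstract tightens this basic bound by restricting to the affine hull of the permutation matrices; that refinement is not shown here.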
Why a pure primal Newton barrier step may be infeasible
 SIAM J. Optim
, 1995
Solving reduced KKT systems in barrier methods for linear and quadratic programming
, 1991
Abstract

Cited by 21 (7 self)
In barrier methods for constrained optimization, the main work lies in solving large linear systems Kp = r, where K is symmetric and indefinite. For linear programs, these KKT systems are usually reduced to smaller positive-definite systems A H^{-1} A^T q = s, where H is a large principal submatrix of K. These systems can be solved more efficiently, but A H^{-1} A^T is typically more ill-conditioned than K. In order to improve the numerical properties of barrier implementations, we discuss the use of “reduced KKT systems”, whose dimension and condition lie somewhere in between those of K and A H^{-1} A^T. The approach applies to linear programs and to positive semidefinite quadratic programs whose Hessian H is at least partially diagonal. We have implemented reduced KKT systems in a primal-dual algorithm for linear programming, based on the sparse indefinite solver MA27 from the Harwell Subroutine Library. Some features of the algorithm are presented, along with results on the netlib LP test set.
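The equivalence between the full KKT system and the condensed normal-equations system can be checked on a tiny example. This is a numerical sketch with made-up data, not the paper's implementation: eliminating the diagonal block H from Kp = r yields the smaller system (A H^{-1} A^T) q = s with the same step:

```python
# Barrier-method step: solve the indefinite KKT system K p = r with
#   K = [[-H, A^T], [A, 0]],
# then verify that the condensed positive-definite system
#   (A H^{-1} A^T) dy = rp + A H^{-1} rd,   dx = H^{-1} (A^T dy - rd)
# produces the same step when H is nonsingular (diagonal here, as in LP barriers).
import numpy as np

H = np.diag([2.0, 5.0])            # diagonal barrier Hessian (positive definite)
A = np.array([[1.0, 1.0]])         # constraint matrix (1 x 2)
rd = np.array([1.0, -3.0])         # "dual" residual block of r
rp = np.array([0.5])               # "primal" residual block of r

# Full indefinite KKT system.
K = np.block([[-H, A.T], [A, np.zeros((1, 1))]])
p = np.linalg.solve(K, np.concatenate([rd, rp]))
dx_full, dy_full = p[:2], p[2:]

# Condensed system: eliminate dx using the first block row.
Hinv = np.diag(1.0 / np.diag(H))
dy = np.linalg.solve(A @ Hinv @ A.T, rp + A @ Hinv @ rd)
dx = Hinv @ (A.T @ dy - rd)

assert np.allclose(dx, dx_full) and np.allclose(dy, dy_full)
```

The reduced KKT systems proposed in the paper interpolate between these two extremes, eliminating only part of H to trade dimension against conditioning.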
The Optimal Set and Optimal Partition Approach to Linear and Quadratic Programming
 in Advances in Sensitivity Analysis and Parametric Programming
, 1996
Abstract

Cited by 11 (4 self)
In this chapter we describe the optimal set approach for sensitivity analysis for LP. We show that optimal partitions and optimal sets remain constant between two consecutive transition points of the optimal value function. The advantage of using this approach instead of the classical approach (using optimal bases) is shown. Moreover, we present a new algorithm, using primal and dual optimal solutions, to compute the partitions, optimal sets and the optimal value function. We also extend some of the results to parametric quadratic programming, and discuss differences and resemblances with the linear programming case.
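The transition-point behavior described above can be seen on a toy parametric LP (an illustration only, not the chapter's algorithm): the optimal value v(t) is piecewise linear in the parameter t, and its slope changes only at transition points, where the optimal partition changes.

```python
# Toy parametric LP:
#   v(t) = min -x1 - 2*x2   s.t.  x1 + x2 <= t,  0 <= x1 <= 1,  0 <= x2 <= 1.
# v is piecewise linear with transition points at t = 1 and t = 2; between
# consecutive transition points the optimal partition (active-constraint
# pattern at optimality) is constant, so the slope of v is constant there.

def v(t: float) -> float:
    """Exact optimum by vertex enumeration: every vertex coordinate of this
    2-D polytope lies in {0, 1, t, t - 1}, and an LP optimum is at a vertex."""
    best = float("inf")
    for x1 in (0.0, 1.0, t, t - 1.0):
        for x2 in (0.0, 1.0, t, t - 1.0):
            if 0 <= x1 <= 1 and 0 <= x2 <= 1 and x1 + x2 <= t + 1e-12:
                best = min(best, -x1 - 2 * x2)
    return best

slope = lambda a, b: (v(b) - v(a)) / (b - a)
assert abs(slope(0.25, 0.75) + 2) < 1e-9   # slope -2 on (0, 1): increase x2
assert abs(slope(1.25, 1.75) + 1) < 1e-9   # slope -1 on (1, 2): x2 fixed at 1
assert abs(slope(2.25, 2.75)) < 1e-9       # slope  0 past t = 2: constraint slack
```

Classical basis-based sensitivity analysis can report misleading ranges under degeneracy; the partition-based ranges above are exactly the intervals between transition points.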
Basis and Tripartition Identification for Quadratic Programming and Linear Complementarity Problems: From an interior solution to an optimal basis and vice versa
, 1996
Abstract

Cited by 3 (2 self)
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximal complementary solutions. Maximal complementary solutions can be characterized by optimal (tri)partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A tripartition identification algorithm is an algorithm which generates a maximal complementary solution (and its corresponding tripartition), starting from any complementary solution. In linear programming such algorithms were respectively proposed by Megiddo in 1991 and Balinski and Tucker in 1969. In this paper we will present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal...
Computing Maximum Likelihood Estimators of Convex Density Functions
, 1995
Abstract

Cited by 2 (0 self)
We consider the problem of estimating a density function that is known in advance to be convex. The maximum likelihood estimator is then the solution of a linearly constrained convex minimization problem. This problem turns out to be numerically difficult. We show that interior point algorithms perform well on this class of optimization problems, though for large samples, numerical difficulties are still encountered. To eliminate those difficulties, we propose a clustering scheme that is reasonable from a statistical point of view. We display results for problems with up to 40000 observations. We also give a typical picture of the estimated density: a piecewise linear function with only very few pieces. Key words: interior-point method, convex estimation, maximum likelihood estimation, logarithmic-barrier method, primal-dual method.
An InteriorPoint Method for General LargeScale Quadratic Programming Problems
 Annals of Operations Research
, 1996
Abstract

Cited by 1 (0 self)
In this paper we present an interior point algorithm for solving both convex and nonconvex quadratic programs. The method, which is an extension of our interior point work on linear programming problems, efficiently solves a wide class of large-scale problems and forms the basis for a sequential quadratic programming (SQP) solver for general large-scale nonlinear programs. The key to the algorithm is a 3-dimensional cost-improvement subproblem, which is solved at every iteration. We have developed an approximate recentering procedure and a novel, adaptive big-M Phase I procedure that are essential to its success. We describe the basic method along with the recentering and big-M Phase I procedures. Details of the implementation and computational results are also presented. Keywords: big-M Phase I procedure, convex quadratic programming, interior point methods, linear programming, method of centers, multidirectional search direction, nonconvex quadratic programming, recentering.
RICE UNIVERSITY: The Use of Optimization Techniques in the Solution of Partial Differential Equations
, 1996
Abstract
Acknowledgments This thesis is a very important milestone in a journey I began more than ten years ago. People too numerous to mention have helped me along the way; a few are singled out here. When I was an undergraduate at the University of Maryland, Baltimore County, the Mathematics faculty, in particular Professors James Greenberg, Søren Jensen, and Marc Teboulle, taught me to love applied mathematics; their patience with me was endless and I will always be grateful to them.