Results 1 – 8 of 8
LOQO: An interior point code for quadratic programming
, 1994
Cited by 156 (9 self)
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
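The quasidefiniteness property the abstract emphasizes can be illustrated concretely. A minimal sketch, with illustrative data (not from the paper): the reduced KKT matrix of a primal-dual interior-point step for a convex QP has a 2x2 block structure whose (1,1) block is negative definite and whose (2,2) block is positive definite, which is exactly symmetric quasidefiniteness.

```python
import numpy as np

# Tiny convex QP: min 0.5 x'Qx + c'x  s.t.  Ax = b, x >= 0.  The reduced KKT
# matrix of a primal-dual interior-point step has the block form
#     K = [ -(Q + D)   A' ]
#         [    A       E  ]
# with D, E positive diagonal (from the current iterate); K is symmetric
# quasidefinite, so an LDL' factorization exists under ANY symmetric
# permutation.  Data below is illustrative only.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
D = np.diag([0.3, 0.7])          # e.g. X^{-1} Z scaling at the current iterate
E = np.diag([0.1])               # slack scaling / regularization

K = np.block([[-(Q + D), A.T],
              [A,        E  ]])

# Quasidefiniteness check: (1,1) block negative definite, (2,2) positive definite.
assert np.all(np.linalg.eigvalsh(-(Q + D)) < 0)
assert np.all(np.linalg.eigvalsh(E) > 0)
```

The practical payoff, as the abstract notes, is that sparsity-minimizing symmetric orderings can be chosen freely without endangering the factorization.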
Proximal Minimization Methods with Generalized Bregman Functions
 SIAM JOURNAL ON CONTROL AND OPTIMIZATION
, 1995
Cited by 37 (0 self)
We consider methods for minimizing a convex function f that generate a sequence {x^k} by taking x^{k+1} to be an approximate minimizer of f(x) + D_h(x, x^k)/c_k, where c_k > 0 and D_h is the D-function of a Bregman function h. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs.
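A concrete instance of the iteration the abstract describes, as a sketch with toy data: taking h(x) to be the negative entropy makes D_h the Kullback-Leibler divergence, and for a linear objective the Bregman proximal step has a closed form (the exponentiated-gradient update).

```python
import numpy as np

# Bregman proximal step with h(x) = sum x log x, whose D-function is
# D_h(x, y) = sum x log(x/y) - x + y  (KL divergence).  For a LINEAR
# objective f(x) = g.x the step
#     x^{k+1} = argmin_x  f(x) + D_h(x, x^k) / c_k
# has the closed form x^{k+1} = x^k * exp(-c_k g).  Toy run, not the
# paper's algorithm verbatim.
def bregman_prox_step(x, g, c):
    return x * np.exp(-c * g)

g = np.array([1.0, -0.5, 0.2])     # gradient of a linear objective
x = np.ones(3)
for k in range(10):
    x = bregman_prox_step(x, g, c=0.2)

# Iterates stay strictly positive (the barrier effect of the entropy kernel)
# and decrease the objective relative to the starting point.
assert np.all(x > 0)
assert g @ x < g @ np.ones(3)
```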
Penalty/Barrier Multiplier Methods for Convex Programming Problems
 SIAM Journal on Optimization
, 1995
Cited by 34 (15 self)
We study a class of methods for solving convex programs, which are based on nonquadratic augmented Lagrangians for which the penalty parameters are functions of the multipliers. This gives rise to Lagrangians which are nonlinear in the multipliers. Each augmented Lagrangian is specified by a choice of a penalty function φ and a penalty-updating function π. The requirements on φ are mild, and allow for the inclusion of most of the previously suggested augmented Lagrangians. More importantly, a new type of penalty/barrier function (having a logarithmic branch glued to a quadratic branch) is introduced and used to construct an efficient algorithm. Convergence of the algorithms is proved for the case of π being a sublinear function of the dual multipliers. The algorithms are tested on large-scale quadratically constrained problems arising in structural optimization.
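The "logarithmic branch glued to a quadratic branch" idea can be sketched directly. Below is one simple way to do the gluing, assuming an illustrative breakpoint and matching value, first, and second derivative there so the result is C^2; the paper's exact function may differ.

```python
import numpy as np

# Log-quadratic penalty sketch: below the breakpoint TAU use the barrier
# -log(1 - t); above it, switch to the second-order Taylor quadratic of the
# log branch at TAU, so the glued function is C^2 and finite everywhere.
# TAU = 0.5 is an illustrative choice, not taken from the paper.
TAU = 0.5

def phi(t):
    t = np.asarray(t, dtype=float)
    log_part = -np.log1p(-np.minimum(t, TAU))          # -log(1 - t), guarded
    d1 = 1.0 / (1.0 - TAU)                             # phi'(TAU)
    d2 = d1 * d1                                       # phi''(TAU)
    quad_part = -np.log1p(-TAU) + d1 * (t - TAU) + 0.5 * d2 * (t - TAU) ** 2
    return np.where(t <= TAU, log_part, quad_part)

# phi(0) = 0, phi'(0) = 1 (checked by finite differences), and phi stays
# finite past t = 1, where the pure log barrier would blow up.
assert abs(phi(0.0)) < 1e-12
assert abs((phi(1e-6) - phi(-1e-6)) / 2e-6 - 1.0) < 1e-4
assert np.isfinite(phi(2.0))
```

The quadratic branch is what removes the infinite barrier, which is the point of the construction: the penalty is defined for all argument values, yet behaves like a barrier near the feasible region.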
LargeScale Nonlinear Constrained Optimization: A Current Survey
, 1994
Cited by 9 (0 self)
Much progress has been made in constrained nonlinear optimization in the past ten years, but most large-scale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithms based upon trust regions and line searches. In addition, the importance of software, numerical linear algebra and testing will be addressed. We will try to explain why the difficulties arise, how attempts are being made to overcome them and some of the problems that still remain. Although there will be some emphasis on the LANCELOT and CUTE projects, the intention is to give a broad picture of the state of the art.
1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA
2 Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France
3 Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England ...
Large Scale Unconstrained Optimization
 The State of the Art in Numerical Analysis
, 1996
Cited by 6 (0 self)
This paper reviews advances in Newton, quasi-Newton and conjugate gradient methods for large-scale optimization. It also describes several packages developed during the last ten years, and illustrates their performance on some practical problems. Much attention is given to the concept of partial separability, which is gaining importance with the arrival of automatic differentiation tools and of optimization software that fully exploits its properties.
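Partial separability, the concept this survey highlights, is easy to show on a toy function: f is a sum of element functions each involving only a few variables, so the full Hessian is assembled from small element Hessians and stays sparse. An illustrative chained example:

```python
import numpy as np

# Partially separable function: f(x) = sum_i (x_i - x_{i+1})^2.
# Element i depends only on (x_i, x_{i+1}), so its Hessian is a 2x2 block
# and the assembled Hessian is tridiagonal -- this is the structure that
# large-scale Newton-type codes exploit.  Illustrative sketch.
def f(x):
    return np.sum((x[:-1] - x[1:]) ** 2)

def hessian(n):
    H = np.zeros((n, n))
    elem = np.array([[2.0, -2.0], [-2.0, 2.0]])   # element Hessian
    for i in range(n - 1):
        H[i:i+2, i:i+2] += elem                   # scatter-add into place
    return H

H = hessian(5)
assert np.count_nonzero(H) == 13   # tridiagonal 5x5: sparse, not dense
```

Automatic differentiation fits naturally here because each small element function can be differentiated independently and its derivatives scattered into the global structure.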
A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
 Math. of Computation
, 1997
Cited by 5 (1 self)
We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combined in a Lagrangian barrier function. A sequence of such functions is approximately minimized within the domain defined by the simple bounds. Global convergence of the sequence of generated iterates to a first-order stationary point for the original problem is established. Furthermore, possible numerical difficulties associated with barrier function methods are avoided, as it is shown that a potentially troublesome penalty parameter is bounded away from zero. This paper is a companion to previous work of ours on augmented Lagrangian methods.
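A sketch of the kind of merit function involved, with illustrative toy data: a shifted Lagrangian barrier combines the objective with weighted log terms of the shifted constraints, and is only defined where every shifted constraint is positive. The function form below is one common shape for this family; the paper's precise definition may differ in detail.

```python
import numpy as np

# Shifted Lagrangian barrier sketch:
#     Psi(x; lam, s) = f(x) - sum_i lam_i * s_i * log(c_i(x) + s_i)
# for inequality constraints c_i(x) >= 0 with positive shifts s_i and
# multiplier estimates lam_i.  All data below is illustrative only.
def lagrangian_barrier(f, cons, x, lam, s):
    c = cons(x)
    if np.any(c + s <= 0):
        return np.inf                      # outside the shifted barrier domain
    return f(x) - np.sum(lam * s * np.log(c + s))

f = lambda x: 0.5 * np.dot(x, x)           # toy objective
cons = lambda x: np.array([x[0] - 1.0])    # single constraint x_0 >= 1

x = np.array([1.5, 0.0])
val = lagrangian_barrier(f, cons, x, lam=np.array([1.0]), s=np.array([0.1]))
assert np.isfinite(val)                    # shifted constraint value 0.6 > 0
```

The shifts s_i are what distinguish this from a pure log barrier: slightly infeasible points remain in the domain, which is tied to the boundedness of the penalty parameter the abstract mentions.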
Primaldual optimization methods in neural networks and support vector machines training
, 1999
Cited by 4 (1 self)
Recently a lot of attention has been given to applications of mathematical programming to machine learning and neural networks. In this tutorial we investigate the application of Interior Point Methods (IPMs) to Support Vector Machine (SVM) and Artificial Neural Network (ANN) training. The training of ANNs is a highly nonconvex optimization problem, in contrast to the SVM training problem, which is a convex optimization problem. Specifically, training an SVM is equivalent to solving a linearly constrained quadratic programming (QP) problem in a number of variables equal to the number of data points. This problem becomes quite challenging when the size of the data becomes of the order of some thousands. IPMs have been shown to be quite promising for solving large-scale linear and quadratic programming problems. We focus on primal-dual IPMs applied to SVMs and neural networks and investigate the problem of reducing their computational complexity. We also develop a new class of incremental nonlinear primal-dual techniques for artificial neural network training and provide preliminary experimental results for financial forecasting problems.
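The QP the abstract refers to is the SVM dual, with one variable per data point. A sketch that only assembles the QP data for a linear kernel on random toy points (solving it would require a QP solver, e.g. an interior-point code):

```python
import numpy as np

# SVM dual training problem:
#     min_a  0.5 a' H a - 1' a   s.t.  y' a = 0,  0 <= a_i <= C,
# with H_ij = y_i y_j K(x_i, x_j).  One variable per data point, so the QP
# dimension grows with the data set, which is the scalability issue the
# tutorial addresses.  Illustrative assembly only, linear kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))                 # 6 points, 2 features
y = np.array([1, 1, 1, -1, -1, -1.0])
C = 1.0                                     # box-constraint bound

K = X @ X.T                                 # linear kernel Gram matrix
H = (y[:, None] * y[None, :]) * K           # QP Hessian

assert np.allclose(H, H.T)
assert np.all(np.linalg.eigvalsh(H) > -1e-10)   # positive semidefinite
```

Because H is positive semidefinite by construction, the dual is a convex QP, exactly the problem class where primal-dual interior-point methods are effective.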
Steplengths in Interior Point Algorithms of Quadratic Programming
An approach to determine primal and dual stepsizes in the infeasible interior-point primal-dual method for convex quadratic problems is presented. The approach reduces the primal and dual infeasibilities in each step and allows different stepsizes. The method is derived by investigating the efficient set of a multiobjective optimization problem. Computational results are also given.
Keywords: interior point methods, quadratic programming, steplength, efficient set
1 Introduction
In the paper we will assume the convex quadratic problem (QP) in the form:
min c^T x + (1/2) x^T Q x   subject to   Ax = b,  x >= 0,   (1)
where A ∈ R^{m×n} is of full row rank, Q ∈ R^{n×n} is symmetric positive semidefinite and c, x ∈ R^n, b ∈ R^m. The dual of (1) in the Wolfe sense is defined as follows:
max b^T y − (1/2) x^T Q x ...
This work was supported in part by EPSRC grant No. GR/J52655 and Hungarian Research Fund OTKA T016413. H-1518 Budapest, P.O. Box 63, Hungary.
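The steplength question this entry addresses can be illustrated with the classic per-side ratio test, which already allows different primal and dual stepsizes; the paper's own efficient-set rule is a refinement of this idea. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Given a primal-dual search direction (dx, dz), how far can each side move
# while keeping x > 0 and z > 0?  The fraction-to-the-boundary ratio test
# (safety factor eta = 0.995) computes an independent steplength per side.
# Illustrative sketch, not the paper's efficient-set rule.
def max_step(v, dv, eta=0.995):
    neg = dv < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, eta * float(np.min(-v[neg] / dv[neg])))

x = np.array([1.0, 2.0]); dx = np.array([-0.5, 1.0])
z = np.array([0.5, 0.5]); dz = np.array([0.2, -1.0])

alpha_p = max_step(x, dx)   # primal steplength
alpha_d = max_step(z, dz)   # dual steplength
assert np.all(x + alpha_p * dx > 0)
assert np.all(z + alpha_d * dz > 0)
```

Allowing alpha_p != alpha_d typically lets one side take a full step even when the other is blocked near the boundary, which is what makes different stepsizes attractive.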