Results 11–20 of 55
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyous-gard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires N.-D. de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
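The alternation described in this abstract can be sketched as follows. This is our own illustration with a hypothetical interface (`pd_step`, `newton_step`, `merit` are placeholders, not the authors' code): attempt the classical primal-dual step, and fall back to the Newton-like step whenever the merit function fails to decrease.

```python
def alternating_minimize(x, pd_step, newton_step, merit, max_iter=200, tol=1e-8):
    """Sketch of the alternation from the abstract: try the primal-dual
    step first; if it does not decrease the merit function, take the
    Newton-like safeguard step instead."""
    for _ in range(max_iter):
        trial = pd_step(x)
        if merit(trial) >= merit(x):      # no descent: use the safeguard step
            trial = newton_step(x)
        if abs(merit(x) - merit(trial)) < tol:
            return trial
        x = trial
    return x

# Toy usage on a 1-D quadratic merit function.
merit = lambda x: (x - 3.0) ** 2
pd_step = lambda x: x - 0.4 * 2.0 * (x - 3.0)   # a damped gradient step
newton_step = lambda x: 3.0                      # exact minimizer of the toy merit
x_star = alternating_minimize(0.0, pd_step, newton_step, merit)
```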
A Class of Preconditioners for Weighted Least Squares Problems
, 1999
Abstract

Cited by 16 (11 self)
We consider solving a sequence of weighted linear least squares problems where the changes from one problem to the next are the weights and the right-hand side (or data). This is the case for primal-dual interior-point methods. We derive a class of preconditioners based on a low-rank correction to a Cholesky factorization of a weighted normal equation coefficient matrix with the previous weight. Key Words. Weighted linear least squares, Preconditioners, Preconditioned conjugate gradient for least squares, Linear programming, Primal-dual infeasible-interior-point algorithms. 1 Introduction In this paper, we present a class of preconditioners based on low-rank corrections to the Cholesky factorization of a weighted normal equation coefficient matrix. This class of preconditioners leads to good performance for interior-point methods for linear programming. In particular, we have implemented a primal-dual Newton method to test this class of preconditioners. The numerical results on large scale...
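A minimal NumPy/SciPy sketch of the low-rank idea, under our own assumptions (the rule of correcting the rows whose weights changed the most is a hypothetical illustration, not the paper's construction): factor the previous weighted normal matrix plus a rank-k update, then use it to precondition CG on the current normal equations.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n, k = 200, 30, 5                     # rows, cols, rank of the correction
A = rng.standard_normal((m, n))
w_old = rng.uniform(0.5, 2.0, m)         # weights of the previous problem
w_new = w_old * rng.uniform(0.9, 1.1, m)
w_new[:k] *= 50.0                        # a few weights change sharply, as in IPMs

# Preconditioner: previous normal matrix plus a rank-k correction on the
# rows whose weights changed the most (hypothetical selection rule).
idx = np.argsort(-np.abs(w_new - w_old))[:k]
P = A.T @ (w_old[:, None] * A)
P += A[idx].T @ ((w_new[idx] - w_old[idx])[:, None] * A[idx])
chol = cho_factor(P)
M = LinearOperator((n, n), matvec=lambda v: cho_solve(chol, v))

# Current weighted normal equations A^T W A x = A^T W b, solved by PCG.
H = A.T @ (w_new[:, None] * A)
b = A.T @ (w_new * rng.standard_normal(m))
x, info = cg(H, b, M=M)
```

Because the rank-k correction captures the dominant weight changes, the preconditioned matrix stays close to the identity and CG converges in few iterations.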
Structure Exploiting Tool in Algebraic Modeling Languages
, 1998
Abstract

Cited by 16 (11 self)
A new concept is proposed for linking an algebraic modeling language with a structure-exploiting solver. SPI (Structure Passing Interface) is a program that retrieves the structure of the anonymous mathematical program built by the algebraic modeling language. SPI passes the special structure of the problem to an SES (Structure Exploiting Solver). The integration of SPI and SES leads to SET (Structure Exploiting Tool) and can be used with any algebraic modeling language. Key words. Algebraic modeling language, large-scale optimization, structure-exploiting solver. 1 Introduction Practitioners who use mathematical programming are confronted with a dilemma. On the one hand, their problems are usually so large and so complex that they cannot be modeled without the aid of an algebraic modeling language. On the other hand, large models often necessitate the use of a specialized structure-exploiting solver. Unfortunately, algebraic modeling languages only access general-purpose This r...
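The SPI idea can be illustrated with a toy routine, entirely our own sketch (the function name and interface are hypothetical, not the paper's API): given the flat constraint matrix a modeling language produces, plus block labels, hand the diagonal blocks to a structure-exploiting solver.

```python
import numpy as np

def pass_structure(A, row_block, col_block, nblocks):
    """Hypothetical SPI-style routine (our illustration): recover the
    diagonal blocks of an 'anonymous' constraint matrix A from
    modeler-supplied row/column block labels, so a structure-exploiting
    solver can treat the blocks independently."""
    row_block = np.asarray(row_block)
    col_block = np.asarray(col_block)
    return [A[np.ix_(np.flatnonzero(row_block == k),
                     np.flatnonzero(col_block == k))]
            for k in range(nblocks)]

# Toy usage: a 4x4 matrix with two 2x2 diagonal blocks.
A = np.block([[np.ones((2, 2)), np.zeros((2, 2))],
              [np.zeros((2, 2)), 2 * np.ones((2, 2))]])
blocks = pass_structure(A, [0, 0, 1, 1], [0, 0, 1, 1], 2)
```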
Exploiting Structure in Parallel Implementation of Interior Point Methods for Optimization
 School of Mathematics, University of Edinburgh, Edinburgh
, 2004
INTERIOR POINT METHODS FOR COMBINATORIAL OPTIMIZATION
, 1995
Abstract

Cited by 14 (9 self)
Research on using interior point algorithms to solve combinatorial optimization and integer programming problems is surveyed. This paper discusses branch-and-cut methods for integer programming problems, a potential reduction method based on transforming an integer programming problem into an equivalent non-convex quadratic programming problem, interior point methods for solving network flow problems, and methods for solving multicommodity flow problems, including an interior point column generation algorithm.
Interior Point Methods: Current Status And Future Directions
, 1997
Abstract

Cited by 13 (0 self)
This article provides a synopsis of the major developments in interior point methods for mathematical programming in the last thirteen years, and discusses current and future research directions in interior point methods, with a brief selective guide to the research literature. AMS Subject Classification: 90C, 90C05, 90C60. Keywords: Linear Programming, Newton's Method, Interior Point Methods, Barrier Method, Semidefinite Programming, Self-Concordance, Convex Programming, Condition Numbers. 1 An earlier version of this article has previously appeared in OPTIMA, Mathematical Programming Society Newsletter No. 51, 1996. 2 M.I.T. Sloan School of Management, Building E40-149A, Cambridge, MA 02139, USA. Email: rfreund@mit.edu 3 The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106, JAPAN. Email: mizuno@ism.ac.jp 1 Introduction and Synopsis The purpose of this article is twofold: to provide a synopsis of the major developments in ...
Inexact Constraint Preconditioners for Linear Systems Arising in Interior Point Methods
, 2005
Abstract

Cited by 12 (7 self)
Abstract. Issues of indefinite preconditioning of reduced Newton systems arising in optimization with interior point methods are addressed in this paper. Constraint preconditioners have shown much promise in this context. However, there are situations in which an unfavorable sparsity pattern of the Jacobian matrix may adversely affect the preconditioner and make its inverse representation unacceptably dense, and hence too expensive to be used in practice. A remedy for such situations is proposed in this paper. An approximate constraint preconditioner is considered in which a sparse approximation of the Jacobian is used instead of the complete matrix. A spectral analysis of the preconditioned matrix is performed and bounds on its non-unit eigenvalues are provided. Preliminary computational results are encouraging. Keywords: Interior-point methods, Iterative solvers, Preconditioners, Approximate Jacobian.
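For context, a constraint preconditioner for a saddle-point (reduced Newton) system typically takes the following form; the notation is our own sketch of the standard setup, not reproduced from the paper:

```latex
K = \begin{pmatrix} H & J^{T} \\ J & 0 \end{pmatrix},
\qquad
P = \begin{pmatrix} G & \tilde{J}^{T} \\ \tilde{J} & 0 \end{pmatrix},
```

where $G$ approximates the Hessian block $H$ (often by its diagonal) and, in the inexact variant described above, $\tilde{J}$ is a sparse approximation of the Jacobian $J$; the paper's spectral analysis then bounds the non-unit eigenvalues of the preconditioned matrix.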
HYPERGRAPH PARTITIONING-BASED FILL-REDUCING ORDERING
, 2009
Abstract

Cited by 9 (5 self)
A typical first step of a direct solver for the linear system Mx = b is reordering of the symmetric matrix M to improve the execution time and space requirements of the solution process. In this work, we propose a novel nested-dissection-based ordering approach that utilizes hypergraph partitioning. Our approach is based on the formulation of the graph partitioning by vertex separator (GPVS) problem as a hypergraph partitioning problem. This new formulation is immune to the deficiency of GPVS in a multilevel framework and hence enables better orderings. In matrix terms, our method relies on the existence of a structural factorization of the input matrix M in the form M = AA^T (or M = AD^2A^T). We show that the partitioning of the row-net hypergraph representation of the rectangular matrix A induces a GPVS of the standard graph representation of the matrix M. In the absence of such a factorization, we also propose simple, yet effective structural factorization techniques that are based on finding an edge clique cover of the standard graph representation of the matrix M, and hence applicable to an arbitrary symmetric matrix M. Our experimental evaluation has shown that the proposed method achieves better orderings in comparison to state-of-the-art graph-based ordering tools, even for symmetric matrices where a structural M = AA^T factorization is not provided as an input. For matrices coming from linear programming problems, our method enables even faster and better orderings.
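The row-net hypergraph mentioned in the abstract is simple to construct; the following is our own illustration of the standard definition, not the paper's code: one vertex per column of A, and one net per row containing that row's nonzero columns.

```python
import numpy as np

def row_net_hypergraph(A):
    """Our illustration of the row-net hypergraph of a rectangular
    matrix A: vertices are the columns of A, and each row i defines a
    net containing every column j with A[i, j] != 0.  Partitioning this
    hypergraph induces a vertex separator on the standard graph of
    M = A @ A.T, as described in the abstract."""
    return [set(np.flatnonzero(A[i]).tolist()) for i in range(A.shape[0])]

# Toy usage: two rows sharing column 1 give the nets {0, 1} and {1, 2}.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0]])
nets = row_net_hypergraph(A)
```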
Column Generation with a Primal-Dual Method
, 1997
Abstract

Cited by 9 (2 self)
A simple column generation scheme that employs an interior point method to solve the underlying restricted master problems is presented. In contrast with the classical column generation approach, where restricted master problems are solved exactly, the method presented in this paper consists in solving them to a predetermined optimality tolerance (loose at the beginning and appropriately tightened as the optimum is approached). An infeasible primal-dual interior point method which employs the notion of µ-center to control the distance to optimality is used to solve the restricted master problem. Similarly to the analytic center cutting plane method, the present approach takes full advantage of the use of central prices. Furthermore, it offers more freedom in the choice of optimization strategy as it adaptively adjusts the required optimality tolerance in the master to the observed rate of convergence of the column generation process. The proposed method has been implemented and used to solv...
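The loose-then-tight tolerance schedule can be sketched as a plain column generation loop. This is our own toy sketch with a hypothetical interface (`solve_master` and `price` are placeholders, and the demo collapses the "dual information" to a single incumbent value), not the authors' implementation:

```python
def column_generation(solve_master, price, tol_final=1e-8, tol0=1e-2, shrink=0.1):
    """Sketch: solve restricted masters only to tolerance `tol`,
    tightening `tol` whenever pricing finds no column whose reduced
    cost is below -tol, and stopping once the final tolerance holds."""
    tol = tol0
    columns = []
    while True:
        value, duals = solve_master(columns, tol)   # inexact master solve
        new_col, reduced_cost = price(duals)        # pricing subproblem
        if reduced_cost >= -tol:                    # no useful column at this accuracy
            if tol <= tol_final:
                return value, columns
            tol = max(tol * shrink, tol_final)      # tighten and continue
        else:
            columns.append(new_col)

# Toy demo: "columns" are numbers, the master value is the best chosen
# number, and pricing proposes the globally best remaining candidate.
pool = [5.0, 3.0, 2.0, 1.5, 1.2, 1.1]

def solve_master(columns, tol):
    value = min(columns) if columns else 10.0
    return value, value          # "duals" collapse to the incumbent here

def price(incumbent):
    best = min(pool)
    return best, best - incumbent

value, cols = column_generation(solve_master, price)
```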
Smoothed Analysis of Condition Numbers and Complexity Implications for Linear Programming
, 2009
Abstract

Cited by 7 (0 self)
We perform a smoothed analysis of Renegar's condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector b̄, and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1, E_{A,b,c}[log C(A, b, c)] = O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ^2 and C(A, b, c) is the condition number of the linear program defined by (A, b, c). From this bound, we obtain a smoothed analysis of interior point algorithms. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of interior point algorithms for linear programming is O(n^3 log(nd/σ)).