Results 1-10 of 77
A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems: A Summary
 Research Report RJ7493 (70008), IBM Almaden Research Center
, 1990
Abstract

Cited by 146 (8 self)
This note summarizes a report with the same title, which studies a unified approach, proposed by Kojima, Mizuno and Yoshise, to interior point algorithms for the linear complementarity problem with a positive semidefinite matrix. The approach is extended to nonsymmetric matrices with nonnegative principal minors.
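As a concrete illustration of the problem class this abstract concerns, the sketch below checks whether a point solves a linear complementarity problem with a positive semidefinite matrix. This is a minimal plain-Python illustration; the helper names (`matvec`, `is_lcp_solution`) and the example data are assumptions for the sketch, not anything from the report.

```python
# LCP(q, M): find x >= 0 with s = M x + q >= 0 and x_i * s_i = 0 for all i.
# Plain-Python lists stand in for vectors and matrices.

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def is_lcp_solution(M, q, x, tol=1e-9):
    """Check nonnegativity of x, of the slack s = M x + q, and complementarity."""
    s = [mi + qi for mi, qi in zip(matvec(M, x), q)]
    return (all(xi >= -tol for xi in x)
            and all(si >= -tol for si in s)
            and all(abs(xi * si) <= tol for xi, si in zip(x, s)))

# A positive semidefinite M; here the solution makes the slack vanish.
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
x = [1.0 / 3.0, 1.0 / 3.0]          # solves M x = -q, so s = 0
print(is_lcp_solution(M, q, x))     # -> True
```

Interior point methods of the kind surveyed here relax the complementarity condition to x_i s_i = mu and drive mu toward zero while staying strictly positive.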
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
 Mathematical Programming
, 1995
Abstract

Cited by 87 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
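The truncated-CG idea in this abstract can be sketched on a toy problem: conjugate gradients applied to the normal equations of a small least-squares system, with an iteration cap standing in for "not run until completion". This is a plain-Python illustration under assumed names (`cg`, `normal_op`), not the authors' method.

```python
# CG for a symmetric positive definite operator, with a hard iteration cap.

def cg(apply_M, rhs, iters):
    """Conjugate gradient; `iters` truncates the inner solve."""
    x = [0.0] * len(rhs)
    r = rhs[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Mp = apply_M(p)
        alpha = rs / sum(pi * mpi for pi, mpi in zip(p, Mp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * mpi for ri, mpi in zip(r, Mp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # 3x2 least-squares matrix
b = [1.0, 2.0, 3.0]

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def normal_op(v):                            # v -> A^T (A v), SPD here
    Av = matvec(A, v)
    return [sum(A[i][j] * Av[i] for i in range(3)) for j in range(2)]

rhs = [sum(A[i][j] * b[i] for i in range(3)) for j in range(2)]   # A^T b
x = cg(normal_op, rhs, iters=2)              # truncated: at most 2 CG steps
print(x)   # -> close to [1.0, 2.0], the least-squares solution
```

For this 2-variable system CG converges in two steps; the point of the truncation bound in the paper is that far fewer inner iterations than full convergence can suffice.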
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
Abstract

Cited by 65 (14 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
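The quasiconvex structure the abstract relies on can be shown on fixed matrices: for B positive definite, the largest generalized eigenvalue of (A, B) is the smallest lambda with lambda*B - A positive semidefinite, so it is computable by bisection on that feasibility test. The sketch below assumes 2x2 symmetric matrices and an illustrative PSD test via principal minors; it is not the paper's method-of-centers algorithm.

```python
def psd2(M, eps=1e-12):
    """PSD test for a symmetric 2x2 matrix via its principal minors."""
    (a, b), (_, c) = M
    return a >= -eps and c >= -eps and a * c - b * b >= -eps

def lambda_max(A, B, lo=0.0, hi=100.0, iters=60):
    """Bisect for the smallest lambda with lambda*B - A positive semidefinite."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        S = [[mid * B[i][j] - A[i][j] for j in range(2)] for i in range(2)]
        if psd2(S):
            hi = mid      # feasible: lambda can shrink
        else:
            lo = mid      # infeasible: lambda must grow
    return hi

A = [[2.0, 0.0], [0.0, 3.0]]
B = [[2.0, 0.0], [0.0, 1.0]]      # positive definite
print(lambda_max(A, B))           # -> ~3.0 (generalized eigenvalues are 1 and 3)
```

In the paper the matrices depend affinely on decision variables, so each feasibility test becomes a matrix inequality and bisection is replaced by an interior point scheme.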
Indefinite Trust Region Subproblems And Nonsymmetric Eigenvalue Perturbations
, 1995
Abstract

Cited by 58 (18 self)
This paper extends the theory of trust region subproblems in two ways: (i) it allows indefinite inner products in the quadratic constraint, and (ii) it uses a two-sided (upper and lower bound) quadratic constraint. Characterizations of optimality are presented, which have no gap between necessity and sufficiency. Conditions for the existence of solutions are given in terms of the definiteness of a matrix pencil. A simple dual program is intro...
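For background, the classical one-sided trust region subproblem that this paper generalizes can be sketched directly. The code below assumes a diagonal Hessian and a Euclidean ball, and finds the boundary solution via the optimality system (H + lam*I) p = -g with lam large enough to make H + lam*I positive semidefinite; it is a simpler special case, not the paper's indefinite, two-sided setting, and it ignores the so-called hard case.

```python
def trs_diag(h, g, Delta, iters=80):
    """Boundary solution of min g.p + 0.5 p.H.p, ||p|| <= Delta, H = diag(h).

    p(lam)_i = -g_i / (h_i + lam); ||p(lam)|| decreases in lam, so bisect
    for ||p(lam)|| = Delta over lam >= max(0, -min(h)).
    """
    lo = max(0.0, -min(h)) + 1e-12   # (H + lam I) must be positive semidefinite
    hi = lo + 1.0
    norm = lambda lam: sum((gi / (hi_ + lam)) ** 2 for gi, hi_ in zip(g, h)) ** 0.5
    while norm(hi) > Delta:          # grow hi until ||p(hi)|| <= Delta
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if norm(mid) > Delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [-gi / (hi_ + lam) for gi, hi_ in zip(g, h)], lam

p, lam = trs_diag([-1.0, 2.0], [1.0, 1.0], 1.0)    # note: indefinite H
print(lam, sum(v * v for v in p) ** 0.5)           # lam ~2.03, ||p|| ~1.0
```

Note that the indefinite Hessian poses no difficulty once lam exceeds the negative curvature; the paper's contribution is a gap-free optimality theory when the constraint itself is indefinite and two-sided.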
An Implementation Of Karmarkar's Algorithm For Linear Programming
 Mathematical Programming
, 1986
Abstract

Cited by 57 (4 self)
This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0.

1. INTRODUCTION
We describe in this paper a family of interior point power series affine scaling algorithms based on the linear programming algorithm presented by Karmarkar (1984). Two algorithms from this family, corresponding to first and second order pow...
A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems
 SIAM Journal on Scientific Computing
, 1999
Abstract

Cited by 35 (1 self)
A subspace adaptation of the Coleman-Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version.
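The interior scaling at the heart of the Coleman-Li approach can be sketched in isolation: the gradient is scaled by the distance to the bound the step moves toward, so the scaled step vanishes as an iterate approaches an active bound. The sketch below pairs this standard scaling with plain gradient descent; the trust region and subspace machinery of the method are omitted, and the step size and test problem are assumptions.

```python
def coleman_li_scaling(x, g, l, u):
    """v_i = distance from x_i to the bound that -g_i points toward."""
    return [xi - ui if gi < 0 else xi - li
            for xi, gi, li, ui in zip(x, g, l, u)]

def scaled_gradient_descent(grad, x, l, u, step=0.25, iters=200):
    for _ in range(iters):
        g = grad(x)
        v = coleman_li_scaling(x, g, l, u)
        # D^2 = diag(|v|): scaled steepest-descent step is -|v_i| g_i
        x = [xi - step * abs(vi) * gi for xi, vi, gi in zip(x, v, g)]
    return x

# min (x - 2)^2 on [0, 1]: the constrained minimizer is x = 1.
grad = lambda x: [2.0 * (x[0] - 2.0)]
x = scaled_gradient_descent(grad, [0.5], [0.0], [1.0])
print(x)   # -> close to [1.0]
```

Because the scaling factor shrinks with the remaining distance to the bound, the iterates approach the active bound monotonically without ever stepping outside the box.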
OPTIMALITY, COMPUTATION, AND INTERPRETATION OF NONNEGATIVE MATRIX FACTORIZATIONS
 SIAM JOURNAL ON MATRIX ANALYSIS
, 2004
Abstract

Cited by 31 (5 self)
The notion of low rank approximations arises from many important applications. When the low rank data are further required to comprise nonnegative values only, the approach of nonnegative matrix factorization is particularly appealing. This paper intends to bring about three points. First, the theoretical Kuhn-Tucker optimality condition is described in explicit form. Second, a number of numerical techniques, old and new, are suggested for nonnegative matrix factorization problems. Third, the techniques are applied to two real-world applications to demonstrate the difficulty of interpreting the factorizations.
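One well-known family of numerical techniques in this area is the Lee-Seung multiplicative update for min ||A - WH||_F^2 with W, H >= 0. The sketch below is a minimal plain-Python version (the matrix helpers and rank-1 test matrix are assumptions for the sketch); multiplicative updates preserve nonnegativity because every factor in the update is nonnegative.

```python
import random

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

def nmf(A, k, iters=500, seed=0):
    """Rank-k NMF of a nonnegative matrix via Lee-Seung multiplicative updates."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-12
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, A), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(A, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# A rank-1 nonnegative matrix should factor (almost) exactly with k = 1.
A = [[1.0, 2.0], [2.0, 4.0]]
W, H = nmf(A, 1)
R = matmul(W, H)
err = sum((A[i][j] - R[i][j]) ** 2 for i in range(2) for j in range(2))
print(err)   # -> near 0
```

A stationary point of these updates satisfies the Kuhn-Tucker conditions the paper makes explicit, which is why they serve as a baseline among the techniques surveyed.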
The Many Facets of Linear Programming
, 2000
Abstract

Cited by 25 (1 self)
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods.

Key words: linear programming, history, simplex method, ellipsoid method, interior-point methods

1. Introduction
At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949 on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the 1975 Nobe...
An Implementation Of The Dual Affine Scaling Algorithm For Minimum Cost Flow On Bipartite Uncapacitated Networks
 SIAM Journal on Optimization
, 1993
Abstract

Cited by 24 (3 self)
We describe an implementation of the dual affine scaling algorithm for linear programming specialized to solve minimum cost flow problems on bipartite uncapacitated networks. This implementation uses a preconditioned conjugate gradient algorithm to solve the system of linear equations that determines the search direction at each iteration of the interior point algorithm. Two preconditioners are considered: a diagonal preconditioner and a preconditioner based on the incidence matrix of an approximate maximum weighted spanning tree of the network. Under dual nondegeneracy, this spanning tree allows for early identification of the optimal solution. Applying an ε-perturbation to the cost vector, an optimal extreme-point primal solution is produced in the presence of dual degeneracy. The implementation is tested by solving several large instances of randomly generated assignment problems, comparing solution times with the network simplex code netflo and the relaxation algorithm code re...
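Of the two preconditioners mentioned, the diagonal (Jacobi) one is simple enough to sketch: precondition CG by the inverse of the diagonal of the system matrix. This is a generic plain-Python illustration on an assumed small SPD system, not the paper's network-specialized implementation.

```python
def pcg(A, b, iters=50, tol=1e-12):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner for SPD A."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    mv = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
    x = [0.0] * n
    r = b[:]
    z = [mi * ri for mi, ri in zip(Minv, r)]      # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(iters):
        Ap = mv(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [mi * ri for mi, ri in zip(Minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [1.0, 2.0]
x = pcg(A, b)
print(x)   # -> solves A x = b, i.e. about [0.0909, 0.6364]
```

A spanning tree preconditioner, as in the paper, replaces the diagonal with the incidence matrix of a tree, capturing far more of the network structure at modest extra cost per iteration.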