Results 1–10 of 63
An Interior-Point Method for Semidefinite Programming
, 2005
Cited by 207 (17 self)
Abstract:
We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as maxcut. Other applications include maxmin eigenvalue problems and relaxations for the stable set problem.
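The maxcut application mentioned in this abstract can be illustrated numerically. The sketch below (a minimal illustration using numpy, not the paper's interior-point algorithm; the 5-cycle graph is an arbitrary choice) shows why the semidefinite program upper-bounds the maxcut value: every cut vector x yields a feasible matrix X = xxᵀ with the SDP objective equal to the cut value.

```python
import itertools
import numpy as np

# Toy graph: 5-cycle. Laplacian L = D - A.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Brute-force maxcut: the cut value of a +/-1 labelling x is (1/4) x^T L x.
best_cut = max(
    0.25 * x @ L @ x
    for bits in itertools.product([-1, 1], repeat=n)
    for x in [np.array(bits, dtype=float)]
)

# Any cut vector x gives a feasible SDP point X = x x^T (diag(X) = 1, X >= 0)
# whose objective (1/4) tr(L X) equals the cut value, so the SDP optimum
# over all feasible X upper-bounds the maxcut value.
x = np.array([1, -1, 1, -1, 1], dtype=float)
X = np.outer(x, x)
assert np.allclose(np.diag(X), 1)
assert np.all(np.linalg.eigvalsh(X) >= -1e-9)
print(best_cut, 0.25 * np.trace(L @ X))
```

For the odd cycle the brute-force optimum cuts 4 of the 5 edges, and the displayed feasible point achieves exactly that value.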
The Mathematics Of Eigenvalue Optimization
, 2003
Cited by 92 (13 self)
Abstract:
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite representability, and then outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a unifying narrative thread.
Large-Scale Optimization of Eigenvalues
 SIAM J. Optimization
, 1991
Cited by 83 (4 self)
Abstract:
Optimization problems involving eigenvalues arise in many applications. Let x be a vector of real parameters and let A(x) be a continuously differentiable symmetric matrix function of x. We consider a particular problem which occurs frequently: the minimization of the maximum eigenvalue of A(x), subject to linear constraints and bounds on x. The eigenvalues of A(x) are not differentiable at points x where they coalesce, so the optimization problem is said to be nonsmooth. Furthermore, it is typically the case that the optimization objective tends to make eigenvalues coalesce at a solution point. There are three main purposes of the paper. The first is to present a clear and self-contained derivation of the Clarke generalized gradient of the max eigenvalue function in terms of a "dual matrix". The second purpose is to describe a new algorithm, based on the ideas of a previous paper by the author (SIAM J. Matrix Anal. Appl. 9 (1988) 256–268), which is suitable for solving l...
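The nonsmoothness at coalescence described in this abstract is easy to observe numerically. The sketch below (a minimal illustration using numpy; the 2×2 family A(x) is an arbitrary choice) shows a max eigenvalue function with a kink exactly where the two eigenvalues meet.

```python
import numpy as np

# A(x) = [[1+x, 0], [0, 1-x]]: eigenvalues 1+x and 1-x coalesce at x = 0,
# so the max eigenvalue is 1 + |x| -- nondifferentiable exactly where the
# eigenvalues meet, which is also the minimizer.
def lam_max(x):
    A = np.array([[1.0 + x, 0.0], [0.0, 1.0 - x]])
    return np.linalg.eigvalsh(A)[-1]

for x in (-0.5, 0.0, 0.5):
    assert np.isclose(lam_max(x), 1.0 + abs(x))

# One-sided difference quotients at 0 disagree (+1 vs -1): a kink,
# not a smooth stationary point, even though 0 minimizes lam_max.
h = 1e-6
print((lam_max(h) - lam_max(0.0)) / h, (lam_max(0.0) - lam_max(-h)) / h)
```

This is the phenomenon the abstract points at: the objective drives the eigenvalues together, so the minimizer sits precisely at a point of nondifferentiability.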
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
Cited by 65 (14 self)
Abstract:
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
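The quasiconvexity claimed in this abstract can be checked empirically. The sketch below (a minimal numerical illustration using numpy; the random affine families and the seed are arbitrary choices, with B(x) kept positive definite on the probed interval) evaluates the largest generalized eigenvalue of an affine pair (A(x), B(x)) and verifies the midpoint property of quasiconvex functions along a segment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(M):
    return (M + M.T) / 2

# Affine families A(x) = A0 + x*A1, B(x) = B0 + x*B1 in a scalar x.
A0, A1 = sym(rng.standard_normal((4, 4))), sym(rng.standard_normal((4, 4)))
B1 = sym(rng.standard_normal((4, 4)))
B0 = 10 * np.eye(4)  # dominates B1, keeping B(x) > 0 for |x| <= 1

def lam_max(x):
    A, B = A0 + x * A1, B0 + x * B1
    # Largest generalized eigenvalue of (A, B): max eigenvalue of
    # B^(-1/2) A B^(-1/2), computed from the eigendecomposition of B.
    w, V = np.linalg.eigh(B)
    Bih = V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.eigvalsh(Bih @ A @ Bih)[-1]

# Quasiconvexity: f(midpoint) <= max of the endpoint values, for every
# segment we probe (each ratio u^T A(x) u / u^T B(x) u is linear-fractional
# in x, and a supremum of quasiconvex functions is quasiconvex).
xs = np.linspace(-1, 1, 21)
for a in xs:
    for b in xs:
        assert lam_max((a + b) / 2) <= max(lam_max(a), lam_max(b)) + 1e-9
```

The function need not be convex along the segment, which is exactly why the abstract emphasizes that nonheuristic stopping criteria are harder to obtain than in the convex case.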
Optimality Conditions and Duality Theory for Minimizing Sums of the Largest Eigenvalues of Symmetric Matrices
, 1993
Cited by 64 (4 self)
Abstract:
This paper gives max characterizations for the sum of the largest eigenvalues of a symmetric matrix. The elements which achieve the maximum provide a concise characterization of the generalized gradient of the eigenvalue sum in terms of a dual matrix. The dual matrix provides the information required to either verify first-order optimality conditions at a point or to generate a descent direction for the eigenvalue sum from that point, splitting a multiple eigenvalue if necessary. A model minimization algorithm is outlined, and connections with the classical literature on sums of eigenvalues are explained. Sums of the largest eigenvalues in absolute value are also addressed.
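A standard max characterization of the kind this abstract describes is the Ky Fan identity: the sum of the k largest eigenvalues of a symmetric A equals max{tr(UᵀAU) : UᵀU = I_k}. The sketch below (a minimal numerical check using numpy; matrix size, k, and seed are arbitrary choices) verifies both the equality at the maximizer and the inequality at random orthonormal frames.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
top_k_sum = w[-k:].sum()

# The maximum is attained at U = eigenvectors of the k largest eigenvalues.
U = V[:, -k:]
assert np.isclose(np.trace(U.T @ A @ U), top_k_sum)

# Every other orthonormal n-by-k frame Q gives tr(Q^T A Q) <= top_k_sum.
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    assert np.trace(Q.T @ A @ Q) <= top_k_sum + 1e-9
```

The maximizing frames are exactly the "elements which achieve the maximum" from which the paper builds its dual-matrix characterization of the generalized gradient.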
On Extending Some Primal-Dual Interior-Point Algorithms From Linear Programming to Semidefinite Programming
 SIAM Journal on Optimization
, 1998
Cited by 55 (1 self)
Abstract:
This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg-Rendl-Vanderbei-Wolkowicz [5] and Kojima-Shindoh-Hara [11], and recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [11] and also in [15] through different means and in different forms. In this paper, we give a concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear programming (LP), producing basic relationships that have compact forms almost identical to their counterparts in LP. We also introduce a new formulation of the central path and variable-metric measures of centrality. These results provide convenient tools for deriving polynomiality results for primal-dual algorithms extended from LP to SDP using the aforementioned and related search directions. We present examples...
Derivatives of Spectral Functions
, 1996
Cited by 48 (13 self)
Abstract:
A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X, λ_1(X) ≥ λ_2(X) ≥ … ≥ λ_n(X), and hence may be written f(λ_1(X), λ_2(X), …, λ_n(X)) for some symmetric function f. Such functions appear in a wide variety of matrix optimization problems. We give a simple proof that this spectral function is differentiable at X if and only if the function f is differentiable at the vector λ(X), and we give a concise formula for the derivative. We then apply this formula to deduce an analogous expression for the Clarke generalized gradient of the spectral function. A similar result holds for real symmetric matrices. 1 Introduction and notation Optimization problems involving a symmetric matrix variable, X say, frequently involve symmetric functions of the eigenvalues of X in the objective or constraints. Examples include the maximum eigenvalue of X, or log(det X) (for positive definite X), or eigenvalue constraints such as positive semidefinit...
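The derivative formula this abstract refers to reads ∇(f∘λ)(X) = U diag(∇f(λ(X))) Uᵀ, where X = U diag(λ(X)) Uᵀ is an eigendecomposition. The sketch below (a minimal numerical check using numpy; the choice f(v) = Σvᵢ², so that F(X) = tr(X²) with classical gradient 2X, and the seed are arbitrary) verifies the formula against the known gradient and a finite difference.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
X = (X + X.T) / 2

# Spectral function F(X) = f(lambda(X)) with f(v) = sum(v**2), so
# F(X) = tr(X^2). The formula grad F(X) = U diag(grad f(lambda)) U^T
# should reproduce the classical gradient 2X.
w, U = np.linalg.eigh(X)
grad_via_formula = U @ np.diag(2 * w) @ U.T
assert np.allclose(grad_via_formula, 2 * X)

# Finite-difference check of a directional derivative <grad F(X), D>.
D = rng.standard_normal((4, 4)); D = (D + D.T) / 2
F = lambda M: np.trace(M @ M)
h = 1e-6
fd = (F(X + h * D) - F(X - h * D)) / (2 * h)
assert np.isclose(fd, np.sum(grad_via_formula * D), atol=1e-4)
```

The point of the paper is that this pattern holds for any symmetric f, not just polynomials in the eigenvalues, with differentiability of F at X equivalent to differentiability of f at λ(X).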
On Extending Primal-Dual Interior-Point Algorithms from Linear Programming to Semidefinite Programming
, 1995
Cited by 47 (0 self)
Abstract:
This work concerns primal-dual interior-point methods for semidefinite programming (SDP) that use a linearized complementarity equation originally proposed by Kojima, Shindoh and Hara [11], and recently rediscovered by Monteiro [15] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [11] and also in [15] through different means and in different forms. In this paper, we give a very short derivation of the key equalities and inequalities along the exact line used in linear programming (LP), producing basic relationships that have highly compact forms almost identical to their counterparts in LP. We also introduce a new definition of the central path and variable-metric measures of centrality. These results provide convenient tools for extending existing polynomiality results for many, if not most, algorithms from LP to SDP with little complication. We present examples of such extensions, including the long-step infeasible...
Convex analysis on the Hermitian matrices
 SIAM Journal on Optimization
, 1996
Cited by 45 (20 self)
Abstract:
There is growing interest in optimization problems with real symmetric matrices as variables. Generally the matrix functions involved are spectral: they depend only on the eigenvalues of the matrix. It is known that convex spectral functions can be characterized exactly as symmetric convex functions of the eigenvalues. A new approach to this characterization is given, via a simple Fenchel conjugacy formula. We then apply this formula to derive expressions for subdifferentials, and to study duality relationships for convex optimization problems with positive semidefinite matrices as variables. Analogous results hold for Hermitian matrices. Key Words: convexity, matrix function, Schur convexity, Fenchel duality, subdifferential, unitarily invariant, spectral function, positive semidefinite programming, quasi-Newton update. AMS 1991 Subject Classification: Primary 15A45 49N15 Secondary 90C25 65K10 1 Introduction A matrix norm on the n × n complex matrices is called unitarily inv...
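The Fenchel conjugacy formula this abstract alludes to states that (f∘λ)* = f*∘λ for symmetric convex f. The sketch below (a minimal numerical illustration using numpy; the self-conjugate choice f(v) = ½‖v‖², which makes F(X) = ½‖X‖_F² on symmetric matrices, and the seed are arbitrary) checks the prediction and sanity-checks the supremum defining the conjugate.

```python
import numpy as np

rng = np.random.default_rng(3)
Y = rng.standard_normal((4, 4)); Y = (Y + Y.T) / 2

# f(v) = 0.5 * ||v||^2 is its own Fenchel conjugate, so the formula
# (f o lambda)* = f* o lambda predicts F*(Y) = 0.5 * sum(lambda(Y)^2),
# which for symmetric Y is just 0.5 * ||Y||_F^2.
predicted = 0.5 * np.sum(np.linalg.eigvalsh(Y) ** 2)
assert np.isclose(predicted, 0.5 * np.linalg.norm(Y, "fro") ** 2)

# Sanity check of F*(Y) = sup_X <X, Y> - F(X): the value at X = Y matches
# the prediction, and sampled X never exceed it (complete the square).
F = lambda M: 0.5 * np.sum(np.linalg.eigvalsh(M) ** 2)
gap_at_Y = np.sum(Y * Y) - F(Y)
assert np.isclose(gap_at_Y, predicted)
for _ in range(200):
    X = rng.standard_normal((4, 4)); X = (X + X.T) / 2
    assert np.sum(X * Y) - F(X) <= predicted + 1e-9
```

For this particular f the identity is elementary; the paper's contribution is that the same conjugacy formula holds for general symmetric f, yielding subdifferentials and duality results for semidefinite problems.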