Results 1 - 7 of 7
The Mathematics Of Eigenvalue Optimization
, 2003
Abstract

Cited by 92 (13 self)
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite representability, and then outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a unifying narrative thread.
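The convexity of spectral functions that this abstract refers to can be checked numerically. A minimal sketch (not from the essay, assuming NumPy): the largest-eigenvalue function is convex on symmetric matrices, so it satisfies the convexity inequality along any line segment.

```python
import numpy as np

# Illustrative only: verify the convexity inequality for lambda_max,
# a prototypical spectral function, on random symmetric matrices.
rng = np.random.default_rng(0)

def lam_max(M):
    # eigvalsh returns eigenvalues in ascending order for symmetric input
    return np.linalg.eigvalsh(M)[-1]

def random_symmetric(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

A, B = random_symmetric(5), random_symmetric(5)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs = lam_max(t * A + (1 - t) * B)
    rhs = t * lam_max(A) + (1 - t) * lam_max(B)
    assert lhs <= rhs + 1e-12   # convexity: f(tA + (1-t)B) <= t f(A) + (1-t) f(B)
```

The same inequality fails for nonsymmetric matrices in general, which is one reason the nonsymmetric case in the essay requires different, nonconvex tools.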
Polynomial interior point cutting plane methods
 Optimization Methods and Software
, 2003
Abstract

Cited by 15 (8 self)
Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the relaxation. Typically, these cutting plane methods can be developed so as to exhibit polynomial convergence. The volumetric cutting plane algorithm achieves the theoretical minimum number of calls to a separation oracle. Long-step versions of the algorithms for solving convex optimization problems are presented.
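The relax-center-cut loop described above can be sketched in one dimension, where the "relaxation" is an interval, the "approximate center" is its midpoint, and the sign of the derivative plays the role of the separation oracle. This is a hedged illustration of the generic scheme, not the barrier or volumetric method from the paper.

```python
# One-dimensional analogue of a center cutting plane method for
# minimizing a differentiable convex function on [lo, hi].
def cutting_plane_1d(dfdx, lo, hi, tol=1e-8):
    while hi - lo > tol:
        c = 0.5 * (lo + hi)   # approximate center of the current relaxation
        g = dfdx(c)           # call the separation oracle at the center
        if g > 0:
            hi = c            # minimizer lies left of c: add the cut x <= c
        else:
            lo = c            # minimizer lies right of c: add the cut x >= c
    return 0.5 * (lo + hi)

# Example: minimize f(x) = (x - 1)^2, whose derivative is 2(x - 1).
x_star = cutting_plane_1d(lambda x: 2.0 * (x - 1.0), -5.0, 5.0)
```

Each oracle call halves the localization set, mirroring how the surveyed methods shrink the relaxation by a guaranteed factor per cut.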
New Complexity Analysis of the Primal-Dual Newton Method for Linear Optimization
, 1998
Abstract

Cited by 11 (7 self)
We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. Our new analysis not only provides a unified way to analyze both large-update and small-update methods, but also improves the known iteration bounds.
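The primal-dual Newton step the abstract analyzes solves a standard linear system at each iterate. A minimal sketch for a tiny standard-form LP (min c^T x s.t. Ax = b, x >= 0), assuming NumPy; the specific proximity measure of Jansen et al. is not reproduced here, only the Newton system for the mu-center conditions A dx = 0, A^T dy + ds = 0, S dx + X ds = mu e - XSe.

```python
import numpy as np

# Tiny example data and a strictly feasible primal-dual point.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

x = np.array([0.5, 0.5])   # primal interior point, A x = b, x > 0
y = np.array([0.0])
s = c - A.T @ y            # dual slacks; here s = c > 0
mu = 0.1                   # target value of the barrier parameter
n = len(x)

# Assemble and solve the primal-dual Newton system for (dx, dy, ds).
K = np.block([
    [np.zeros((n, n)), A.T,              np.eye(n)],        # A^T dy + ds = 0
    [A,                np.zeros((1, 1)), np.zeros((1, n))], # A dx = 0
    [np.diag(s),       np.zeros((n, 1)), np.diag(x)],       # S dx + X ds = mu e - XSe
])
rhs = np.concatenate([np.zeros(n), np.zeros(1), mu * np.ones(n) - x * s])
d = np.linalg.solve(K, rhs)
dx, dy, ds = d[:n], d[n:n + 1], d[n + 1:]
```

A large-update method would decrease mu aggressively between such steps, a small-update method conservatively; the abstract's contribution is a single analysis covering both regimes.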
Theoretical Convergence of Large-Step Primal-Dual Interior Point Algorithms for Linear Programming
 Mathematical Programming
, 1992
Abstract

Cited by 8 (0 self)
This paper proposes two sets of rules, Rule G and Rule P, for controlling step lengths in a generic primal-dual interior point method for solving the linear programming problem in standard form and its dual. Theoretically, Rule G ensures global convergence, while Rule P, which is a special case of Rule G, ensures O(nL)-iteration polynomial-time computational complexity. Both rules depend only on the lengths of the steps from the current iterates in the primal and dual spaces to the respective boundaries of the primal and dual feasible regions. They rely neither on neighborhoods of the central trajectory nor on a potential function. These rules allow large steps without performing any line search. Rule G is flexible enough for implementation in practically efficient primal-dual interior point algorithms. Key words: Primal-Dual Interior Point Algorithm, Linear Program, Large Step, Global Convergence, Polynomial-Time Convergence. Abbreviated Title: Large-Step Primal-Dual...
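The quantity both rules depend on, the distance from an interior iterate to the boundary of the feasible region along the search direction, has a simple closed form for the nonnegativity constraints. A hedged sketch (illustrative, not the paper's Rule G or Rule P), assuming NumPy:

```python
import numpy as np

def step_to_boundary(x, dx):
    """Largest alpha >= 0 with x + alpha*dx >= 0, for interior x > 0."""
    neg = dx < 0
    if not np.any(neg):
        return np.inf               # direction never leaves the orthant
    return np.min(-x[neg] / dx[neg])  # binding ratio over decreasing components

x = np.array([1.0, 2.0, 0.5])
dx = np.array([-0.5, 1.0, -0.25])
alpha_max = step_to_boundary(x, dx)   # step rules take a fraction of this
```

Rules of this kind need no line search: the boundary distance is computed directly from the iterate and direction, which is what makes large steps cheap.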
Optimizing Eigenvalues of Symmetric Definite Pencils
 in Proceedings of the 1994 American Control Conference
, 1994
Abstract

Cited by 8 (0 self)
We consider the following quasiconvex optimization problem: minimize the largest eigenvalue of a symmetric definite matrix pencil depending on parameters. A new form of optimality conditions is given, emphasizing a complementarity condition on primal and dual matrices. Newton's method is then applied to these conditions to give a new quadratically convergent interior-point method which works well in practice. The algorithm is closely related to primal-dual interior-point methods for semidefinite programming.
1. Introduction
Many matrix inequality problems in control can be cast in the form: minimize the maximum eigenvalue of the Hermitian definite pencil (A(x), B(x)), w.r.t. a parameter vector x, subject to positive definite constraints on B(x) and sometimes also on other Hermitian matrix functions of x. The maximum eigenvalue is a quasiconvex function of the pencil elements and therefore of the parameter vector x if A, B depend affinely on x. This quasiconvexity reduces to convexity i...
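The objective in question, the largest eigenvalue of a pencil (A, B) with B positive definite, can be computed by reducing to a standard symmetric eigenproblem through the Cholesky factor of B. A minimal sketch (not the paper's interior-point method), assuming NumPy:

```python
import numpy as np

def lambda_max_pencil(A, B):
    """Largest lambda with A v = lambda B v, for symmetric A and SPD B."""
    L = np.linalg.cholesky(B)          # B = L L^T, L lower triangular
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T              # congruence: pencil (M, I) has same eigenvalues
    return np.linalg.eigvalsh(M)[-1]   # ascending order, so take the last

# Example pencil: generalized eigenvalues are 2/1 = 2 and 1/2 = 0.5.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
lam = lambda_max_pencil(A, B)
```

With A(x), B(x) affine in x, minimizing this quantity over x is the quasiconvex problem the paper addresses; the reduction above is only the function evaluation inside such a method.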
An Inexact TrustRegion FeasiblePoint Algorithm for Nonlinear Systems of Equalities and Inequalities
 Department of Computational and Applied Mathematics, Rice University
, 1995
Abstract

Cited by 4 (0 self)
In this work we define a trust-region feasible-point algorithm for approximating solutions of the nonlinear system of equalities and inequalities F(x, y) = 0, y ≥ 0, where F: R^n × R^m → R^p is continuously differentiable. This formulation is quite general; the Karush-Kuhn-Tucker conditions of a general nonlinear programming problem are an obvious example, and a set of equalities and inequalities can be transformed, using slack variables, into such a form. We will be concerned with the possibility that n, m, and p may be large and that the Jacobian matrix may be sparse and rank deficient. Exploiting the convex structure of the local model trust-region subproblem, we propose a globally convergent inexact trust-region feasible-point algorithm to minimize an arbitrary norm of the residual, say ||F(x, y)||_a, subject to the nonnegativity constraints. This algorithm uses a trust-region globalization strategy to determine a descent direction as an inexact solution of the local model trust-region subproblem, and then uses line-search techniques to obtain an acceptable step length. We demonstrate that, under rather weak hypotheses, any accumulation point of the iteration sequence is a constrained stationary point for f = ||F||_a, and that the sequence of constrained residuals converges to zero.
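The slack-variable reformulation the abstract mentions is mechanical: an inequality g(x) ≤ 0 becomes the equality g(x) + y = 0 with a slack y ≥ 0, so mixed systems fit the form F(x, y) = 0, y ≥ 0. A hedged one-inequality illustration (the function g below is a made-up example, not from the paper):

```python
# Reformulate the inequality g(x) <= 0 as  g(x) + y = 0,  y >= 0.
def g(x):
    return x**2 - 4.0      # example inequality: x^2 <= 4

def F(x, y):
    return g(x) + y        # residual of the slack reformulation

# At a point satisfying the inequality strictly, the slack absorbs the gap.
x0 = 1.0                   # g(x0) = -3 <= 0, so x0 is feasible
y0 = -g(x0)                # y0 = 3 >= 0 makes the equality hold exactly
residual = F(x0, y0)
```

The algorithm then only ever has to handle equalities plus simple bounds, which is what its trust-region subproblem exploits.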
A Quadratically Convergent Polynomial Long-Step Algorithm For A Class Of Nonlinear Monotone Complementarity Problems
, 1999
Abstract

Cited by 1 (0 self)
Several interior point algorithms have been proposed for solving nonlinear monotone complementarity problems. Some of them have polynomial worst-case complexity but are confined to short steps, whereas others can take long steps but have no proven polynomial complexity. This paper presents an algorithm which is both long-step and polynomial. In addition, the sequence generated by the algorithm, as well as the corresponding complementarity gap, converges quadratically. The proof of the polynomial complexity requires that the monotone mapping satisfy a scaled Lipschitz condition, while the quadratic rate of convergence is derived under the assumptions that the problem has a strictly complementary solution and that the Jacobian of the mapping satisfies certain regularity conditions. Keywords: Complexity of Algorithms, Interior Point Methods, Monotone Complementarity Problems, Rate of Convergence.
1 The research is partially supported by Grant RP930033 of National Universi...
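The complementarity gap whose quadratic decrease the abstract claims is the inner product x^T F(x): a monotone complementarity problem asks for x ≥ 0 with F(x) ≥ 0 and x^T F(x) = 0, and interior methods drive the gap to zero from x > 0. A hedged illustration with a made-up affine monotone mapping (not the paper's algorithm):

```python
import numpy as np

# Monotone complementarity problem:  x >= 0,  F(x) >= 0,  x^T F(x) = 0.
def F(x):
    return x + 1.0              # simple monotone (affine) example mapping

x = np.array([1e-3, 2e-3])      # interior iterate, x > 0 and F(x) > 0
gap = float(x @ F(x))           # complementarity gap the algorithm shrinks
```

At a solution of this example, x = 0 and F(0) = (1, 1) > 0, so the gap vanishes; a quadratically convergent method would roughly square the gap's magnitude per iteration near the solution.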