Results 1 – 4 of 4
Explicit solutions for interval semidefinite linear programs
Linear Algebra Appl., 1996
Abstract

Cited by 9 (2 self)
We consider the special class of semidefinite linear programs (IVP): maximize trace CX subject to L ≼ A(X) ≼ U, where C, X, L, U are symmetric matrices, A is a surjective (onto) linear operator, and ≼ denotes the Löwner (positive semidefinite) partial order. We present explicit representations for the general primal and dual optimal solutions. This extends the results for standard linear programming that appeared in Ben-Israel and Charnes, 1968. This work is further motivated by the explicit solutions for a different class of semidefinite problems presented recently in Yang and Vanderbei, 1993.
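The Löwner order used in this constraint set is easy to test numerically: L ≼ M holds exactly when M − L is positive semidefinite. A minimal numpy sketch, with made-up matrices and with A(X) taken as the identity operator for simplicity (the paper's explicit solution formulas are not reproduced here):

```python
import numpy as np

def loewner_leq(P, Q, tol=1e-9):
    """Check P ≼ Q in the Löwner order: Q - P must be positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(Q - P) >= -tol))

# Made-up symmetric data for the interval constraint L ≼ X ≼ U.
C = np.array([[1.0, 0.0], [0.0, 2.0]])
L = np.zeros((2, 2))
U = np.eye(2)

X = 0.5 * np.eye(2)                 # a feasible point: L ≼ X ≼ U
print(loewner_leq(L, X))            # True
print(loewner_leq(X, U))            # True
print(np.trace(C @ X))              # objective value trace CX = 1.5
```

The eigenvalue test mirrors the definition of the partial order; a Cholesky-based check would also work for strict definiteness.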
Optimizing Eigenvalues of Symmetric Definite Pencils
In Proceedings of the 1994 American Control Conference, 1994
Abstract

Cited by 8 (0 self)
We consider the following quasiconvex optimization problem: minimize the largest eigenvalue of a symmetric definite matrix pencil depending on parameters. A new form of optimality conditions is given, emphasizing a complementarity condition on primal and dual matrices. Newton's method is then applied to these conditions to give a new quadratically convergent interior-point method which works well in practice. The algorithm is closely related to primal-dual interior-point methods for semidefinite programming.
1. Introduction. Many matrix inequality problems in control can be cast in the form: minimize the maximum eigenvalue of the Hermitian definite pencil (A(x), B(x)) with respect to a parameter vector x, subject to positive definite constraints on B(x) and sometimes also on other Hermitian matrix functions of x. The maximum eigenvalue is a quasiconvex function of the pencil elements and therefore of the parameter vector x if A, B depend affinely on x. This quasiconvexity reduces to convexity i...
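The quantity being minimized, the largest eigenvalue of a pencil (A, B) with B positive definite, can be evaluated by reducing the generalized eigenproblem A v = λ B v to a standard symmetric one. A small numpy sketch on a made-up pencil (this is just the objective evaluation, not the paper's interior-point method):

```python
import numpy as np

# Made-up symmetric pencil (A, B) with B positive definite.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 1.0]])

# With B = Lc Lc^T (Cholesky), the pencil eigenvalues equal the
# eigenvalues of the symmetric matrix Lc^{-1} A Lc^{-T}.
Lc = np.linalg.cholesky(B)
M = np.linalg.solve(Lc, np.linalg.solve(Lc, A).T)  # Lc^{-1} A Lc^{-T}
lam_max = np.linalg.eigvalsh(M)[-1]

# At lam_max, the matrix lam_max*B - A is positive semidefinite and singular.
print(lam_max)
```

This reduction is standard for definite pencils; quasiconvexity in the pencil entries is what makes bisection or interior-point schemes applicable on top of it.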
The long-step method of analytic centers for fractional problems
Mathematical Programming, 1997
Abstract

Cited by 6 (1 self)
We develop a long-step surface-following version of the method of analytic centers for the fractional-linear problem min {t0 : t0 B(x) − A(x) ∈ H, B(x) ∈ K, x ∈ G}, where H is a closed convex domain, K is a convex cone contained in the recessive cone of H, G is a convex domain, and B(·), A(·) are affine mappings. Tracing a two-dimensional surface of analytic centers rather than the usual path of centers allows one to skip the initial “centering” phase of the path-following scheme. The proposed long-step policy of tracing the surface fits the best known overall polynomial-time complexity bounds for the method and, at the same time, seems to be more attractive computationally than the short-step policy, which was previously the only one giving good complexity bounds.
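The level-set structure min {t0 : t0 B(x) − A(x) ∈ H, …} means feasibility at a fixed level t0 is a convex problem, which is the hook for any path- or surface-following scheme. A toy one-dimensional sketch, with made-up data A(x) = x² + 1, B(x) = x, G = [0.5, 4], using plain bisection on t0 rather than the paper's analytic-center method:

```python
import numpy as np

# Toy instance: minimize (x**2 + 1)/x on [0.5, 4], written as
# min { t0 : t0*x - (x**2 + 1) >= 0 for some x in G }.
xs = np.linspace(0.5, 4.0, 100001)

def feasible(t):
    # For fixed t, t*x - x**2 - 1 is concave in x, so this test is convex.
    return np.max(t * xs - xs**2 - 1.0) >= 0.0

lo, hi = 0.0, 10.0          # bracket: infeasible at lo, feasible at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid
    else:
        lo = mid
print(hi)                    # approaches the optimum t0 = 2, attained at x = 1
```

Bisection needs a feasibility oracle per level; the surface-following method of the paper avoids restarting that oracle from scratch at every level, which is the source of its better complexity.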
The Finite Criss-Cross Method for Hyperbolic Programming
Informatica, Technische Universiteit Delft, The Netherlands, 1996
Abstract
In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm. Key words: hyperbolic programming, pivoting, criss-cross method.
1 Introduction. The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation type functions (see for instance [12]). The objective function of the hyperbolic programming problem is neither linear nor convex; however, there are several ...
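The criss-cross pivoting itself is beyond a snippet, but the shape of a hyperbolic program, a ratio of two linear functions over linear constraints, can be sketched via the classical Charnes-Cooper substitution y = t·x, t = 1/(dᵀx + β), which maps the fractional objective to a linear one. All data below are made up; this illustrates the transformation, not the paper's algorithm:

```python
import numpy as np

# Hyperbolic objective (c'x + alpha)/(d'x + beta) over A x <= b,
# assuming d'x + beta > 0 on the feasible set.
c, alpha = np.array([3.0, 1.0]), 2.0
d, beta = np.array([1.0, 2.0]), 1.0
A = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.array([4.0, 0.0])

x = np.array([1.0, 2.0])               # a feasible point: A @ x <= b
t = 1.0 / (d @ x + beta)               # Charnes-Cooper scaling
y = t * x

ratio = (c @ x + alpha) / (d @ x + beta)
linear = c @ y + alpha * t             # the transformed, linear objective
print(ratio, linear)                   # the two values coincide
```

Because the substitution preserves objective values and maps A x ≤ b to A y ≤ b t, the fractional problem can be attacked with linear-programming machinery; the criss-cross method instead pivots on the fractional problem directly.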