Method of centers for minimizing generalized eigenvalues
Linear Algebra Appl., 1993
Abstract

Cited by 65 (14 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior-point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is, in some cases, slightly sharper than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
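The quasiconvexity claim above is easy to probe numerically: along any line in the decision variables, the largest generalized eigenvalue of an affine pair (A(x), B(x)) with B(x) positive definite has sublevel sets that are intervals. The sketch below uses a made-up random 1-D affine family (A0, A1, B0, B1 are arbitrary test data, not from the paper) and checks that property on a sampled grid:

```python
import numpy as np

# Illustrative 1-D affine family A(x) = A0 + x*A1, B(x) = B0 + x*B1, with B(x)
# kept positive definite on the sampled range. All matrices here are made up.
rng = np.random.default_rng(0)
sym = lambda M: (M + M.T) / 2.0
A0 = sym(rng.standard_normal((4, 4)))
A1 = sym(rng.standard_normal((4, 4)))
B0 = 5.0 * np.eye(4)                       # dominant identity keeps B(x) > 0
B1 = 0.1 * sym(rng.standard_normal((4, 4)))

def lam_max(x):
    """Largest generalized eigenvalue of (A(x), B(x)), via B = L L^T."""
    A, B = A0 + x * A1, B0 + x * B1
    Linv = np.linalg.inv(np.linalg.cholesky(B))
    return np.linalg.eigvalsh(Linv @ A @ Linv.T)[-1]

xs = np.linspace(-1.0, 1.0, 201)
vals = np.array([lam_max(x) for x in xs])
# Quasiconvexity: {x : lam_max(x) <= t} = {x : A(x) - t*B(x) <= 0} is an LMI in
# x for each fixed t, hence convex -- so each sampled sublevel set is contiguous.
```

Note that the sublevel-set test below is exactly the LMI reformulation the paper exploits: feasibility of A(x) − tB(x) ⪯ 0 for fixed t is a convex problem.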
Optimizing dominant time constant in RC circuits
1996
Abstract

Cited by 16 (8 self)
We propose to use the dominant time constant of a resistor-capacitor (RC) circuit as a measure of the signal propagation delay through the circuit. We show that the dominant time constant is a quasiconvex function of the conductances and capacitances, and use this property to cast several interesting design problems as convex optimization problems, specifically semidefinite programs (SDPs). For example, assuming that the conductances and capacitances are affine functions of the design parameters (a common model in transistor or interconnect wire sizing), one can minimize the power consumption or the area subject to an upper bound on the dominant time constant, or compute the optimal tradeoff surface between power, dominant time constant, and area. We also note that, to a certain extent, convex optimization can be used to design the topology of the interconnect wires. This approach has two advantages over methods based on Elmore delay optimization. First, it handles a far wider class of circuits, e.g., those with non-grounded capacitors. Second, it always results in convex optimization problems for which very efficient interior-point methods have recently been developed. We illustrate the method, and extensions, with several examples involving optimal wire and transistor sizing.
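The LMI characterization behind the quasiconvexity claim can be checked directly: for dynamics C dv/dt = −G v, the time constants are the reciprocals of the generalized eigenvalues of (G, C), so the dominant time constant satisfies T_dom ≤ T exactly when T·G − C is positive semidefinite. A sketch with a made-up 3-node grounded RC ladder (element values are illustrative, not from the paper):

```python
import numpy as np

# Toy 3-node grounded RC ladder. G is the (tridiagonal) node conductance
# matrix, C the diagonal capacitance matrix; dynamics are C dv/dt = -G v.
g = [1.0, 2.0, 1.5, 0.5]                    # conductances along the ladder
G = np.array([[g[0] + g[1], -g[1],         0.0        ],
              [-g[1],        g[1] + g[2], -g[2]       ],
              [0.0,         -g[2],         g[2] + g[3]]])
C = np.diag([1.0, 2.0, 1.5])                # node capacitances (made up)

# T_dom = 1 / lambda_min(G, C), computed via the Cholesky factor of C.
Linv = np.linalg.inv(np.linalg.cholesky(C))
T_dom = 1.0 / np.linalg.eigvalsh(Linv @ G @ Linv.T)[0]
# LMI check: T*G - C >= 0 holds for T slightly above T_dom, fails below it.
```

With G and C affine in the design parameters, the constraint T·G(x) − C(x) ⪰ 0 is exactly the kind of LMI constraint that makes the sizing problems above SDPs.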
Control System Analysis And Synthesis Via Linear Matrix Inequalities
1993
Abstract

Cited by 13 (1 self)
A wide variety of problems in systems and control theory can be cast or recast as convex problems that involve linear matrix inequalities (LMIs). For a few very special cases there are "analytical solutions" to these problems, but in general they can be solved numerically very efficiently. In many cases the inequalities have the form of simultaneous Lyapunov or algebraic Riccati inequalities; such problems can be solved in a time that is comparable to the time required to solve the same number of Lyapunov or algebraic Riccati equations. Therefore the computational cost of extending current control theory, which is based on the solution of algebraic Riccati equations, to a theory based on the solution of (multiple, simultaneous) Lyapunov or Riccati inequalities is modest. Examples include: multicriterion LQG, synthesis of linear state feedback for multiple or nonlinear plants ("multimodel control"), optimal transfer matrix realization, norm scaling, synthesis of multipliers for Popov-like ...
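As a minimal illustration of the Lyapunov machinery mentioned above, the sketch below certifies the strict Lyapunov inequality A^T P + P A < 0, P > 0, by solving the corresponding Lyapunov equation A^T P + P A = −Q via Kronecker vectorization (the plant matrix A here is a made-up stable example, not from the article):

```python
import numpy as np

# Solve the Lyapunov equation A^T P + P A = -Q for a hypothetical stable
# plant, using vec(A X B) = (B^T kron A) vec(X) with column-major vec.
# A solution P > 0 certifies the strict inequality A^T P + P A < 0.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])                # made-up stable A (eigenvalues -1, -3)
n = A.shape[0]
Q = np.eye(n)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
p = np.linalg.solve(K, -Q.flatten(order="F"))
P = p.reshape((n, n), order="F")            # symmetric positive definite for stable A
```

Solving *inequalities* rather than equations (as the abstract describes) replaces this linear solve with an SDP, at a comparable computational cost per Lyapunov/Riccati constraint.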
Convex conic formulations of robust downlink precoder designs with quality of service constraints
IEEE J. Select. Topics Signal Processing, 2007
Abstract

Cited by 11 (1 self)
We consider the design of linear precoders (beamformers) for broadcast channels with Quality of Service (QoS) constraints for each user, in scenarios with uncertain channel state information (CSI) at the transmitter. We consider a deterministically bounded model for the channel uncertainty of each user, and our goal is to design a robust precoder that minimizes the total transmission power required to satisfy the users' QoS constraints for all channels within a specified uncertainty region around the transmitter's estimate of each user's channel. Since this problem is not known to be computationally tractable, we derive three conservative design approaches that yield convex and computationally efficient restrictions of the original design problem. The three approaches yield semidefinite program (SDP) formulations that offer different tradeoffs between the degree of conservatism and the size of the SDP. We also show how these conservative approaches can be used to derive efficiently solvable quasiconvex restrictions of some related design problems, including the robust counterpart to the problem of maximizing the minimum signal-to-interference-plus-noise ratio (SINR) subject to a given power constraint. Our simulation results indicate that in the presence of uncertain CSI the proposed approaches can satisfy the users' QoS requirements for a significantly larger set of uncertainties than existing methods, and require less transmission power to do so.
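The deterministically bounded uncertainty model above can be illustrated by brute force: fix some (non-robust) beamformers, then sample channels in a norm ball around each user's nominal channel and record the worst SINR. Everything below is made-up toy data for illustration; an actual robust design would come from the SDP restrictions the paper derives:

```python
import numpy as np

# Toy 2-user MISO downlink with arbitrary fixed beamformers. We evaluate how
# much each user's SINR can degrade when the true channel lies anywhere in a
# norm ball of radius eps around the nominal channel (sampled, not exact).
rng = np.random.default_rng(1)
Nt, sigma2, eps = 4, 0.1, 0.2
H = rng.standard_normal((2, Nt))            # rows: nominal user channels (made up)
W = rng.standard_normal((Nt, 2))            # columns: fixed non-robust beamformers

def sinr(h, k):
    num = (h @ W[:, k]) ** 2
    den = sigma2 + sum((h @ W[:, j]) ** 2 for j in range(2) if j != k)
    return num / den

def worst_sinr(k, samples=500):
    vals = [sinr(H[k], k)]                  # zero perturbation is in the ball
    for _ in range(samples):
        d = rng.standard_normal(Nt)
        d *= eps / np.linalg.norm(d)        # perturbation on the uncertainty sphere
        vals.append(sinr(H[k] + d, k))
    return min(vals)

nominal = [sinr(H[k], k) for k in range(2)]
worst = [worst_sinr(k) for k in range(2)]   # worst-case SINR over sampled ball
```

The robust formulations in the paper guarantee the QoS targets for the *entire* uncertainty region; sampling, as here, only lower-bounds the degradation.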
Fast Algorithms for L∞ Problems in Multiview Geometry
Abstract

Cited by 8 (1 self)
Many problems in multiview geometry, when posed as minimization of the maximum reprojection error across observations, can be solved optimally in polynomial time. We show that these problems are instances of a convex-concave generalized fractional program. We survey the major solution methods for solving problems of this form and present them in a unified framework centered around a single parametric optimization problem. We propose two new algorithms and show that the algorithm proposed by Olsson et al. [21] is a special case of a classical algorithm for generalized fractional programming. The performance of all the algorithms is compared on a variety of datasets, and the algorithm proposed by Gugat [12] stands out as a clear winner. An open source MATLAB toolbox that implements all the algorithms presented here is made available.
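The classical scheme alluded to above is the Dinkelbach-type iteration for generalized fractional programs min_x max_i f_i(x)/g_i(x), built around the parametric subproblem F(λ) = min_x max_i (f_i(x) − λ g_i(x)). A minimal sketch on a made-up 1-D instance, with a grid search standing in for the inner subproblem:

```python
import numpy as np

# Dinkelbach iteration on a toy two-ratio instance over a 1-D grid.
# All functions below are arbitrary illustrations (g_i > 0 on the grid).
xs = np.linspace(0.1, 4.0, 2000)
F = np.stack([(xs - 1.0) ** 2 + 1.0,        # f_1
              0.5 * xs ** 2 + 2.0])         # f_2
G = np.stack([xs, xs + 1.0])                # g_1, g_2

lam = (F / G).max(axis=0)[0]                # start from any feasible ratio value
for _ in range(50):
    # inner parametric problem: minimize max_i f_i(x) - lam * g_i(x)
    j = np.argmin((F - lam * G).max(axis=0))
    new_lam = (F[:, j] / G[:, j]).max()     # ratio at the inner minimizer
    if abs(new_lam - lam) < 1e-12:          # F(lam) ~ 0  =>  lam is optimal
        break
    lam = new_lam

brute = (F / G).max(axis=0).min()           # grid optimum, for reference
```

The iterates λ_k decrease monotonically to the optimal value; on this finite grid the iteration terminates exactly at the grid optimum.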
The long-step method of analytic centers for fractional problems
Mathematical Programming, 1997
Abstract

Cited by 6 (1 self)
We develop a long-step surface-following version of the method of analytic centers for the fractional-linear problem min { t0 : t0 B(x) − A(x) ∈ H, B(x) ∈ K, x ∈ G }, where H is a closed convex domain, K is a convex cone contained in the recessive cone of H, G is a convex domain, and B(·), A(·) are affine mappings. Tracing a two-dimensional surface of analytic centers rather than the usual path of centers allows one to skip the initial "centering" phase of the path-following scheme. The proposed long-step policy of tracing the surface fits the best known overall polynomial-time complexity bounds for the method and, at the same time, seems to be more attractive computationally than the short-step policy, which was previously the only one giving good complexity bounds.
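One building block above, the analytic center itself, is easy to sketch: Newton's method on the log-barrier of a polytope {x : a_i^T x < b_i}. The polytope below is an arbitrary toy instance (damped line search is omitted, which is safe for this well-centered start, but a real implementation would include it):

```python
import numpy as np

# Analytic center of {x : A x < b}: minimize phi(x) = -sum_i log(b_i - a_i^T x)
# by Newton's method. The constraint data below are made up for illustration.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0, 1.5])

x = np.zeros(2)                             # strictly feasible starting point
for _ in range(50):
    s = b - A @ x                           # slacks, must remain positive
    grad = A.T @ (1.0 / s)                  # gradient of the log-barrier
    H = A.T @ np.diag(1.0 / s ** 2) @ A     # Hessian of the log-barrier
    step = np.linalg.solve(H, grad)
    x = x - step                            # full Newton step (damping omitted)
    if np.linalg.norm(step) < 1e-12:
        break
```

The surface-following method of the paper traces a two-parameter family of such centers for barriers over H, K, and G rather than re-centering from scratch.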
Inversion Error, Condition Number, And Approximate Inverses Of Uncertain Matrices
Linear Algebra and its Applications, 2000
Abstract

Cited by 6 (1 self)
The classical condition number is a very rough measure of the effect of perturbations on the inverse of a square matrix. First, it assumes the perturbation is infinitesimally small. Second, it does not take into account the perturbation structure (e.g., Vandermonde). Similarly, the classical notion of the inverse of a matrix neglects the possibility of large, structured perturbations. We define a new quantity, the structured maximal inversion error, that takes into account both the structure and the not necessarily small size of the perturbation. When the perturbation is infinitesimal, we obtain a "structured condition number". We introduce the notion of an approximate inverse, as a matrix that best approximates the inverse of a matrix with structured perturbations, when the perturbation varies in a given range. For a wide class of perturbation structures, we show how to use (convex) semidefinite programming to compute bounds on the structured maximal inversion error and the structured condition number, and to compute an approximate inverse. The results are exact when the perturbation is "unstructured"; we then obtain an analytic expression for the approximate inverse. When the perturbation is unstructured and additive, we recover the classical condition number; the approximate inverse is the operator related to the Total Least Squares (orthogonal regression) problem.
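The gap between structured and unstructured worst-case inversion error can be seen by sampling. Here the "structure" is an assumed toy one (only the (0,1) entry of A may vary), not one from the paper; by construction the structured samples are included among the unstructured ones, so the structured worst case can only be smaller:

```python
import numpy as np

# Worst-case inversion error ||(A + D)^-1 - A^-1|| over structured vs
# unstructured perturbations of spectral norm at most rho. A and rho are
# made-up; rho is deliberately not small, unlike the classical condition number.
rng = np.random.default_rng(2)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Ainv = np.linalg.inv(A)
rho = 0.5                                   # perturbation bound; sigma_min(A) > rho

def inv_err(D):
    return np.linalg.norm(np.linalg.inv(A + D) - Ainv, 2)

# Structured set: only entry (0,1) varies, |t| <= rho.
structured = [inv_err(np.array([[0.0, t], [0.0, 0.0]]))
              for t in np.linspace(-rho, rho, 101)]

# Unstructured ball contains the structured set, plus random norm-ball samples.
unstructured = list(structured)
for _ in range(500):
    D = rng.standard_normal((2, 2))
    D *= rho * rng.uniform() / np.linalg.norm(D, 2)
    unstructured.append(inv_err(D))
```

The paper's contribution is to *bound* the structured worst case by semidefinite programming rather than sample it, which is what makes the quantity computable for large structured classes.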
Algorithms and Software for LMI Problems in Control
IEEE Control Systems Magazine, 1997
Abstract

Cited by 4 (0 self)
The purpose of this article is to provide an overview of the state of the art of ...
Ghaoui, Computing bounds for the structured singular value via an interior point algorithm
In Proc. American Control Conf., 1992
Abstract

Cited by 4 (4 self)
We describe an interior point algorithm for computing the upper bound for the structured singular value described in [1]. We demonstrate the performance of the algorithm on a simple example.

1. Notation

R (C) stands for the set of real (complex) numbers. R^(m×n) (C^(m×n)) stands for the set of real (complex) m×n matrices. For M ∈ C^(m×m), det(M) stands for the determinant, σ_max(M) for the maximum singular value, and M* for the complex conjugate of the transpose of M. I_n stands for the n×n identity matrix.
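A standard upper bound of this kind (assumed here; the exact bound of [1] is not restated in the abstract) is the D-scaling bound μ(M) ≤ inf over positive diagonal D of σ_max(D M D⁻¹). The interior point algorithm optimizes over D; the sketch below just scans a single scaling parameter for a made-up 2×2 example:

```python
import numpy as np

# D-scaling upper bound for a 2-block diagonal uncertainty structure:
#   mu(M) <= min over D = diag(d, 1), d > 0, of sigma_max(D M D^-1).
# M is made up; a crude geometric scan stands in for the interior point solver.
M = np.array([[1.0,  4.0],
              [0.25, 1.0]])
smax = lambda X: np.linalg.svd(X, compute_uv=False)[0]

best = smax(M)                              # d = 1 recovers the unscaled bound
for d in np.geomspace(0.05, 20.0, 400):
    D = np.diag([d, 1.0])
    best = min(best, smax(D @ M @ np.linalg.inv(D)))
# For this M the off-diagonals balance at d = 0.25, giving the bound 2.0.
```

For more than a couple of uncertainty blocks this one-parameter scan is hopeless, which is exactly why a polynomial-time interior point method over the full set of scalings matters.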