Results 11–20 of 267
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
 Mathematical Programming
, 1995
Abstract
Cited by 92 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
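The second saving described above — stopping the conjugate-gradient inner solver early — can be illustrated with a minimal numpy sketch. The matrix A and the stopping rule here are illustrative only, not the paper's actual least-squares system:

```python
import numpy as np

def truncated_cg(A, b, max_iter=10, tol=1e-8):
    """Conjugate-gradient iteration for A x = b (A symmetric positive
    definite), stopped after max_iter steps or when the residual is small.
    Early termination returns an approximate solution, mirroring the idea
    of not running CG to completion inside each outer iteration."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Well-conditioned test system: a few CG steps already give a good answer.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)   # SPD with modest condition number
b = rng.standard_normal(50)
x_approx = truncated_cg(A, b, max_iter=10)
x_exact = np.linalg.solve(A, b)
```

For well-conditioned systems CG's error contracts geometrically, so ten steps already land close to the direct solution at a fraction of the cost.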
A robust minimax approach to classification
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2002
Abstract
Cited by 72 (7 self)
When constructing a classifier, the probability of correct classification of future data points should be maximized. We consider a binary classification problem where the mean and covariance matrix of each class are assumed to be known. No further assumptions are made with respect to the class-conditional distributions. Misclassification probabilities are then controlled in a worst-case setting: that is, under all possible choices of class-conditional densities with given mean and covariance matrix, we minimize the worst-case (maximum) probability of misclassification of future data points. For a linear decision boundary, this desideratum is translated in a very direct way into a (convex) second-order cone optimization problem, with complexity similar to a support vector machine problem. The minimax problem can be interpreted geometrically as minimizing the maximum of the Mahalanobis distances to the two classes. We address the issue of robustness with respect to estimation errors (in the means and covariances of the ...
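The convex reformulation sketched in this abstract is small enough to try directly. The sketch below uses illustrative moments (the numbers are not from the paper) and a generic constrained solver in place of a second-order cone program:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import sqrtm

# Known class moments (illustrative numbers, not from the paper).
mu1, mu2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
S1 = np.array([[1.0, 0.3], [0.3, 0.5]])
S2 = np.array([[0.8, -0.2], [-0.2, 1.2]])
R1, R2 = np.real(sqrtm(S1)), np.real(sqrtm(S2))

# The minimax problem reduces to:
#   minimize ||S1^(1/2) a|| + ||S2^(1/2) a||  s.t.  a^T (mu1 - mu2) = 1.
# It is convex; the paper poses it as a second-order cone program, while
# here we simply hand it to a generic solver.
obj = lambda a: np.linalg.norm(R1 @ a) + np.linalg.norm(R2 @ a)
con = {"type": "eq", "fun": lambda a: a @ (mu1 - mu2) - 1.0}
res = minimize(obj, x0=np.array([0.5, 0.0]), constraints=[con])
a = res.x

# Worst-case misclassification bound: with kappa = 1 / (optimal value),
# the worst-case error probability is at most 1 / (1 + kappa^2).
kappa = 1.0 / obj(a)
worst_case_error = 1.0 / (1.0 + kappa**2)
b = a @ mu1 - kappa * np.linalg.norm(R1 @ a)  # decision boundary a^T x = b
```

The resulting hyperplane a^T x = b sits where the two worst-case Mahalanobis "balls" around the class means touch, which is exactly the geometric interpretation mentioned in the abstract.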
Optimal design of a CMOS opamp via geometric programming
 IEEE Transactions on Computer-Aided Design
, 2001
Abstract
Cited by 66 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal trade-offs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal trade-off curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...
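The reason geometric programs can be solved globally is the standard log-change-of-variables, which turns posynomials into convex log-sum-exp expressions. A toy GP (not the op-amp problem itself) makes the mechanics concrete:

```python
import numpy as np
from scipy.optimize import minimize

# Toy GP in standard form (not the op-amp design problem):
#   minimize   x1 + x2              (posynomial objective)
#   subject to x1^-1 * x2^-1 <= 1   (posynomial constraint, i.e. x1*x2 >= 1)
# Substituting x_i = exp(y_i) turns every posynomial into a log-sum-exp
# expression, giving a convex problem that a local solver handles globally.
def f(y):
    return np.log(np.exp(y[0]) + np.exp(y[1]))      # log of the objective

con = {"type": "ineq", "fun": lambda y: y[0] + y[1]}  # -y1 - y2 <= 0
res = minimize(f, x0=np.array([1.0, -0.5]), constraints=[con])
x_opt = np.exp(res.x)   # map back to the original variables
# By the AM-GM inequality the optimum is x1 = x2 = 1, objective value 2.
```

Any local minimum of the transformed problem is global, which is why the abstract can promise globally optimal amplifier designs rather than merely locally optimal ones.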
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
Abstract
Cited by 63 (12 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior-point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
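The quasiconvexity claimed in the abstract can be seen in a tiny instance. The matrices below are illustrative (not from the paper), and a golden-section search stands in for the interior-point method, exploiting only the fact that a quasiconvex function of one variable has no spurious local minima:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative instance: A(x), B(x) affine in a scalar x, with B(x)
# positive definite on the interval of interest.
A0 = np.array([[2.0, 0.5], [0.5, 1.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
B0 = np.eye(2)
B1 = 0.1 * np.array([[1.0, 0.2], [0.2, 1.0]])

def lam_max(x):
    # Largest generalized eigenvalue of the pair (A(x), B(x)).
    return eigh(A0 + x * A1, B0 + x * B1, eigvals_only=True)[-1]

# lam_max is quasiconvex in x (its sublevel sets are LMI-feasible and
# hence convex), so a golden-section search locates its minimum.
lo, hi = -3.0, 3.0
phi = (np.sqrt(5) - 1) / 2
for _ in range(80):
    m1 = hi - phi * (hi - lo)
    m2 = lo + phi * (hi - lo)
    if lam_max(m1) < lam_max(m2):
        hi = m2
    else:
        lo = m1
x_star = 0.5 * (lo + hi)
```

In more than one variable this line-search trick no longer applies, which is where the bisection-plus-LMI-feasibility and method-of-centers machinery of the paper comes in.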
Robust Solutions To Uncertain Semidefinite Programs
, 1998
Abstract
Cited by 62 (3 self)
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown-but-bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for all possible values of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist, as SDPs. When the perturbation is "full", our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation and integer programming.
A Cone Complementarity Linearization Algorithm for Static Output-Feedback and Related Problems
 IEEE Transactions on Automatic Control
, 1997
Abstract
Cited by 57 (0 self)
Abstract—This paper describes a linear matrix inequality (LMI)-based algorithm for the static and reduced-order output-feedback synthesis problems of nth-order linear time-invariant (LTI) systems with nu (respectively, ny) independent inputs (respectively, outputs). The algorithm is based on a “cone complementarity” formulation of the problem and is guaranteed to produce a stabilizing controller of order m ≤ n − max(nu, ny), matching a generic stabilizability result of Davison and Chatterjee [7]. Extensive numerical experiments indicate that the algorithm finds a controller with order less than or equal to that predicted by Kimura’s generic stabilizability result (m ≤ n − nu − ny + 1). A similar algorithm can be applied to a variety of control problems, including robust control synthesis. Index Terms—Complementarity problem, linear matrix inequality, reduced-order stabilization, static output feedback.
Symmetric Primal-Dual Path Following Algorithms for Semidefinite Programming
, 1996
Abstract
Cited by 56 (10 self)
In this paper a symmetric primal-dual transformation for positive semidefinite programming is proposed. For standard SDP problems, after this symmetric transformation the primal variables and the dual slacks become identical. In the context of linear programming, the existence of such a primal-dual transformation is a well-known fact. Based on this symmetric primal-dual transformation we derive Newton search directions for primal-dual path-following algorithms for semidefinite programming. In particular, we generalize: (1) the short-step path-following algorithm, (2) the predictor-corrector algorithm and (3) the largest-step algorithm to semidefinite programming. It is shown that these algorithms require at most O(√n |log ε|) main iterations for computing an ε-optimal solution. The symmetric primal-dual transformation discussed in this paper can be interpreted as a specialization of the scaling-point concept introduced by Nesterov and Todd [12] for self-scaled conic problems. The ...
An interior-point method for large-scale ℓ1-regularized logistic regression
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2007
Abstract
Cited by 56 (4 self)
Recently, a lot of attention has been paid to ℓ1-regularization based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs and then solved by several standard methods such as interior-point methods, at least for small and medium-size problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems, which arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
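The ℓ1-regularized least-squares objective the abstract refers to is easy to write down and minimize on a toy instance. The sketch below uses a plain proximal-gradient (ISTA) iteration as a compact stand-in; the paper's contribution is a much faster specialized interior-point method for the same objective, and the data here are synthetic:

```python
import numpy as np

def ista(A, b, lam, step=None, iters=500):
    """Proximal-gradient (ISTA) iteration for the l1-regularized
    least-squares objective 0.5*||Ax - b||^2 + lam*||x||_1.
    Used here only to illustrate the objective; it is not the
    interior-point method described in the paper."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of smooth part
        z = x - step * g
        # Soft-thresholding: the proximal operator of lam*||.||_1.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Sparse-recovery toy problem: three nonzeros, mild noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = ista(A, b, lam=0.5)
```

On this well-conditioned instance the ℓ1 penalty recovers the true support; the interior-point/PCG machinery of the paper matters when the same objective has 10^6 variables rather than 50.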
Derivatives of Spectral Functions
, 1996
Abstract
Cited by 50 (12 self)
A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X, λ1(X) ≥ λ2(X) ≥ … ≥ λn(X), and hence may be written f(λ1(X), λ2(X), …, λn(X)) for some symmetric function f. Such functions appear in a wide variety of matrix optimization problems. We give a simple proof that this spectral function is differentiable at X if and only if the function f is differentiable at the vector λ(X), and we give a concise formula for the derivative. We then apply this formula to deduce an analogous expression for the Clarke generalized gradient of the spectral function. A similar result holds for real symmetric matrices. 1 Introduction and notation Optimization problems involving a symmetric matrix variable, X say, frequently involve symmetric functions of the eigenvalues of X in the objective or constraints. Examples include the maximum eigenvalue of X, or log(det X) (for positive definite X), or eigenvalue constraints such as positive semidefinit...
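The derivative formula for spectral functions, ∇F(X) = U diag(f′(λ)) Uᵀ where X = U diag(λ) Uᵀ, can be sanity-checked numerically on the log det example mentioned above. This is only an illustration of the formula, not the paper's proof:

```python
import numpy as np

# Spectral-function derivative formula:
#   F(X) = f(eig(X))  =>  grad F(X) = U @ diag(grad f(lam)) @ U.T,
# checked for f(lam) = sum(log(lam)), i.e. F(X) = log det X,
# whose gradient is known in closed form to be X^{-1}.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
X = M @ M.T + 4 * np.eye(4)            # symmetric positive definite

lam, U = np.linalg.eigh(X)
grad_spectral = U @ np.diag(1.0 / lam) @ U.T   # f'(lam_i) = 1 / lam_i
grad_closed = np.linalg.inv(X)                 # known gradient of log det

# Finite-difference check in a random symmetric direction V.
V = rng.standard_normal((4, 4))
V = (V + V.T) / 2
eps = 1e-6
fd = (np.linalg.slogdet(X + eps * V)[1]
      - np.linalg.slogdet(X - eps * V)[1]) / (2 * eps)
```

Both the eigendecomposition formula and the finite difference agree with X⁻¹, matching the "concise formula for the derivative" the abstract promises for this particular f.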
Power control by geometric programming
 IEEE Trans. on Wireless Commun
, 2005
Abstract
Cited by 48 (5 self)
Abstract — In wireless cellular or ad hoc networks where Quality of Service (QoS) is interference-limited, a variety of power control problems can be formulated as nonlinear optimization with a system-wide objective, e.g., maximizing the total system throughput or the worst user throughput, subject to QoS constraints from individual users, e.g., on data rate, delay, and outage probability. We show that in the high Signal-to-Interference Ratio (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming; hence they can be very efficiently solved for global optimality even with a large number of users. In the medium to low SIR regime, some of these constrained nonlinear power control problems cannot be turned into tractable convex formulations, but a heuristic can be used to compute in most cases the optimal solution by solving a series of geometric programs through the approach of successive convex approximation. While efficient and robust algorithms have been extensively studied for centralized solutions of geometric programs, distributed algorithms have not been explored before. We present a systematic method of distributed algorithms for power control that is geometric-programming-based. These techniques for power control, together with their implications for admission control and pricing in wireless networks, are illustrated through several numerical examples. Index Terms — Convex optimization, CDMA power control, Distributed algorithms.
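The high-SIR convexification can be demonstrated on a small centralized instance. The gain matrix and noise level below are illustrative (not from the paper), and a generic bounded solver stands in for a GP solver; after substituting p_i = exp(y_i), each −log SIR term is a log-sum-exp of affine functions of y, so the problem is convex:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3-user instance (illustrative link gains, not from the paper).
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])   # G[i, j]: gain from transmitter j to receiver i
sigma = 0.1                       # receiver noise power
p_max = 1.0                       # per-user power cap

def sir(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + sigma
    return signal / interference

# High-SIR objective: maximize sum(log SIR_i). With p_i = exp(y_i) the
# negated objective is convex in y, so a local solver finds the global
# optimum over the box constraint p_i <= p_max.
def neg_sum_log_sir(y):
    return -np.sum(np.log(sir(np.exp(y))))

bounds = [(-10.0, np.log(p_max))] * 3
res = minimize(neg_sum_log_sir, x0=np.full(3, -1.0), bounds=bounds)
p_opt = np.exp(res.x)
```

On this symmetric, lightly loaded instance the optimum pushes every user to the power cap; with tighter QoS coupling the trade-offs become nontrivial, which is where the GP formulation and the paper's distributed algorithms earn their keep.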