Results 1–10 of 32
Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming
 Journal of the ACM
, 1995
"... We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution ..."
Abstract

Cited by 958 (14 self)
We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least 0.87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of ...
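The rounding technique described in this abstract can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' code: it assumes the semidefinite program has already been solved (here random unit vectors stand in for an actual SDP solution on a small example graph) and shows only the random-hyperplane rounding step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example graph: a 4-cycle given as an edge list (assumed input).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Stand-in for an SDP solution: one unit vector per vertex.
# A real implementation would obtain these from an SDP solver.
V = rng.normal(size=(n, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def round_cut(V, edges, rng):
    """Round the vectors to a cut with a random hyperplane through the origin."""
    r = rng.normal(size=V.shape[1])   # random hyperplane normal
    side = V @ r >= 0                 # which side each vertex falls on
    cut = sum(1 for (i, j) in edges if side[i] != side[j])
    return side, cut

# Repeat the randomized rounding and keep the best cut found.
best = max(round_cut(V, edges, rng)[1] for _ in range(100))
print("best cut value over 100 roundings:", best)
```

With vectors from a solved SDP (rather than the random stand-ins above), each rounding achieves expected cut value at least 0.87856 times the optimum.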
The Dense k-Subgraph Problem
 Algorithmica
, 1999
"... This paper considers the problem of computing the dense kvertex subgraph of a given graph, namely, the subgraph with the most edges. An approximation algorithm is developed for the problem, with approximation ratio O(n ffi ), for some ffi ! 1=3. 1 Introduction We study the dense ksubgraph (D ..."
Abstract

Cited by 164 (7 self)
This paper considers the problem of computing the dense k-vertex subgraph of a given graph, namely, the subgraph with the most edges. An approximation algorithm is developed for the problem, with approximation ratio O(n^δ), for some δ < 1/3. We study the dense k-subgraph (DkS) maximization problem of computing the dense k-vertex subgraph of a given graph. That is, on input a graph G and a parameter k, we are interested in finding a set of k vertices with maximum average degree in the subgraph induced by this set. As this problem is NP-hard (say, by reduction from Clique), we consider approximation algorithms for it. We obtain a polynomial-time algorithm that on any input (G, k) returns a subgraph of size k whose average degree is within a factor of at most n^δ from the optimum solution, where n is the number of vertices in the input graph G, and δ < 1/3 is some universal constant. Unfortunately, we are unable to present a complementary negati...
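To make the objective concrete, here is a minimal sketch that evaluates the quantity being approximated, the average degree of an induced k-vertex subgraph, using a naive highest-degree heuristic. The graph, the value of k, and the heuristic itself are illustrative assumptions; this is not the paper's O(n^δ)-approximation algorithm.

```python
# Naive baseline for dense k-subgraph (illustration only): keep the k
# vertices of highest degree and report the average degree of the
# subgraph they induce.

adj = {            # assumed example graph: a 4-clique plus one extra edge
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2},
    4: {5},
    5: {4},
}
k = 3

# Pick the k highest-degree vertices.
chosen = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
S = set(chosen)

# Count edges inside the induced subgraph (each edge seen twice).
internal_edges = sum(1 for v in S for u in adj[v] if u in S) // 2
avg_degree = 2 * internal_edges / k
print("chosen:", sorted(S), "average degree:", avg_degree)
```

On this example the heuristic picks three clique vertices, giving average degree 2; in general such greedy choices can be far from optimal, which is why the approximation ratio in the paper requires more work.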
The Complex Structured Singular Value
, 1993
"... A tutorial introduction to the complex structured singular value (µ) is presented, with an emphasis on the mathematical aspects of µ. The µbased methods discussed here have been useful for analyzing the performance and robustness properties of linear feedback systems. Several tests ..."
Abstract

Cited by 119 (10 self)
A tutorial introduction to the complex structured singular value (µ) is presented, with an emphasis on the mathematical aspects of µ. The µ-based methods discussed here have been useful for analyzing the performance and robustness properties of linear feedback systems. Several tests ...
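As a small illustration of the quantity under discussion: for the simplest uncertainty structure, a single full complex block, µ(M) coincides with the largest singular value of M, which is easy to compute. The matrix below is an assumed example, not taken from the tutorial.

```python
import numpy as np

# For one full complex uncertainty block, mu(M) = sigma_max(M).
# M is an assumed 2x2 example.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])

mu_value = np.linalg.svd(M, compute_uv=False)[0]  # largest singular value
print("mu(M) for a single full block:", mu_value)
```

For richer block structures µ is in general strictly smaller than the largest singular value and is hard to compute exactly, which motivates the bounds and tests surveyed in the tutorial.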
Some Applications of Laplace Eigenvalues of Graphs
 Graph Symmetry: Algebraic Methods and Applications, Volume 497 of NATO ASI Series C
, 1997
"... In the last decade important relations between Laplace eigenvalues and eigenvectors of graphs and several other graph parameters were discovered. In these notes we present some of these results and discuss their consequences. Attention is given to the partition and the isoperimetric properties of ..."
Abstract

Cited by 93 (0 self)
In the last decade important relations between Laplace eigenvalues and eigenvectors of graphs and several other graph parameters were discovered. In these notes we present some of these results and discuss their consequences. Attention is given to the partition and isoperimetric properties of graphs, the max-cut problem and its relation to semidefinite programming, rapid mixing of Markov chains, and extensions of the results to infinite graphs.
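A minimal numerical illustration of the objects in this abstract, with an assumed example graph (the path on four vertices): build the Laplacian L = D - A and inspect its spectrum. The smallest eigenvalue is always 0, and the second-smallest, the algebraic connectivity, is positive exactly when the graph is connected.

```python
import numpy as np

# Adjacency matrix of the path graph P4 (assumed example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Laplacian: degree matrix minus adjacency matrix.
L = np.diag(A.sum(axis=1)) - A

# eigvalsh returns eigenvalues of a symmetric matrix in ascending order.
eigvals = np.linalg.eigvalsh(L)
print("Laplacian spectrum:", eigvals)
```

The eigenvector for the second-smallest eigenvalue (the Fiedler vector) is the basis of the spectral partitioning methods the notes discuss.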
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
"... We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fr ..."
Abstract

Cited by 65 (14 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior-point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
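The objective function in this abstract is easy to evaluate at a fixed point: the largest generalized eigenvalue λ_max(A, B) of a symmetric pair with B positive definite is the largest λ with det(A - λB) = 0. The sketch below evaluates it for assumed example matrices (not data from the paper) by reducing to an ordinary symmetric eigenproblem.

```python
import numpy as np

# Assumed example pair: A symmetric, B symmetric positive definite.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# With the Cholesky factorization B = C C^T, the generalized eigenvalues
# of (A, B) equal the ordinary eigenvalues of C^{-1} A C^{-T}.
C = np.linalg.cholesky(B)
Ci = np.linalg.inv(C)
lam_max = np.linalg.eigvalsh(Ci @ A @ Ci.T)[-1]
print("largest generalized eigenvalue:", lam_max)
```

The method of centers in the paper minimizes this quantity as the matrices vary affinely with the decision variables, which is the hard part; the evaluation step above is the easy inner computation.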
Rank Minimization under LMI constraints: A Framework for Output Feedback Problems
, 1993
"... Convex optimisation techniques for solving linear matrix inequalities have been recently applied to robust statefeedback synthesis for uncertain systems. The approach does not seem to extend easily to the output feedback problem. In this paper, we propose a single framework for addressing a number ..."
Abstract

Cited by 27 (2 self)
Convex optimisation techniques for solving linear matrix inequalities have recently been applied to robust state-feedback synthesis for uncertain systems. The approach does not seem to extend easily to the output feedback problem. In this paper, we propose a single framework for addressing a number of output feedback stabilization problems for LTI systems. This includes static output feedback stabilization, dynamic reduced-order output-feedback stabilization, reduced-order H∞ synthesis, and synthesis with constant scalings. Keywords: output feedback stabilization, linear matrix inequalities, convex optimization, H∞ synthesis.
A Predictor-Corrector Method for Semidefinite Linear Programming
, 1995
"... In this paper we present a generalization of the predictor corrector method of linear programming problem to semidefinite linear programming problem. We consider a direction which, we show, belongs to a family of directions presented by Kojima, Shindoh and Hara, and, one of the directions analyzed b ..."
Abstract

Cited by 25 (1 self)
In this paper we present a generalization of the predictor-corrector method for linear programming to semidefinite linear programming. We consider a direction which, we show, belongs to a family of directions presented by Kojima, Shindoh and Hara, and is one of the directions analyzed by Monteiro. We show that starting with an initial complementary slackness violation of t_0, in O(|log(ε/t_0)| √n) iterations of the predictor-corrector method the complementary slackness violation can be reduced to less than or equal to ε > 0. We also analyze a modified corrector direction in which the linear system to be solved differs from that of the predictor only in the right-hand side, and obtain a similar bound. We then use this modified corrector step in an implementable method which is shown to take a total of O(|log(ε/t_0)| √n log(n)) predictor and corrector steps. Key words: linear programming, semidefinite programming, interior point methods, path following, ...
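A quick back-of-the-envelope check of where a bound of the form O(|log(ε/t_0)| √n) comes from: if each step multiplies the violation by (1 - δ/√n) for some constant δ (an assumed reduction rate, used here only for illustration), then going from t_0 down to ε takes at most (√n/δ) log(t_0/ε) steps.

```python
import math

# Assumed parameters: problem size n, initial violation t0, target eps,
# and a hypothetical per-step reduction constant delta.
n, t0, eps, delta = 100, 1.0, 1e-6, 0.5
rate = 1 - delta / math.sqrt(n)

# Simulate the geometric decrease of the violation.
violation, steps = t0, 0
while violation > eps:
    violation *= rate
    steps += 1

# Theoretical ceiling: since -log(1 - x) >= x, the step count is at most
# (sqrt(n)/delta) * log(t0/eps), up to rounding.
bound = math.ceil(math.sqrt(n) / delta * math.log(t0 / eps)) + 1
print(f"steps taken: {steps}, theoretical bound: {bound}")
```

This is only the shape of the argument; the paper's contribution is proving that its predictor-corrector directions actually achieve such a per-iteration reduction.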
Polynomial Convergence of a New Family of Primal-Dual Algorithms for Semidefinite Programming
, 1996
"... This paper establishes the polynomial convergence of a new class of (feasible) primaldual interiorpoint path following algorithms for semidefinite programming (SDP) whose search directions are obtained by applying Newton method to the symmetric central path equation (P T XP ) 1=2 (P \Gamma1 ..."
Abstract

Cited by 24 (8 self)
This paper establishes the polynomial convergence of a new class of (feasible) primal-dual interior-point path-following algorithms for semidefinite programming (SDP) whose search directions are obtained by applying Newton's method to the symmetric central path equation (P^T X P)^{1/2} (P^{-1} S P^{-T}) (P^T X P)^{1/2} - I = 0, where P is a nonsingular matrix. Specifically, we show that the short-step path-following algorithm based on the Frobenius norm neighborhood and the semi-long-step path-following algorithm based on the operator 2-norm neighborhood have O(√n L) and O(nL) iteration-complexity bounds, respectively. When P = I, this yields the first polynomially convergent semi-long-step algorithm based on a pure Newton direction. Restricting the scaling matrix P at each iteration to a certain subset of nonsingular matrices, we are able to establish an O(n^{3/2} L) iteration-complexity for the long-step path-following method. The resulting subclass of search direct...
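The central path equation quoted in this abstract can be checked numerically: when S = X^{-1} (so that XS = I), the left-hand side vanishes for any nonsingular scaling P, since P^{-1} S P^{-T} = (P^T X P)^{-1}. The sketch below verifies this with random assumed data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sqrtm_psd(M):
    """Symmetric square root of a positive definite matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

# Random symmetric positive definite X, with S chosen so that X S = I.
G = rng.normal(size=(4, 4))
X = G @ G.T + 4 * np.eye(4)
S = np.linalg.inv(X)
P = rng.normal(size=(4, 4))   # an arbitrary nonsingular scaling matrix

# Evaluate (P^T X P)^{1/2} (P^{-1} S P^{-T}) (P^T X P)^{1/2} - I.
M = sqrtm_psd(P.T @ X @ P)
Pi = np.linalg.inv(P)
residual = M @ (Pi @ S @ Pi.T) @ M - np.eye(4)
print("residual norm:", np.linalg.norm(residual))
```

On the central path one has XS = µI rather than XS = I; the algorithms in the paper apply Newton's method to the corresponding equation as µ is driven to zero.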
Implementation of PrimalDual Methods for Semidefinite Programming Based on Monteiro and Tsuchiya Newton Directions and their Variants
 Technical Report, School of Industrial and Systems Engineering, Georgia Tech, Atlanta, GA 30332
, 1997
"... Monteiro and Tsuchiya [23] have proposed two primaldual Newton directions for semidefinite programming, referred to as the MT directions, and established polynomial convergence of path following methods based on them. This paper reports some computational results on the performance of interiorpoin ..."
Abstract

Cited by 20 (3 self)
Monteiro and Tsuchiya [23] have proposed two primal-dual Newton directions for semidefinite programming, referred to as the MT directions, and established polynomial convergence of path-following methods based on them. This paper reports some computational results on the performance of interior-point predictor-corrector methods based on the MT directions and a variant of these directions, called the SChMT direction. We discuss how to compute these directions efficiently and derive their corresponding computational complexities. A main feature of our analysis is that computational formulae for these directions are derived from a unified point of view which entirely avoids the use of Kronecker products. Using this unified approach, we also present schemes to compute the Alizadeh-Haeberly-Overton (AHO) direction, the Nesterov-Todd direction and the HRVW/KSH/M direction with computational complexities (for dense problems) better than previously reported in the literature. Our computational...