Results 1–10 of 98
Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization
SIAM Journal on Optimization, 1998
Abstract

Cited by 114 (11 self)
We present a dual-scaling interior-point algorithm and show how it exploits the structure and sparsity of some large-scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to Boolean constraints. We report the first computational results of interior-point algorithms for approximating the maximum-cut semidefinite programs with dimension up to 3000.
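For reference, the maximum-cut semidefinite program this abstract refers to is usually the standard Goemans–Williamson relaxation; a sketch, assuming the weighted-Laplacian formulation (the paper's exact form may differ):

```latex
% Relax x \in \{-1,1\}^n with objective \tfrac14 x^T L x
% (L the graph Laplacian) by lifting X = x x^T:
\begin{aligned}
\max_{X \in \mathbb{S}^n} \quad & \tfrac14 \, \langle L, X \rangle \\
\text{s.t.} \quad & X_{ii} = 1, \quad i = 1, \dots, n, \\
& X \succeq 0 .
\end{aligned}
```

The diagonal constraints replace $x_i^2 = 1$, and $X \succeq 0$ replaces the rank-one condition that is dropped in the relaxation.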
Semidefinite Programming and Combinatorial Optimization
Doc. Math. J. DMV, 1998
Abstract

Cited by 97 (1 self)
We describe a few applications of semidefinite programming in combinatorial optimization.
Approximating the cut-norm via Grothendieck’s inequality
Proc. of the 36th ACM STOC, 2004
Lectures on modern convex optimization
Society for Industrial and Applied Mathematics (SIAM), 2001
Abstract

Cited by 92 (7 self)
Mathematical Programming deals with optimization programs of the form

minimize f(x) subject to g_i(x) ≤ 0, i = 1, ..., m, [x ∈ R^n] (P)

and includes the following general areas:
1. Modelling: methodologies for posing various applied problems as optimization programs;
2. Optimization Theory, focusing on existence, uniqueness and characterization of optimal solutions to optimization programs;
3. Optimization Methods: development and analysis of computational algorithms for various classes of optimization programs;
4. Implementation, testing and application of modelling methodologies and computational algorithms.
Essentially, Mathematical Programming was born in 1948, when George Dantzig invented Linear Programming, the class of optimization programs (P) with linear objective f(·) and ...
Approximating Quadratic Programming With Bound Constraints
Mathematical Programming, 1997
Abstract

Cited by 67 (13 self)
We consider the problem of approximating the global maximum of a quadratic program (QP) with n variables subject to bound constraints. Based on the results of Goemans and Williamson [4] and Nesterov [6], we show that a 4/7-approximate solution can be obtained in polynomial time.

Key words. Quadratic programming, global maximizer, approximation algorithm

This author is supported in part by NSF grant DMI-9522507.

1 Introduction

Consider the quadratic programming (QP) problem

q̄(Q) := Maximize q(x) := x^T Q x  (QP)
        Subject to −e ≤ x ≤ e,

where Q ∈ R^{n×n} is given and e ∈ R^n is the vector of all ones. Let x̄ = x̄(Q) be a maximizer of the problem. In this paper, without loss of generality, we assume that x̄ ≠ 0. Normally, there is a linear term in the objective function: q(x) = x^T Q x + c^T x. However, the problem can be homogenized as

Maximize q(x) := x^T Q x + t c^T x
Subject to −e ≤ x ≤ e, −1 ≤ t ≤ 1

by adding a scalar variable t. There always is an opti...
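The semidefinite relaxation underlying such approximation guarantees is the standard lifting X = x x^T; a sketch under that assumption (the paper's exact formulation may differ in details):

```latex
% Bound constraints -e \le x \le e become diagonal bounds X_{ii} \le 1
% after lifting X = x x^T and dropping the rank-one condition:
\begin{aligned}
\max_{X \in \mathbb{S}^n} \quad & \langle Q, X \rangle \\
\text{s.t.} \quad & X_{ii} \le 1, \quad i = 1, \dots, n, \\
& X \succeq 0 ,
\end{aligned}
\qquad \text{so that} \quad \bar{q}(Q) \le \mathrm{Opt}(\mathrm{SDP}).
```

Any feasible x gives a feasible X = x x^T, which is why the relaxation upper-bounds q̄(Q); the approximation result then rounds an SDP solution back to a feasible point of (QP).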
Outward rotations: a tool for rounding solutions of semidefinite programming relaxations, with applications to MAX CUT and other problems
1999
Abstract

Cited by 60 (7 self)
We present a tool, outward rotations, for enhancing the performance of several semidefinite programming based approximation algorithms. Using outward rotations, we obtain an approximation algorithm for MAX CUT that, in many interesting cases, performs better than the algorithm of Goemans and Williamson. We also obtain an improved approximation algorithm for MAX NAE{3}SAT. Finally, we provide some evidence that outward rotations can also be used to obtain improved approximation algorithms for MAX NAESAT and MAX SAT.

1 Introduction

MAX CUT is perhaps the simplest and most natural APX-complete constraint satisfaction problem (see, e.g., [AL97]). There are various simple ways of obtaining a performance guarantee of 1/2 for the problem. One of them, for example, is just choosing a random cut. No performance guarantee better than 1/2 was known for the problem until Goemans and Williamson [GW95], in a major breakthrough, used semidefinite programming to obtain an approximation algorithm ...
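The Goemans–Williamson rounding step that outward rotations modify can be sketched in a few lines. This is a minimal toy, not the paper's method: the SDP solve is skipped, and the unit-vector embedding for the bipartite 4-cycle is hand-picked (an assumption; a real pipeline would factor an SDP solution X = V V^T).

```python
import numpy as np

# Toy sketch of Goemans-Williamson random-hyperplane rounding.
# Assumption: for this bipartite 4-cycle, antipodal unit vectors are
# an optimal SDP embedding, so we hard-code V instead of solving.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
V = np.array([[1.0, 0.0],    # rows are unit vectors v_i, X = V V^T
              [-1.0, 0.0],
              [1.0, 0.0],
              [-1.0, 0.0]])

rng = np.random.default_rng(0)

def round_hyperplane(V, rng):
    """Pick a random hyperplane through the origin; cut by side."""
    r = rng.standard_normal(V.shape[1])
    x = np.sign(V @ r)
    x[x == 0] = 1.0          # break the measure-zero tie case
    return x

def cut_value(x, edges):
    """Number of edges crossing the cut defined by signs x."""
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

best = max(cut_value(round_hyperplane(V, rng), edges) for _ in range(20))
print(best)  # antipodal vectors make every rounding cut all 4 edges: 4.0
```

Because the vectors are antipodal, every hyperplane separates the two sides identically, so each trial already recovers the maximum cut; on non-bipartite instances the rounded value varies by trial, which is where rotation tricks like the paper's come in.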
On Maximization of Quadratic Form over Intersection of Ellipsoids with Common Center
1998
Abstract

Cited by 48 (3 self)
We demonstrate that if A_1, ..., A_m are symmetric positive semidefinite n×n matrices with positive definite sum and A is an arbitrary symmetric n×n matrix, then the relative accuracy, in terms of the optimal value, of the semidefinite relaxation

max_X { Tr(AX) | Tr(A_i X) ≤ 1, i = 1, ..., m; X ⪰ 0 }  (SDP)

of the optimization program

x^T A x → max | x^T A_i x ≤ 1, i = 1, ..., m  (P)

is not worse than 1 − 1/(2 ln(2m²)). It is shown that this bound is sharp in order, as far as the dependence on m is concerned, and that a feasible solution x to (P) with

x^T A x ≥ Opt(SDP) / (2 ln(2m²))  (*)

can be found efficiently. This somewhat improves one of the results of Nesterov [4], where a bound similar to (*) is established for the case when all A_i are of rank 1.

Keywords: Semidefinite relaxations, quadratic programming

1. Introduction

Let A_i, i = 1, ..., m, be positive semidefinite n×n matrices with positive definite sum, and A be an n×n symmetric matrix. Con...
Cones Of Matrices And Successive Convex Relaxations Of Nonconvex Sets
2000
Abstract

Cited by 47 (20 self)
Let F be a compact subset of the n-dimensional Euclidean space R^n represented by (finitely or infinitely many) quadratic inequalities. We propose two methods, one based on successive semidefinite programming (SDP) relaxations and the other on successive linear programming (LP) relaxations. Each of our methods generates a sequence of compact convex subsets C_k (k = 1, 2, ...) of R^n such that (a) the convex hull of F ⊆ C_{k+1} ⊆ C_k (monotonicity), (b) ∩_{k=1}^∞ C_k = the convex hull of F (asymptotic convergence). Our methods are extensions of the corresponding Lovász–Schrijver lift-and-project procedures, with the use of SDP or LP relaxation applied to general quadratic optimization problems (QOPs) with infinitely many quadratic inequality constraints. Utilizing descriptions of sets based on cones of matrices and their duals, we establish the exact equivalence of the SDP relaxation and the semi-infinite convex QOP relaxation proposed originally by Fujie and Kojima. Using th...
On Lagrangian relaxation of quadratic matrix constraints
SIAM J. Matrix Anal. Appl., 2000
Abstract

Cited by 45 (17 self)
Quadratically constrained quadratic programs (QQPs) play an important modeling role for many diverse problems. These problems are in general NP-hard and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations. For several special cases of QQP, e.g., convex programs and trust region subproblems, the Lagrangian relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even the QQP with two convex constraints but a nonconvex objective. In this paper we consider a certain QQP where the quadratic constraints correspond to the matrix orthogonality condition XX^T = I. For this problem we show that the Lagrangian dual based on relaxing the constraints XX^T = I and the seemingly redundant constraints X^T X = I has a zero duality gap. This result has natural applications to quadratic assignment and graph partitioning problems, as well as the problem of minimizing the weighted sum of the largest eigenvalues of a matrix. We also show that the technique of relaxing quadratic matrix constraints can be used to obtain a strengthened semidefinite relaxation for the max-cut problem.

Key words. Lagrangian relaxations, quadratically constrained quadratic programs, semidefinite programming, quadratic assignment, graph partitioning, max-cut problems
The Convex Geometry of Linear Inverse Problems
2010
Abstract

Cited by 38 (10 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others, including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial ...
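The atomic norm mentioned at the end is the gauge of the atomic set's convex hull; a sketch of the standard definition, assuming a bounded, centrally symmetric atomic set A (so the gauge is a genuine norm):

```latex
% Atomic norm: the gauge function of conv(A)
\|x\|_{\mathcal{A}} \;=\; \inf \left\{\, t > 0 \;:\; x \in t \cdot \operatorname{conv}(\mathcal{A}) \,\right\},
```

For A the set of signed unit coordinate vectors this recovers the \ell_1 norm, and for A the rank-one matrices of unit Frobenius norm it recovers the nuclear norm, which is how the framework unifies sparse and low-rank recovery.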