Results 1–7 of 7
Sums of squares, moment matrices and optimization over polynomials, 2008
Abstract

Cited by 59 (8 self)
We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.
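The sum-of-squares idea underlying these hierarchies can be made concrete in a small case: a polynomial is SOS exactly when it admits a positive semidefinite Gram matrix in a monomial basis. The sketch below (our illustration, not code from the paper; the polynomial and basis are our choice) checks this for p(x) = x⁴ + 2x² + 1 and recovers an explicit decomposition from the eigendecomposition:

```python
import numpy as np

# p(x) = x^4 + 2x^2 + 1 in the monomial basis m = [1, x, x^2]:
# p = m^T Q m for the Gram matrix Q below (one valid choice).
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q is positive semidefinite, which certifies that p is a sum of squares.
assert np.linalg.eigvalsh(Q).min() >= -1e-9

# Recover an explicit SOS decomposition p = sum_i (L[:, i]^T m)^2
# from the eigendecomposition Q = V diag(w) V^T.
w, V = np.linalg.eigh(Q)
L = V * np.sqrt(np.clip(w, 0.0, None))  # columns scaled by sqrt(eigenvalue)

# Numerical spot-check at a sample point.
x = 1.7
m = np.array([1.0, x, x**2])
p_direct = x**4 + 2 * x**2 + 1
p_sos = float(((L.T @ m) ** 2).sum())
assert abs(p_direct - p_sos) < 1e-9
```

In the hierarchies discussed in the abstract, the Gram matrix is not written down by hand but found (or shown not to exist) by a semidefinite programming solver.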
THE OPERATOR Ψ FOR THE CHROMATIC NUMBER OF A GRAPH, 2008
Abstract

Cited by 11 (1 self)
We investigate hierarchies of semidefinite approximations for the chromatic number χ(G) of a graph G. We introduce an operator Ψ mapping any graph parameter β(G), nested between the stability number α(G) and χ(G), to a new graph parameter Ψβ(G), nested between α(G) and χ(G); Ψβ(G) is polynomial-time computable if β(G) is. As an application, we show that no polynomial-time computable graph parameter is nested between the fractional chromatic number χ∗(·) and χ(·) unless P = NP. Moreover, based on the Motzkin–Straus formulation for α(G), we give (quadratically constrained) quadratic and copositive programming formulations for χ(G). Under some mild assumptions, n/β(G) ≤ Ψβ(G), but, while n/β(G) remains below χ∗(G), Ψβ(G) can reach χ(G) (e.g., for β(·) = α(·)). We also define new polynomial-time computable lower bounds for χ(G), improving the classic Lovász theta number (and its strengthenings obtained by adding nonnegativity and triangle inequalities); experimental results on Hamming graphs, Kneser graphs, and DIMACS benchmark graphs will be given in the follow-up paper [N. Gvozdenović and M. Laurent, SIAM J. Optim., 19 (2008), pp. 592–615].
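The Motzkin–Straus formulation mentioned in the abstract states that the maximum of xᵀAx over the standard simplex equals 1 − 1/ω(G), attained at the uniform distribution on a maximum clique (here A is the adjacency matrix and ω the clique number). A minimal numerical sketch, using the 5-cycle as our own example graph:

```python
import numpy as np

def ms_value(A, x):
    """Evaluate the Motzkin-Straus objective x^T A x."""
    return float(x @ A @ x)

# 5-cycle C5: its clique number is omega = 2 (largest cliques are edges).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Uniform weight on a maximum clique (the edge {0, 1}) attains the
# Motzkin-Straus optimum 1 - 1/omega = 1/2.
x = np.zeros(n)
x[[0, 1]] = 0.5
assert abs(ms_value(A, x) - (1 - 1 / 2)) < 1e-12

# Random points on the simplex never exceed the optimum.
rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.random(n)
    y /= y.sum()
    assert ms_value(A, y) <= 0.5 + 1e-12
```

The copositive and quadratic formulations for χ(G) in the paper build on exactly this kind of simplex-constrained quadratic program.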
Symmetric tensor approximation hierarchies for the completely positive cone, 2012
Abstract

Cited by 2 (0 self)
In this paper we construct two approximation hierarchies for the completely positive cone based on symmetric tensors. We show that one hierarchy corresponds to dual cones of a known polyhedral approximation hierarchy for the copositive cone, and the other corresponds to dual cones of a known semidefinite approximation hierarchy for the copositive cone. As an application, we consider a class of bounds on the stability number of a graph obtained from the polyhedral approximation hierarchy, and we construct a primal optimal solution, with its tensor lifting, for each such linear program.
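For orientation: a matrix M is completely positive when M = BBᵀ for some entrywise-nonnegative B, and every such matrix is doubly nonnegative (entrywise nonnegative and positive semidefinite). The converse fails for n ≥ 5, which is one reason tractable approximation hierarchies such as those in this paper are of interest. A small sketch of the easy direction (our illustration; the random factor B is our choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# A completely positive matrix by construction: M = B B^T with B >= 0.
B = rng.random((4, 6))   # entrywise-nonnegative factor
M = B @ B.T

# Every completely positive matrix is doubly nonnegative:
# (i) entrywise nonnegative, (ii) positive semidefinite.
assert (M >= 0).all()
assert np.linalg.eigvalsh(M).min() >= -1e-9
```

Deciding membership in the completely positive cone itself is hard in general; the hierarchies in the paper approximate it by tractable linear and semidefinite cones.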
APPLYING THE BOUNDARY POINT METHOD TO AN SDP RELAXATION OF THE MAXIMUM INDEPENDENT SET PROBLEM FOR A BRANCH AND BOUND ALGORITHM
Abstract
A common method, originally introduced by Lovász in 1979, for calculating an upper bound on the size of a maximum independent set of a graph is to consider a relaxation of the problem expressed as a semidefinite program (SDP). Today, the most prevalent method for solving a general SDP is a primal-dual interior point method (IPM). These methods are highly developed, provide reliable convergence, and parallelize relatively well on a shared-memory architecture. However, they are severely limited by their memory requirements, which grow with the square of the number of edges in the graph. Here, we investigate the boundary point method (BPM) developed by Povh, Rendl, and Wiegele in 2006. Storage for this method grows as the square of the number of nodes in the graph, allowing us to bound much larger graphs. We have implemented the boundary point method in C within a branch-and-bound framework and discuss several methods used within that framework aimed at increasing the efficiency of the algorithm. We also compare the BPM with …
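The memory argument in the abstract, storage growing with the square of the edge count for an IPM versus the square of the node count for the BPM, is easy to quantify. A back-of-the-envelope sketch (the graph size, edge density, and 8-byte-double assumption are ours, purely illustrative):

```python
# Illustrative memory comparison: IPM storage ~ O(m^2) in the number of
# edges m, BPM storage ~ O(n^2) in the number of nodes n (constants and
# sizes below are assumed, not taken from the paper).
n = 2000                      # nodes
m = n * (n - 1) // 4          # edges at 50% density
ipm_bytes = 8 * m ** 2        # dense m x m system of 8-byte doubles
bpm_bytes = 8 * n ** 2        # dense n x n iterates of 8-byte doubles

print(f"IPM: {ipm_bytes / 1e12:.1f} TB vs BPM: {bpm_bytes / 1e9:.3f} GB")
# -> IPM: 8.0 TB vs BPM: 0.032 GB
```

Even at moderate density, the m² term is out of reach on commodity hardware while the n² term is trivial, which is the practical motivation for the BPM here.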
OPTIMA Mathematical Optimization Society Newsletter, 2012
Abstract
… and frantic search for material of interest, I stumbled on this most appropriate motto: Optimum is maximum at a minimum, due to the nu-jazz artist Mr Gaus (no, this is not made up). In view of the coming ISMP festivities, this sounded like a marvelous quote, at least if properly interpreted. Indeed, it would be completely wrong to suggest that "Optimum is maximum at a minimum in Berlin", with the implication that the next ISMP is a minimum! In fact it is truly the opposite: a global maximum in the history of major MOS meetings. The latest numbers sent to me by Martin Skutella, the very active chair of the organization committee, show that the next ISMP will feature more than 1700 talks and about 600 invited and contributed sessions in 24 program clusters! The downside of this resounding success is that 40 parallel tracks will be necessary... There will also be approximately 2000 participants from more than 60 countries from all over the world. ISMP 2012 will thus be the biggest ISMP so far.
A Complete Characterization of the Gap between Convexity and SOS-Convexity
Abstract
Our first contribution in this paper is to prove that three natural sum of squares (sos) based sufficient conditions for convexity of polynomials, via the definition of convexity, its first-order characterization, and its second-order characterization, are equivalent. These three equivalent algebraic conditions, henceforth referred to as sos-convexity, can be checked by semidefinite programming, whereas deciding convexity is NP-hard. If we denote the sets of convex and sos-convex polynomials in n variables of degree d by C̃n,d and Σ̃Cn,d respectively, then our main contribution is to prove that C̃n,d = Σ̃Cn,d if and only if n = 1 or d = 2 or (n, d) = (2, 4). We also present a complete characterization for forms (homogeneous polynomials) except for the case (n, d) = (3, 4), which is joint work with G. Blekherman and is to be published elsewhere. Our result states that the set Cn,d of convex forms in n variables of degree d equals the set ΣCn,d of sos-convex forms if and only if n = 2 or d = 2 or (n, d) = (3, 4). To prove these results, we present in particular explicit examples of polynomials in C̃2,6 \ Σ̃C2,6 and C̃3,4 \ Σ̃C3,4 and forms in C3,6 \ ΣC3,6 and C4,4 \ ΣC4,4, and a general procedure for constructing forms in Cn,d+2 \ ΣCn,d+2 from nonnegative but not sos forms in n variables and degree d. Although for disparate reasons, the remarkable outcome is that convex polynomials (resp. forms) are sos-convex exactly in the cases where nonnegative polynomials (resp. forms) are sums of squares, as characterized by Hilbert.
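The n = 1 case of the equality C̃n,d = Σ̃Cn,d is the one that can be sketched without an SDP solver: a univariate polynomial is convex iff p''(x) ≥ 0 everywhere, and a nonnegative univariate polynomial is automatically a sum of squares, so convexity and sos-convexity coincide. A minimal numerical sketch (the polynomial and its Gram matrix are our own example, not taken from the paper):

```python
import numpy as np

# Example: p(x) = x^4 + 3x^2 + x, so p''(x) = 12x^2 + 6.
# In the monomial basis m = [1, x], p'' = m^T Q m with:
Q = np.array([[6.0, 0.0],
              [0.0, 12.0]])

# Q is PSD, so p'' is a sum of squares and p is sos-convex.
assert np.linalg.eigvalsh(Q).min() >= 0

# Spot-check ordinary convexity: p'' > 0 on a sample grid.
xs = np.linspace(-10, 10, 101)
assert (12 * xs ** 2 + 6 > 0).all()
```

For n ≥ 2 this coincidence breaks down outside the cases listed in the theorem, which is exactly what the paper's explicit counterexamples demonstrate.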