Results 1–10 of 31
Sums of squares, moment matrices and optimization over polynomials
2008
"... We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NPhard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory ..."
Abstract

Cited by 60 (7 self)
 Add to MetaCart
We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificates, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.
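The SOS certificates behind these relaxations reduce to semidefinite feasibility: a polynomial p is a sum of squares exactly when p(x) = z(x)^T Q z(x) for some positive semidefinite "Gram" matrix Q and vector of monomials z(x). A minimal sketch verifying a hand-picked certificate (the polynomial and matrix below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Certificate for p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2 with monomial
# basis z(x) = (1, x, x^2).  The hierarchies above search for such a
# Q by semidefinite programming; here we only verify one by hand.
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

def p(x):
    return x**4 + 2 * x**2 + 1

def gram_form(x):
    z = np.array([1.0, x, x**2])
    return z @ Q @ z

# Q is positive semidefinite ...
assert np.all(np.linalg.eigvalsh(Q) >= -1e-12)
# ... and reproduces p, so p is certified to be a sum of squares.
for x in np.linspace(-2.0, 2.0, 9):
    assert abs(gram_form(x) - p(x)) < 1e-9
```

Extracting a factor L with Q = L^T L (e.g. by eigendecomposition) recovers an explicit sum-of-squares decomposition; here it gives back (x^2 + 1)^2.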
A sum of squares approximation of nonnegative polynomials
SIAM J. Optim., 2006
"... Abstract. We show that every real nonnegative polynomial f can be approximated as closely as desired (in the l1norm of its coefficient vector) by a sequence of polynomials {fɛ} that are sums of squares. The novelty is that each fɛ has a simple and explicit form in terms of f and ɛ. Key words. Real ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
Abstract. We show that every real nonnegative polynomial f can be approximated as closely as desired (in the l1-norm of its coefficient vector) by a sequence of polynomials {fε} that are sums of squares. The novelty is that each fε has a simple and explicit form in terms of f and ε. Key words. Real algebraic geometry; positive polynomials; sums of squares; semidefinite programming. AMS subject classifications. 12E05, 12Y05, 90C22
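The flavor of such an explicit approximation can be sketched numerically. The perturbation used below, f + ε Σ_{k≤r} Σ_i x_i^{2k}/k!, is an assumption modeled on the commonly cited form of fε (the paper's exact definition may differ); Motzkin's nonnegative-but-not-SOS polynomial serves as the test case:

```python
import math

def perturbation(x, r):
    # assumed perturbation: sum over variables i and powers k of x_i^(2k)/k!
    return sum(xi ** (2 * k) / math.factorial(k)
               for xi in x for k in range(r + 1))

def f(x):  # Motzkin's polynomial: nonnegative on R^2 but not SOS
    x1, x2 = x
    return x1**4 * x2**2 + x1**2 * x2**4 - 3 * x1**2 * x2**2 + 1

eps, r, n = 1e-3, 6, 2
f_eps = lambda x: f(x) + eps * perturbation(x, r)

# The perturbation adds a coefficient eps/k! for each variable and each
# power k, so the l1 distance between coefficient vectors is tiny:
l1_dist = eps * sum(n / math.factorial(k) for k in range(r + 1))
assert l1_dist <= eps * n * math.e   # -> 0 as eps -> 0

# f_eps strictly dominates f everywhere, by construction.
assert all(f_eps((a, b)) > f((a, b))
           for a in (-1.0, 0.5, 2.0) for b in (-2.0, 0.0, 1.5))
```

The point of the theorem is that this cheap, explicit perturbation already lands in the SOS cone, so no search is needed to exhibit the approximating sequence.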
Approximations of nonnegative polynomials via simple high-degree perturbations
Math. Z.
"... Abstract. We show that every real polynomial f nonnegative on [−1, 1] n can be approximated in the l1norm of coefficients, by a sequence of polynomials {fεr} that are sums of squares. This complements the existence of s.o.s. approximations in the denseness result of Berg, Christensen and Ressel, as ..."
Abstract

Cited by 17 (11 self)
 Add to MetaCart
Abstract. We show that every real polynomial f nonnegative on [−1, 1]^n can be approximated, in the l1-norm of coefficients, by a sequence of polynomials {fεr} that are sums of squares. This complements the denseness result of Berg, Christensen and Ressel on the existence of s.o.s. approximations, as we provide a very simple and explicit approximation sequence. Then we show that if the moment problem holds for a basic closed semialgebraic set K_S ⊂ R^n with nonempty interior, then every polynomial nonnegative on K_S can be approximated in a similar fashion by elements from the corresponding preordering. Finally, we show that the degree of the perturbation in the approximating sequence depends on ε as well as on the degree and the size of the coefficients of the nonnegative polynomial f, but not on the specific values of its coefficients.
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
SIAM Journal on Optimization, 2008
"... This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small ” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxa ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and in sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
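The payoff of sparsity is in the size of the resulting semidefinite programs: a dense order-d moment matrix in n variables is indexed by the C(n+d, d) monomials of degree at most d, while a sparse relaxation of f = Σ_j f_j can use one small matrix per summand. A back-of-the-envelope comparison with illustrative numbers (not taken from the paper):

```python
from math import comb

def dense_basis(n, d):
    # number of monomials of degree <= d in n variables
    return comb(n + d, d)

# Suppose f is a sum of 20 "small" polynomials, each in 3 of the
# n = 22 variables, at relaxation order d = 2 (illustrative numbers).
n, d = 22, 2
blocks = [3] * 20

dense = dense_basis(n, d) ** 2                          # one big moment matrix
sparse = sum(dense_basis(nj, d) ** 2 for nj in blocks)  # many small ones

assert sparse < dense   # 2,000 matrix entries vs. 76,176
```

Since interior-point SDP cost grows steeply with matrix size, replacing one C(n+d, d)-dimensional matrix by many small blocks is what makes the sparse relaxations tractable at scale.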
Positive polynomials in scalar and matrix variables, the spectral theorem, and optimization
In: Structured Matrices and Dilations. A Volume Dedicated to the Memory of Tiberiu Constantinescu
"... We follow a stream of the history of positive matrices and positive functionals, as applied to algebraic sums of squares decompositions, with emphasis on the interaction between classical moment problems, function theory of one or several complex variables and modern operator theory. The second par ..."
Abstract

Cited by 13 (3 self)
 Add to MetaCart
We follow a stream of the history of positive matrices and positive functionals, as applied to algebraic sums of squares decompositions, with emphasis on the interaction between classical moment problems, function theory of one or several complex variables, and modern operator theory. The second part of the survey focuses on recently discovered connections between real algebraic geometry and optimization, as well as polynomials in matrix variables and some control theory problems. These new applications have prompted a series of recent studies devoted to the structure of positivity and convexity in a free ∗-algebra, the appropriate setting for analyzing inequalities on polynomials having matrix variables. We sketch some of these developments, add to them, and comment on the rapidly growing literature.
Global optimization of polynomials using gradient tentacles and sums of squares
 SIAM Journal on Optimization
"... We consider the problem of computing the global infimum of a real polynomial f on R n. Every global minimizer of f lies on its gradient variety, i.e., the algebraic subset of R n where the gradient of f vanishes. If f attains a minimum on R n, it is therefore equivalent to look for the greatest low ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
We consider the problem of computing the global infimum of a real polynomial f on R^n. Every global minimizer of f lies on its gradient variety, i.e., the algebraic subset of R^n where the gradient of f vanishes. If f attains a minimum on R^n, it is therefore equivalent to look for the greatest lower bound of f on its gradient variety. Nie, Demmel and Sturmfels recently proved a theorem about the existence of sums of squares certificates for such lower bounds. Based on these certificates, they find arbitrarily tight relaxations of the original problem that can be formulated as semidefinite programs and thus solved efficiently. We deal here with the more general case when f is bounded from below but does not necessarily attain a minimum. In this case, the method of Nie, Demmel and Sturmfels might yield completely wrong results. In order to overcome this problem, we replace the gradient variety by larger semialgebraic subsets of R^n which we call gradient tentacles. It now becomes substantially harder to prove the existence of the necessary sums of squares certificates.
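In one variable the gradient-variety idea is transparent: if f attains its minimum on R, the global minimum is the smallest value of f over the real roots of f′. A sketch with a made-up polynomial:

```python
import numpy as np

# f(x) = x^4 - 4x^3 + 6 attains its minimum on R, so the global
# minimum is taken over the gradient variety {x : f'(x) = 0}.
f_coeffs = [1.0, -4.0, 0.0, 0.0, 6.0]   # x^4 - 4x^3 + 6
df_coeffs = np.polyder(f_coeffs)        # f'(x) = 4x^3 - 12x^2 = 4x^2(x - 3)

crit = np.roots(df_coeffs)
real_crit = crit[np.abs(crit.imag) < 1e-9].real
global_min = min(np.polyval(f_coeffs, x) for x in real_crit)
# minimum attained at x = 3, where f(3) = -21
```

When the infimum is not attained, the critical values miss it entirely, which is exactly why the paper enlarges the gradient variety to the semialgebraic "tentacles" it introduces.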
Frontiers of reality in Schubert calculus
 Bulletin of the AMS
"... Abstract. The theorem of Mukhin, Tarasov, and Varchenko (formerly the Shapiro conjecture for Grassmannians) asserts that all (a priori complex) solutions to certain geometric problems in the Schubert calculus are actually real. Their proof is quite remarkable, using ideas from integrable systems, Fu ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
Abstract. The theorem of Mukhin, Tarasov, and Varchenko (formerly the Shapiro conjecture for Grassmannians) asserts that all (a priori complex) solutions to certain geometric problems in the Schubert calculus are actually real. Their proof is quite remarkable, using ideas from integrable systems, Fuchsian differential equations, and representation theory. There is now a second proof of this result, and it has ramifications in other areas of mathematics, from curves to control theory to combinatorics. Despite this work, the original Shapiro conjecture is not yet settled. While it is false as stated, it has several interesting and not quite understood modifications and generalizations that are likely true, and the strongest and most subtle version of the Shapiro conjecture for Grassmannians remains open.
On Hilbert’s construction of positive polynomials
"... Abstract. In 1888, Hilbert described how to find real polynomials which take only nonnegative values but are not a sum of squares of polynomials. His construction was so restrictive that no explicit examples appeared until the late 1960s. We revisit and generalize Hilbert’s construction and present ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
Abstract. In 1888, Hilbert described how to find real polynomials which take only nonnegative values but are not sums of squares of polynomials. His construction was so restrictive that no explicit examples appeared until the late 1960s. We revisit and generalize Hilbert’s construction and present many such polynomials.
1. History and Overview. A real polynomial f(x1, ..., xn) is psd or positive if f(a) ≥ 0 for all a ∈ R^n; it is sos or a sum of squares if there exist real polynomials h_j so that f = Σ h_j^2. For forms, we follow the notation of [4] and use P_{n,m} to denote the cone of real psd forms of even degree m in n variables, Σ_{n,m} to denote its subcone of sos forms, and let ∆_{n,m} = P_{n,m} \ Σ_{n,m}. The Fundamental Theorem of Algebra implies that ∆_{2,m} = ∅; ∆_{n,2} = ∅ follows from the diagonalization of psd quadratic forms. The first suggestion that a psd form might not be sos was made by Minkowski in the oral defense of his 1885 doctoral dissertation: Minkowski proposed the thesis that not every psd form is sos. Hilbert was one of his official “opponents” and remarked that Minkowski’s arguments had convinced him that this thesis should be true for ternary forms. (See [14], [15] and [24].) Three years later, in a single remarkable paper, Hilbert [11] resolved the question. He first showed that every F ∈ P_{3,4} is a sum of three squares of quadratic forms; see [23] and [26] for recent expositions and [17, 18] for another approach. Hilbert then described a construction of forms in ∆_{3,6} and ∆_{4,4}; after multiplying these by powers of linear forms if necessary, it follows that ∆_{n,m} ≠ ∅ if n ≥ 3 and m ≥ 6, or n ≥ 4 and m ≥ 4. The goal of this paper is to isolate the underlying mechanism of Hilbert’s construction, show that it applies to situations more general than those in [11], and use it to produce many new examples. In [11], Hilbert first worked with polynomials in two variables, which homogenize to ternary forms.
Suppose f1(x, y) and f2(x, y) are two relatively prime real cubic polynomials with nine distinct real common zeros – {πi}, indexed arbitrarily – so that
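The explicit examples that finally appeared in the late 1960s include Motzkin’s polynomial, whose nonnegativity follows from the AM-GM inequality applied to (x^4 y^2, x^2 y^4, 1) even though it is not a sum of squares. A quick numerical sanity check:

```python
# Motzkin's psd-but-not-sos polynomial:
#   m(x, y) = x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1
# AM-GM on (x^4 y^2, x^2 y^4, 1) gives the sum >= 3 (x^6 y^6)^(1/3),
# i.e. m(x, y) >= 0 everywhere; we spot-check that on a grid.
def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

grid = [i / 4 for i in range(-12, 13)]
assert all(motzkin(x, y) >= 0 for x in grid for y in grid)
# equality in AM-GM forces x^4 y^2 = x^2 y^4 = 1, so the real zeros
# are exactly the four points (+-1, +-1):
assert motzkin(1, 1) == 0 and motzkin(-1, 1) == 0
```

The grid check is of course no proof; the AM-GM argument in the comment is what actually establishes nonnegativity, while the failure to be sos requires a separate coefficient argument.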
Convex geometry of orbits
Combinatorial and Computational Geometry, Math. Sci. Res. Inst. Publ., 2005
"... Abstract. We study metric properties of convex bodies B and their polars B ◦ , where B is the convex hull of an orbit under the action of a compact group G. Examples include the Traveling Salesman Polytope in polyhedral combinatorics (G = Sn, the symmetric group), the set of nonnegative polynomials ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
Abstract. We study metric properties of convex bodies B and their polars B◦, where B is the convex hull of an orbit under the action of a compact group G. Examples include the Traveling Salesman Polytope in polyhedral combinatorics (G = S_n, the symmetric group), the set of nonnegative polynomials in real algebraic geometry (G = SO(n), the special orthogonal group), and the convex hull of the Grassmannian and the unit comass ball in the theory of calibrated geometries (G = SO(n), but with a different action). We compute the radius of the largest ball contained in the symmetric Traveling Salesman Polytope, give a reasonably tight estimate for the radius of the Euclidean ball containing the unit comass ball, and review (sometimes with simpler and unified proofs) recent results on the structure of the set of nonnegative polynomials (the radius of the inscribed ball, volume estimates, and relations to sums of squares). Our main tool is a new simple description of the ellipsoid of largest volume contained in B◦.
Nonnegative polynomials and sums of squares
"... A real polynomial in n variables is called nonnegative if it is greater than or equal to 0 at all points in Rn. It is a central question in real algebraic geometry whether a nonnegative polynomial can be written in a way that makes its nonnegativity apparent, i.e. as a sum of squares of polynomials ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
A real polynomial in n variables is called nonnegative if it is greater than or equal to 0 at all points in R^n. It is a central question in real algebraic geometry whether a nonnegative polynomial can be written in a way that makes its nonnegativity apparent, i.e. as a sum of squares of polynomials (or more general