Results 1–10 of 44
Sums of squares, moment matrices and optimization over polynomials
2008
Cited by 158 (10 self)
We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic/finite convergence, optimality certificates, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try whenever possible to provide detailed proofs and background.
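The moment side of these hierarchies can be illustrated with a toy example not taken from the paper: the moment matrix of a Dirac measure at a point is positive semidefinite of rank one, which is the basic feasibility fact the relaxations exploit. A minimal Python sketch (the specific point and names are illustrative assumptions):

```python
# Illustrative sketch (not the paper's algorithm): the order-r moment matrix
# M_r(y)[i][j] = y_{i+j} of a Dirac measure at a point a has moments y_k = a**k,
# so M_r(y) = u u^T with u = (1, a, ..., a**r), hence PSD of rank 1.

def moment_matrix(a, r):
    """Moment matrix of the Dirac measure at a, up to order r."""
    return [[a ** (i + j) for j in range(r + 1)] for i in range(r + 1)]

M = moment_matrix(2, 2)          # 3x3 moment matrix for a = 2
u = [2 ** i for i in range(3)]   # M = u u^T

# v^T M v = (v . u)^2 >= 0 for every v: a direct PSD certificate here
v = [1.0, -0.5, 0.25]
quad = sum(v[i] * M[i][j] * v[j] for i in range(3) for j in range(3))
assert abs(quad - sum(v[i] * u[i] for i in range(3)) ** 2) < 1e-9
assert quad >= 0.0
```

The actual hierarchies optimize over all PSD moment matrices subject to localizing constraints, which requires a semidefinite programming solver; this sketch only shows why Dirac (i.e., point-evaluation) moments are always feasible.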
Minimizing polynomials via sum of squares over the gradient ideal
 Math. Program
Cited by 54 (18 self)
A method is proposed for finding the global minimum of a multivariate polynomial via sum of squares (SOS) relaxation over its gradient variety. That variety consists of all points where the gradient is zero; it need not be finite. A polynomial that is nonnegative on its gradient variety is shown to be SOS modulo its gradient ideal, provided the gradient ideal is radical or the polynomial is strictly positive on the gradient variety. This opens up the possibility of solving previously intractable polynomial optimization problems. The related problem of constrained minimization is also considered, and numerical examples are discussed. Experiments show that our method using the gradient variety outperforms prior SOS methods.
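A toy example, not from the paper, of the underlying observation: when a polynomial is coercive, its global minimum lies on the gradient variety, so it suffices to bound the polynomial there. The paper certifies such bounds algebraically via SOS modulo the gradient ideal; this sketch merely enumerates a variety that happens to be finite:

```python
# Illustrative sketch (toy example, not the paper's method): for the coercive
# polynomial f(x) = x^4 - 2x^2, the gradient variety {x : f'(x) = 0} is the
# finite set {-1, 0, 1}, and the global minimum -1 is attained on it.

def f(x):
    return x ** 4 - 2 * x ** 2

def df(x):
    return 4 * x ** 3 - 4 * x          # f'(x) = 4x(x - 1)(x + 1)

critical = [-1.0, 0.0, 1.0]            # the gradient variety, finite here
assert all(df(x) == 0.0 for x in critical)
global_min = min(f(x) for x in critical)
assert global_min == -1.0              # attained at x = +/-1
```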
LMI techniques for optimization over polynomials in control: a survey
 IEEE Transactions on Automatic Control
Cited by 31 (17 self)
Abstract—Numerous tasks in control systems involve optimization problems over polynomials, which are in general nonconvex. In order to cope with this difficulty, linear matrix inequality (LMI) techniques have been introduced: they allow one to obtain bounds on the sought solution by solving convex optimization problems, and the conservatism of these bounds can in general be decreased by suitably increasing the size of the problems. This survey aims to provide the reader with a comprehensive overview of the LMI techniques used in control systems for tackling optimization problems over polynomials, describing approaches such as decomposition into sums of squares, Positivstellensatz, theory of moments, Pólya’s theorem, and matrix dilation. Moreover, it aims to collect the essential problems in control systems where these LMI techniques are used, such as stability and performance investigations in nonlinear systems, uncertain systems, time-delay systems, and genetic regulatory networks. It is expected that this survey may serve as a concise and useful reference for all readers.
A sum of squares approximation of nonnegative polynomials
 SIAM J. Optim
2006
Cited by 28 (5 self)
Abstract. We show that every real nonnegative polynomial f can be approximated as closely as desired (in the l1-norm of its coefficient vector) by a sequence of polynomials {fɛ} that are sums of squares. The novelty is that each fɛ has a simple and explicit form in terms of f and ɛ. Key words: real algebraic geometry; positive polynomials; sum of squares; semidefinite programming. AMS subject classifications: 12E05, 12Y05, 90C22.
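The flavor of the construction can be sketched numerically, assuming the perturbation has the explicit form f_ε = f + ε Σ_{k≤r} Σ_i x_i^{2k}/k! reported for this construction (checking SOS-ness itself requires an SDP solver and is not attempted here). We apply it to the Motzkin polynomial, the classic nonnegative polynomial that is not a sum of squares:

```python
import math

# Sketch assuming the perturbation form f_eps = f + eps * sum_{k<=r} sum_i
# x_i^(2k) / k!, applied to the Motzkin polynomial.  We only verify that the
# perturbation lifts f pointwise and stays small in the l1 coefficient norm.

def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3.0 * x**2 * y**2 + 1.0

def f_eps(x, y, eps, r):
    pert = sum((x ** (2 * k) + y ** (2 * k)) / math.factorial(k)
               for k in range(r + 1))
    return motzkin(x, y) + eps * pert

# the perturbation is itself a sum of squares, so f_eps >= f everywhere:
pts = [(0.3 * i, 0.2 * j) for i in range(-5, 6) for j in range(-5, 6)]
assert all(f_eps(x, y, 1e-3, 8) >= motzkin(x, y) for x, y in pts)
# the l1 distance of coefficient vectors is eps * 2 * sum(1/k!) < eps * 2 * e
assert 1e-3 * 2 * sum(1.0 / math.factorial(k) for k in range(9)) < 1e-3 * 2 * math.e
```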
Approximations of nonnegative polynomials via simple high degree perturbations
 Math. Z
Cited by 27 (15 self)
Abstract. We show that every real polynomial f nonnegative on [−1, 1]^n can be approximated, in the l1-norm of coefficients, by a sequence of polynomials {fεr} that are sums of squares. This complements the denseness result of Berg, Christensen and Ressel on the existence of s.o.s. approximations, as we provide a very simple and explicit approximation sequence. Then we show that if the moment problem holds for a basic closed semialgebraic set KS ⊂ R^n with nonempty interior, then every polynomial nonnegative on KS can be approximated in a similar fashion by elements from the corresponding preordering. Finally, we show that the degree of the perturbation in the approximating sequence depends on ɛ as well as on the degree and the size of the coefficients of the nonnegative polynomial f, but not on the specific values of its coefficients.
Global optimization of polynomials using gradient tentacles and sums of squares
 SIAM Journal on Optimization
Cited by 26 (0 self)
We consider the problem of computing the global infimum of a real polynomial f on R^n. Every global minimizer of f lies on its gradient variety, i.e., the algebraic subset of R^n where the gradient of f vanishes. If f attains a minimum on R^n, it is therefore equivalent to look for the greatest lower bound of f on its gradient variety. Nie, Demmel and Sturmfels recently proved a theorem about the existence of sums of squares certificates for such lower bounds. Based on these certificates, they find arbitrarily tight relaxations of the original problem that can be formulated as semidefinite programs and thus solved efficiently. We deal here with the more general case when f is bounded from below but does not necessarily attain a minimum. In this case, the method of Nie, Demmel and Sturmfels might yield completely wrong results. In order to overcome this problem, we replace the gradient variety by larger semialgebraic subsets of R^n which we call gradient tentacles. It then becomes substantially harder to prove the existence of the necessary sums of squares certificates.
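The failure mode this paper addresses can be seen on a standard toy example (not taken from the paper): a polynomial bounded below whose infimum is not attained, so the gradient variety misses the infimum entirely:

```python
# Toy illustration of why the gradient variety alone can fail:
# f(x, y) = (x*y - 1)^2 + x^2 has infimum 0 on R^2 but never attains it,
# while its unique critical point (0, 0) gives f = 1.  Minimizing over the
# gradient variety would therefore report 1 instead of 0.

def f(x, y):
    return (x * y - 1.0) ** 2 + x ** 2

def grad(x, y):
    return (2.0 * (x * y - 1.0) * y + 2.0 * x,   # df/dx
            2.0 * (x * y - 1.0) * x)             # df/dy

assert grad(0.0, 0.0) == (0.0, 0.0)   # the only critical point
value_on_variety = f(0.0, 0.0)        # = 1, far from the infimum 0

# along x = 1/t, y = t the value tends to the true infimum 0
tail = [f(1.0 / t, t) for t in (10.0, 100.0, 1000.0)]
assert tail[0] > tail[1] > tail[2] > 0.0
```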
Positive polynomials in scalar and matrix variables, the spectral theorem, and optimization
in Structured Matrices and Dilations. A Volume Dedicated to the Memory of Tiberiu Constantinescu
Cited by 23 (8 self)
We follow a stream of the history of positive matrices and positive functionals, as applied to algebraic sums of squares decompositions, with emphasis on the interaction between classical moment problems, function theory of one or several complex variables, and modern operator theory. The second part of the survey focuses on recently discovered connections between real algebraic geometry and optimization, as well as polynomials in matrix variables and some control theory problems. These new applications have prompted a series of recent studies devoted to the structure of positivity and convexity in a free ∗-algebra, the appropriate setting for analyzing inequalities on polynomials having matrix variables. We sketch some of these developments, add to them, and comment on the rapidly growing literature.
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
 SIAM Journal On Optimization
2008
Cited by 23 (4 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization.
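The basic sparsity observation can be illustrated with a toy example not from the paper: when f = g1 + g2 and each g_i involves only a few variables, the sum of per-block lower bounds is a valid (cheap) lower bound on f. The paper obtains such block bounds via sparse SOS relaxations; this sketch uses brute-force grid search, and the specific polynomials are invented for illustration:

```python
import itertools

# Sketch of the sparsity idea: min f >= (min g1) + (min g2) when f = g1 + g2,
# and each small block can be bounded separately and cheaply.

def g1(x1, x2):  # involves only x1, x2
    return (x1 - 1.0) ** 2 + (x1 * x2 - 1.0) ** 2

def g2(x2, x3):  # involves only x2, x3
    return (x2 + x3) ** 2 + (x3 - 2.0) ** 2

grid = [i * 0.25 for i in range(-12, 13)]          # [-3, 3] in steps of 0.25
m1 = min(g1(a, b) for a in grid for b in grid)     # cheap 2-variable searches
m2 = min(g2(b, c) for b in grid for c in grid)
m_joint = min(g1(a, b) + g2(b, c)
              for a, b, c in itertools.product(grid, repeat=3))
assert m1 + m2 <= m_joint + 1e-12                  # block bound is a lower bound
```

Here the block bound is strict: g1 wants x2 = 1 while g2 wants x2 = -2, so the joint minimum exceeds m1 + m2 = 0.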
Frontiers of reality in Schubert calculus
 Bulletin of the AMS
Cited by 21 (9 self)
Abstract. The theorem of Mukhin, Tarasov, and Varchenko (formerly the Shapiro conjecture for Grassmannians) asserts that all (a priori complex) solutions to certain geometric problems in the Schubert calculus are actually real. Their proof is quite remarkable, using ideas from integrable systems, Fuchsian differential equations, and representation theory. There is now a second proof of this result, and it has ramifications in other areas of mathematics, from curves to control theory to combinatorics. Despite this work, the original Shapiro conjecture is not yet settled. While it is false as stated, it has several interesting and not quite understood modifications and generalizations that are likely true, and the strongest and most subtle version of the Shapiro conjecture for Grassmannians remains open.
NONNEGATIVE POLYNOMIALS AND SUMS OF SQUARES
Cited by 15 (4 self)
A real polynomial in n variables is called nonnegative if it is greater than or equal to 0 at all points in R^n. It is a central question in real algebraic geometry whether a nonnegative polynomial can be written in a way that makes its nonnegativity apparent, i.e., as a sum of squares of polynomials (or more general …
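The point of such a representation can be seen concretely on a univariate toy example (not from this paper): an SOS decomposition is a certificate that makes nonnegativity apparent by inspection, whereas in two or more variables such certificates can fail to exist (e.g. for the Motzkin polynomial):

```python
# Sketch: an SOS certificate makes nonnegativity apparent.
# p(x) = x^4 - 2x^2 + 2 = (x^2 - 1)^2 + 1^2, so p >= 1 > 0 everywhere,
# with no further analysis needed.

def p(x):
    return x ** 4 - 2 * x ** 2 + 2

def sos(x):
    return (x ** 2 - 1) ** 2 + 1 ** 2   # explicit sum of two squares

pts = [0.1 * k for k in range(-30, 31)]
assert all(abs(p(x) - sos(x)) < 1e-9 for x in pts)   # same polynomial
assert min(p(x) for x in pts) >= 1.0 - 1e-9          # visibly bounded below by 1
```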