Results 1–10 of 19
On the complexity of Putinar’s Positivstellensatz
, 2008
Cited by 39 (8 self)

Abstract:
Let S = {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} be a basic closed semialgebraic set defined by real polynomials g_i. Putinar's Positivstellensatz says that, under a certain condition stronger than compactness of S, every real polynomial f positive on S possesses a representation f = ∑_{i=0}^m σ_i g_i, where g_0 := 1 and each σ_i is a sum of squares of polynomials. Such a representation is a certificate for the nonnegativity of f on S. We give a bound on the degrees of the terms σ_i g_i in this representation which depends on the description of S, the degree of f, and a measure of how close f is to having a zero on S. As a consequence, we get information about the convergence rate of Lasserre's procedure for optimization of a polynomial subject to polynomial constraints.
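A Putinar certificate of this kind can be checked mechanically. Below is a minimal numerical sketch for a toy problem of our own choosing (the set S, the polynomial f, and the multipliers σ_i are illustrative, not from the paper):

```python
import numpy as np

# Toy Putinar certificate, assuming the simple set
# S = {x in R : g1(x) = 1 - x^2 >= 0}, i.e. S = [-1, 1].
# The polynomial f(x) = 2 - x^2 is positive on S and has the
# representation f = sigma_0 * g0 + sigma_1 * g1 with g0 = 1 and
# sigma_0 = sigma_1 = 1 (each trivially a sum of squares).
f = lambda x: 2.0 - x**2
g1 = lambda x: 1.0 - x**2
sigma0 = 1.0  # SOS: 1^2
sigma1 = 1.0  # SOS: 1^2

xs = np.linspace(-3.0, 3.0, 601)
# The certificate must reproduce f identically, not just on S.
residual = float(np.max(np.abs(f(xs) - (sigma0 * 1.0 + sigma1 * g1(xs)))))
# On S every term sigma_i * g_i is >= 0, which witnesses f > 0 there.
positive_on_S = bool(np.all(f(xs[np.abs(xs) <= 1.0]) > 0.0))
```

The point of the representation is exactly this: once the σ_i are exhibited as sums of squares, nonnegativity of f on S follows term by term, with no further search.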
Biquadratic Optimization over Unit Spheres and Semidefinite Programming Relaxations
, 2008
Cited by 32 (15 self)

Abstract:
This paper studies the so-called biquadratic optimization over unit spheres min_{x ∈ R^n, y ∈ R^m} ∑ b_{ijkl} x_i y_j x_k y_l
An exact Jacobian SDP relaxation for polynomial optimization
 Mathematical Programming, Series A
Cited by 21 (7 self)

Abstract:
Given polynomials f(x), g_i(x), h_j(x), we study how to minimize f(x) on the set S = {x ∈ R^n : h_1(x) = · · · = h_{m1}(x) = 0, g_1(x) ≥ 0, ..., g_{m2}(x) ≥ 0}. Let f_min be the minimum of f on S. Suppose S is nonsingular and f_min is achievable on S, which are true generically. This paper proposes a new type of semidefinite programming (SDP) relaxation, which is the first one for solving this problem exactly. First, we construct new polynomials ϕ_1, ..., ϕ_r, using the Jacobian of f, h_i, g_j, such that the above problem is equivalent to: min_{x ∈ R^n} f(x) subject to h_i(x) = 0 (1 ≤ i ≤ m1), ϕ_j(x) = 0 (1 ≤ j ≤ r), and g_1(x)^{ν_1} · · · g_{m2}(x)^{ν_{m2}} ≥ 0 for all ν ∈ {0, 1}^{m2}. Second, we prove that for all N big enough, the standard Nth-order Lasserre SDP relaxation is exact for solving this equivalent problem, that is, its optimal value equals f_min. Some variations and examples are also shown.
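The 2^{m2} sign conditions g_1(x)^{ν_1} · · · g_{m2}(x)^{ν_{m2}} ≥ 0 in the equivalent problem can be enumerated mechanically. A minimal sketch, where the constraint functions g_i and the test point are made-up placeholders rather than anything from the paper:

```python
import itertools

def product_constraints(gs):
    """One callable per nu in {0,1}^m, evaluating prod_i g_i(x)^{nu_i}."""
    cons = []
    for nu in itertools.product((0, 1), repeat=len(gs)):
        def c(x, nu=nu):  # bind this nu at definition time
            val = 1.0
            for g_i, n_i in zip(gs, nu):
                if n_i:
                    val *= g_i(x)
            return val
        cons.append(c)
    return cons

# Two illustrative constraints: g1(x) = x1, g2(x) = 1 - x1 - x2.
gs = [lambda x: x[0], lambda x: 1.0 - x[0] - x[1]]
cons = product_constraints(gs)          # 2^2 = 4 product constraints
vals = [c((0.25, 0.25)) for c in cons]  # all nonnegative at a feasible point
```

Note that the nu = (0, ..., 0) case is the trivially true constraint 1 ≥ 0, so the enumeration reproduces the original inequalities together with all their pairwise and higher products.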
Regularization methods for sum of squares relaxations in large scale polynomial optimization
 Department of Mathematics, University of California
, 2009
Cited by 4 (0 self)

Abstract:
We study how to solve sum of squares (SOS) and Lasserre relaxations for large-scale polynomial optimization. When interior-point methods are used, typically only small or moderately large problems can be solved. This paper proposes regularization-type methods that can solve significantly larger problems. We first describe these methods for general conic semidefinite optimization, and then apply them to solve large-scale polynomial optimization. Their efficiency is demonstrated by extensive numerical computations. In particular, a general dense quartic polynomial optimization problem with 100 variables can be solved on a regular computer, which is almost impossible with prior SOS solvers. Key words: polynomial optimization, regularization methods, semidefinite programming, sum of squares, Lasserre's relaxation. AMS subject classification: 65K05, 90C22.
Positive polynomials on unbounded equality-constrained domains
, 2011
CERTIFIED RELAXATION FOR POLYNOMIAL OPTIMIZATION ON SEMIALGEBRAIC SETS
, 2013
Cited by 2 (0 self)

Abstract:
In this paper, we describe a relaxation method to compute the minimal critical value of a real polynomial function on a semialgebraic set S and the ideal defining the points at which the minimal critical value is reached. We show that any relaxation hierarchy which is the projection of the Karush-Kuhn-Tucker (KKT) relaxation stops in a finite number of steps, and that the ideal defining the minimizers is generated by the kernel of the associated moment matrix in that degree. Assuming the minimizer ideal is zero-dimensional, we give a new criterion to detect when the minimum is reached, and we prove that this criterion is satisfied for a sufficiently high degree. This exploits a new representation of positive polynomials as elements of the preordering modulo the KKT ideal, which involves only polynomials in the initial set of variables.
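As a small illustration of the Karush-Kuhn-Tucker system that such relaxations are built on, the KKT equations can be generated symbolically for a toy equality-constrained problem. The concrete f and h below are our own illustration (the paper also treats inequality constraints and works with the full KKT ideal):

```python
import sympy as sp

# Toy problem: minimize f = x^2 + y^2 subject to h = x + y - 1 = 0.
# The (equality-only) KKT ideal is generated by h together with the
# components of grad f - lam * grad h; its variety contains every minimizer.
x, y, lam = sp.symbols('x y lam')
f = x**2 + y**2
h = x + y - 1
L = f - lam * h
kkt = [sp.diff(L, v) for v in (x, y)] + [h]  # generators of the KKT system
sol = sp.solve(kkt, (x, y, lam), dict=True)  # here: a single critical point
min_value = f.subs(sol[0])                   # the minimal critical value
```

For this convex toy problem the unique critical point is the minimizer (1/2, 1/2), so the minimal critical value 1/2 coincides with the minimum of f on the constraint set.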
PROBABILISTIC ALGORITHM FOR POLYNOMIAL OPTIMIZATION OVER A REAL ALGEBRAIC SET
, 2013
Cited by 2 (2 self)

Abstract:
Let f, f_1, ..., f_s be polynomials with rational coefficients in the indeterminates X = X_1, ..., X_n of maximum degree D, and let V be the set of common complex solutions of F = (f_1, ..., f_s). We give an algorithm which, up to some regularity assumptions on F, computes an exact representation of the global infimum f⋆ = inf_{x ∈ V ∩ R^n} f(x), i.e., a univariate polynomial vanishing at f⋆ and an isolating interval for f⋆. Furthermore, this algorithm decides whether f⋆ is reached and, if so, returns x⋆ ∈ V ∩ R^n such that f(x⋆) = f⋆. This algorithm is probabilistic. It makes use of the notion of polar varieties. Its complexity is essentially cubic in (sD)^n and linear in the complexity of evaluating the input. This fits within the best known deterministic complexity class D^{O(n)}. We report on practical experiments with a first implementation that is available as a Maple package. It can tackle global optimization problems that were unreachable by previous exact algorithms and can manage instances that are hard to solve with purely numeric techniques. As far as we know, even under the extra genericity assumptions on the input, it is the first probabilistic algorithm that combines practical efficiency with good control of complexity for this problem.
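The output format described here, a univariate polynomial vanishing at f⋆ together with an isolating interval, can be refined to any accuracy by bisection. A sketch, with a made-up polynomial standing in for the algorithm's actual output:

```python
def refine(p, a, b, tol=1e-9):
    """Shrink an isolating interval [a, b] for the single root of p in it,
    assuming p changes sign exactly once on [a, b]."""
    fa = p(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = p(m)
        if fa * fm <= 0.0:   # root lies in [a, m]
            b = m
        else:                # root lies in [m, b]
            a, fa = m, fm
    return a, b

# Stand-in output: f* would be sqrt(2), the root of t^2 - 2 in [1, 2].
p = lambda t: t**2 - 2.0
a, b = refine(p, 1.0, 2.0)
approx = 0.5 * (a + b)
```

Because the interval is isolating by construction, bisection cannot lose the root, and each iteration halves the width, so tol accuracy is reached in O(log((b - a)/tol)) steps.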
REPRESENTATIONS OF NONNEGATIVE POLYNOMIALS VIA THE CRITICAL IDEALS
Cited by 1 (0 self)

Abstract:
This paper studies the representations of a nonnegative polynomial f on a noncompact semialgebraic set K modulo its critical ideal. Under the assumption that the semialgebraic set K is regular and f satisfies the boundary Hessian conditions (BHC) at each zero of f in K, we show that f can be represented as a sum of squares (SOS) of real polynomials modulo its critical ideal if f ≥ 0 on K. In particular, we work only in the polynomial ring R[X]. 1. Introduction. If a polynomial in one variable f(X) ∈ R[X] satisfies f(X) ≥ 0 for all X ∈ R, then f(X) = ∑_{i=1}^m g_i(X)^2 with g_i(X) ∈ R[X], i.e., f is a sum of squares (SOS) in R[X]. In the multivariable case, however, this is false. A counterexample was given by Motzkin in 1967: if f(X, Y) = 1 + X^4 Y^2 + X^2 Y^4 − 3X^2 Y^2, then f(X, Y) ≥ 0 for all X, Y ∈ R, but f is not an SOS in R[X, Y]. To remedy this, we consider the polynomials that are positive on K, where K is a
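The Motzkin counterexample quoted in this abstract is easy to probe numerically. The grid search below is our own illustration, not part of the paper:

```python
import numpy as np

# Motzkin's polynomial: nonnegative on all of R^2 (by AM-GM applied to
# the terms 1, X^4 Y^2 and X^2 Y^4), yet not a sum of squares in R[X, Y].
def motzkin(x, y):
    return 1.0 + x**4 * y**2 + x**2 * y**4 - 3.0 * x**2 * y**2

xs = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(xs, xs)
grid_min = float(motzkin(X, Y).min())  # ~0, attained near |x| = |y| = 1
value_at_one = motzkin(1.0, 1.0)       # the minimum value 0 at (1, 1)
```

A grid search of course proves nothing; nonnegativity follows from the AM-GM inequality on the three monomials noted in the comment, while the failure to be an SOS requires a separate algebraic argument.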
REPRESENTATION OF NONNEGATIVE POLYNOMIALS VIA THE KKT IDEALS
Cited by 1 (1 self)

Abstract:
This paper studies the representation of a nonnegative polynomial f on a noncompact semialgebraic set K modulo its KKT (Karush-Kuhn-Tucker) ideal. Under the assumption that f satisfies the boundary Hessian conditions (BHC) at each zero of f in K, we show that f can be represented as a sum of squares (SOS) of real polynomials modulo its KKT ideal if f ≥ 0 on K. 1. Introduction. If a polynomial in one variable f(x) ∈ R[x] satisfies f(x) ≥ 0 for all x ∈ R, then f(x) = ∑_{i=1}^m g_i(x)^2 with g_i(x) ∈ R[x], i.e., f is a sum of squares (SOS) in R[x]. In the multivariable case, however, this is false. A counterexample was given by Motzkin in 1967: if f(x, y) = 1 + x^4 y^2 + x^2 y^4 − 3x^2 y^2, then f(x, y) ≥ 0 for all x, y ∈ R, but f is not an SOS in R[x, y]. To overcome this, we consider the polynomials that are positive on K, where K is a semialgebraic set in R^n. For example, Schmüdgen's famous theorem [17] says that
Convergence of the Lasserre Hierarchy of SDP Relaxations for Convex Polynomial Programs without Compactness
, 2013
Cited by 1 (1 self)

Abstract:
The Lasserre hierarchy of semidefinite programming (SDP) relaxations is a powerful scheme for solving polynomial optimization problems over compact semialgebraic sets. In this paper, we show that, for convex polynomial optimization, the Lasserre hierarchy with a slightly extended quadratic module always converges asymptotically, even in the case of noncompact semialgebraic feasible sets. We do this by exploiting a coercivity property of convex polynomials that are bounded below. We further establish that positive definiteness of the Hessian of the associated Lagrangian at a saddle point (rather than of the objective function at each minimizer) guarantees finite convergence of the hierarchy. We obtain finite convergence by first establishing a new sum-of-squares representation of convex polynomials over convex semialgebraic sets under a saddle-point condition.