Results 1 - 10 of 22
Semidefinite representation of convex sets
, 2007
"... Let S = {x ∈ R n: g1(x) ≥ 0, · · · , gm(x) ≥ 0} be a semialgebraic set defined by multivariate polynomials gi(x). Assume S is compact, convex and has nonempty interior. Let Si = {x ∈ R n: gi(x) ≥ 0} and ∂Si = {x ∈ R n: gi(x) = 0} be its boundary. This paper, as does the subject of semidefin ..."
Abstract - Cited by 47 (10 self)
Let S = {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} be a semialgebraic set defined by multivariate polynomials g_i(x). Assume S is compact, convex and has nonempty interior. Let S_i = {x ∈ R^n : g_i(x) ≥ 0} and let ∂S_i = {x ∈ R^n : g_i(x) = 0} be its boundary. This paper, like the subject of semidefinite programming (SDP), concerns linear matrix inequalities (LMIs). The set S is said to have an LMI representation if it equals the set of solutions to some LMI; it is known that some convex S are not LMI representable [6]. A question arising from [13] (see [6, 14]) is: given S ⊆ R^n, does there exist an LMI representable set Ŝ in some higher-dimensional space R^{n+N} whose projection down onto R^n equals S? Such an S is called semidefinite representable or SDP representable. This paper addresses the SDP representability problem. The main contributions are: (i) Assume the g_i(x) are all concave on S. If the positive definite Lagrange Hessian (PDLH) condition holds, i.e., the Hessian of the Lagrange function for the problem of minimizing any nonzero linear function ℓ^T x on S is positive definite at the minimizer, then S is SDP representable. (ii) If each g_i(x) is either sos-concave (−∇²g_i(x) = W(x)^T W(x) for some matrix polynomial W(x)) or strictly quasi-concave on S, then S is SDP representable. (iii) If each S_i is either sos-convex or poscurv-convex (S_i is compact, convex and has smooth boundary with positive curvature), then S is SDP representable. This also holds for S_i for which ∂S_i ∩ S extends smoothly to the boundary of a poscurv-convex set containing S. (iv) We give the complexity of the matrix Positivstellensätze of Schmüdgen and Putinar, which are critical to the proofs of (i)-(iii).
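For orientation (standard definitions, our notation; the matrices below are not from the abstract): S has an LMI representation if

    S = { x ∈ R^n : A_0 + x_1 A_1 + · · · + x_n A_n ⪰ 0 }

for some real symmetric matrices A_i, where ⪰ 0 means positive semidefinite, and S is SDP representable if

    S = { x ∈ R^n : ∃ y ∈ R^N, A_0 + ∑_{i=1}^n x_i A_i + ∑_{j=1}^N y_j B_j ⪰ 0 },

i.e., S is the projection onto R^n of an LMI representable set in R^{n+N}. The lifting variables y are what make SDP representability strictly more permissive than LMI representability.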
Optimality conditions and finite convergence of Lasserre's hierarchy
- Mathematical Programming
, 2013
"... ar ..."
(Show Context)
Global optimization of polynomials using gradient tentacles and sums of squares
- SIAM Journal on Optimization
"... We consider the problem of computing the global infimum of a real polynomial f on R n. Every global minimizer of f lies on its gradient variety, i.e., the algebraic subset of R n where the gradient of f vanishes. If f attains a minimum on R n, it is therefore equivalent to look for the greatest low ..."
Abstract - Cited by 26 (0 self)
We consider the problem of computing the global infimum of a real polynomial f on R^n. Every global minimizer of f lies on its gradient variety, i.e., the algebraic subset of R^n where the gradient of f vanishes. If f attains a minimum on R^n, it is therefore equivalent to look for the greatest lower bound of f on its gradient variety. Nie, Demmel and Sturmfels recently proved a theorem about the existence of sums of squares certificates for such lower bounds. Based on these certificates, they obtain arbitrarily tight relaxations of the original problem that can be formulated as semidefinite programs and thus solved efficiently. We deal here with the more general case when f is bounded from below but does not necessarily attain a minimum. In this case, the method of Nie, Demmel and Sturmfels might yield completely wrong results. To overcome this problem, we replace the gradient variety by larger semialgebraic subsets of R^n which we call gradient tentacles. It then becomes substantially harder to prove the existence of the necessary sums of squares certificates.
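A standard example (our illustration, not quoted from the abstract) of how the gradient-variety method fails when the infimum is not attained: take

    f(x, y) = x^2 + (xy − 1)^2  on R^2.

Then f > 0 everywhere, yet f(t, 1/t) = t^2 → 0 as t → 0, so the infimum 0 is not attained. The gradient ∇f = (2x + 2y(xy − 1), 2x(xy − 1)) vanishes only at (0, 0), where f = 1, so minimizing f over its gradient variety returns 1 rather than the true infimum 0. The gradient tentacles of this paper enlarge the gradient variety precisely to capture such escaping minimizing sequences.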
An exact Jacobian SDP relaxation for polynomial optimization
- Mathematical Programming, Series A
"... Given polynomials f(x), gi(x), hj(x), we study how to minimize f(x) on the set S = {x ∈ Rn: h1(x) = · · · = hm1(x) = 0, g1(x) ≥ 0,..., gm2(x) ≥ 0}. Let fmin be the minimum of f on S. Suppose S is nonsingular and fmin is achievable on S, which are true generically. This paper proposes a new t ..."
Abstract - Cited by 21 (7 self)
Given polynomials f(x), g_i(x), h_j(x), we study how to minimize f(x) on the set S = {x ∈ R^n : h_1(x) = · · · = h_{m_1}(x) = 0, g_1(x) ≥ 0, ..., g_{m_2}(x) ≥ 0}. Let f_min be the minimum of f on S. Suppose S is nonsingular and f_min is achievable on S, both of which hold generically. This paper proposes a new type of semidefinite programming (SDP) relaxation, the first one that solves this problem exactly. First, we construct new polynomials φ_1, ..., φ_r, using the Jacobian of f, the h_i and the g_j, such that the above problem is equivalent to

    min_{x ∈ R^n}  f(x)
    s.t.  h_i(x) = 0,  φ_j(x) = 0,   1 ≤ i ≤ m_1,  1 ≤ j ≤ r,
          g_1(x)^{ν_1} · · · g_{m_2}(x)^{ν_{m_2}} ≥ 0,   ∀ ν ∈ {0, 1}^{m_2}.

Second, we prove that for all N big enough, the standard N-th order Lasserre SDP relaxation is exact for this equivalent problem, that is, its optimal value equals f_min. Some variations and examples are also shown.
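A minimal illustration of the Jacobian construction (our sketch of the simplest case: one equality constraint h(x) = 0 and no inequalities). Under the paper's nonsingularity assumption, any minimizer satisfies the Lagrange condition ∇f(x) = λ∇h(x) for some λ, equivalently the vanishing of all 2×2 minors of the Jacobian of (f, h):

    φ_{ij}(x) := ∂f/∂x_i · ∂h/∂x_j − ∂f/∂x_j · ∂h/∂x_i = 0,   1 ≤ i < j ≤ n.

Appending these polynomial equations keeps every minimizer feasible (so the optimal value is unchanged when f_min is attained) while cutting the feasible set down to critical points, which is what allows a finite-order Lasserre relaxation to become exact.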
Lower bounds for polynomials using geometric programming
"... We make use of a result of Hurwitz and Reznick [8] [19], and a consequence of this result due to Fidalgo and Kovacec [5], to determine a new sufficient condition for a polynomial f ∈ R[X1,..., Xn] of even degree to be a sum of squares. This result generalizes a result of Lasserre in [10] and a res ..."
Abstract - Cited by 8 (2 self)
We make use of a result of Hurwitz and Reznick [8], [19], and a consequence of this result due to Fidalgo and Kovacec [5], to determine a new sufficient condition for a polynomial f ∈ R[X_1, ..., X_n] of even degree to be a sum of squares. This result generalizes a result of Lasserre in [10] and a result of Fidalgo and Kovacec in [5], and it also generalizes the improvements of these results given in [6]. We apply this result to obtain a new lower bound f_gp for f, and we explain how f_gp can be computed using geometric programming. The lower bound f_gp is generally not as good as the lower bound f_sos introduced by Lasserre [11] and Parrilo and Sturmfels [15], which is computed using semidefinite programming, but a run time comparison shows that, in practice, the computation of f_gp is much faster. The computation is simplest when the highest degree term of f has the form ∑_{i=1}^n a_i X_i^{2d}, a_i > 0, i = 1, ..., n. The lower bounds for f established in [6] are obtained by evaluating the objective function of the geometric program at appropriate feasible points.
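For comparison, the SDP-based bound f_sos mentioned above is simple to set up in practice. Below is a minimal sketch (ours, not from the paper) using the cvxpy modeling library on an example polynomial of our choosing, f(x) = x^4 − 3x^2 + 1, whose true minimum is −5/4 at x^2 = 3/2; since univariate nonnegative polynomials are sums of squares, f_sos equals the true minimum here.

    # Compute f_sos = max{ gamma : f - gamma is a sum of squares } as an SDP.
    import cvxpy as cp

    # Gram-matrix formulation: f(x) - gamma = m(x)^T Q m(x), m(x) = (1, x, x^2)
    Q = cp.Variable((3, 3), PSD=True)   # Q must be positive semidefinite
    gamma = cp.Variable()

    constraints = [
        Q[0, 0] == 1 - gamma,           # constant term:   1 - gamma
        2 * Q[0, 1] == 0,               # x coefficient:   0
        2 * Q[0, 2] + Q[1, 1] == -3,    # x^2 coefficient: -3
        2 * Q[1, 2] == 0,               # x^3 coefficient: 0
        Q[2, 2] == 1,                   # x^4 coefficient: 1
    ]
    prob = cp.Problem(cp.Maximize(gamma), constraints)
    prob.solve()
    print(gamma.value)  # approximately -1.25

The paper's f_gp replaces this semidefinite program by a geometric program over the coefficients of f, trading some tightness for speed.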
LOWER BOUNDS FOR A POLYNOMIAL IN TERMS OF ITS COEFFICIENTS
"... Abstract. We determine new sufficient conditions in terms of the coefficients for a polynomial f ∈ R[X] of degree 2d (d ≥ 1) in n ≥ 1 variables to be a sum of squares of polynomials, thereby strengthening results of Fidalgo and Kovacec [2] and of Lasserre [6]. Exploiting these results, we determine, ..."
Abstract - Cited by 4 (3 self)
Abstract. We determine new sufficient conditions, in terms of the coefficients, for a polynomial f ∈ R[X] of degree 2d (d ≥ 1) in n ≥ 1 variables to be a sum of squares of polynomials, thereby strengthening results of Fidalgo and Kovacec [2] and of Lasserre [6]. Exploiting these results, we determine, for any polynomial f ∈ R[X] of degree 2d whose highest degree term is an interior point in the cone of sums of squares of forms of degree d, a real number r such that f − r is a sum of squares of polynomials. The existence of such a number r was proved earlier by Marshall [8], but no estimates for r were given. We also determine a lower bound for any polynomial f whose highest degree term is positive definite.
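For context (our gloss, not part of the abstract): the highest degree term of f, i.e., the form f_{2d}(X) = ∑_{|α|=2d} f_α X^α, is positive definite when f_{2d}(x) > 0 for all x ≠ 0. This hypothesis forces f(x) → ∞ as ‖x‖ → ∞, since f(x) = f_{2d}(x) + (terms of degree < 2d) and f_{2d}(x) ≥ c‖x‖^{2d} for some c > 0 by compactness of the unit sphere; in particular f is bounded below, so asking for a computable lower bound is meaningful.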
Positive polynomials on unbounded equality-constrained domains
, 2011
"... domains ..."
(Show Context)
INVERSE POLYNOMIAL OPTIMIZATION
, 2012
"... Abstract. We consider the inverse optimization problem associated with the polynomial program f ∗ = min{f(x) : x ∈ K} and a given current feasible solution y ∈ K. We provide a systematic numerical scheme to compute an inverse optimal solution. That is, we compute a polynomial ˜ f (which may be of s ..."
Abstract - Cited by 2 (1 self)
Abstract. We consider the inverse optimization problem associated with the polynomial program f* = min{f(x) : x ∈ K} and a given current feasible solution y ∈ K. We provide a systematic numerical scheme to compute an inverse optimal solution. That is, we compute a polynomial f̃ (which may be of the same degree as f if desired) with the following properties: (a) y is a global minimizer of f̃ on K with a Putinar certificate of an a priori fixed degree bound d, and (b) f̃ minimizes ‖f − f̃‖ (which can be the ℓ1, ℓ2 or ℓ∞ norm of the coefficients) over all polynomials with such properties. Computing f̃ reduces to solving a semidefinite program whose optimal value also provides a bound on how far f(y) is from the unknown optimal value f*. The size of the semidefinite program can be adapted to the computational capabilities available. Moreover, if one uses the ℓ1 norm, then f̃ takes a simple and explicit canonical form. Some variations are also discussed.
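Written out (standard Putinar form, our notation, assuming K = {x : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} as is usual in this setting), property (a) asks for sums of squares σ_0, ..., σ_m with

    f̃(x) − f̃(y) = σ_0(x) + ∑_{j=1}^m σ_j(x) g_j(x),   deg(σ_0), deg(σ_j g_j) ≤ d.

Since the right-hand side is nonnegative on K, this certifies f̃(x) ≥ f̃(y) on K; fixing the degree bound d in advance turns (a) into semidefinite constraints on the coefficients of f̃, which is why the whole inverse problem reduces to an SDP.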
REPRESENTATIONS OF NON-NEGATIVE POLYNOMIALS VIA THE CRITICAL IDEALS
"... Abstract. This paper studies the representations of a non-negative polynomial f on a non-compact semi-algebraic set K modulo its critical ideal. Under the assumption that the semi-algebraic set K is regular and f satisfies the boundary Hessian conditions (BHC) at each zero of f in K. We show that f ..."
Abstract - Cited by 1 (0 self)
Abstract. This paper studies the representations of a non-negative polynomial f on a non-compact semi-algebraic set K modulo its critical ideal. Under the assumption that the semi-algebraic set K is regular and f satisfies the boundary Hessian conditions (BHC) at each zero of f in K, we show that f can be represented as a sum of squares (SOS) of real polynomials modulo its critical ideal whenever f ≥ 0 on K. In particular, we work entirely in the polynomial ring R[X].

1. Introduction. If a polynomial in one variable f(X) ∈ R[X] satisfies f(X) ≥ 0 for all X ∈ R, then f(X) = ∑_{i=1}^m g_i(X)^2 with g_i(X) ∈ R[X], i.e., f is a sum of squares (SOS) in R[X]. In the multivariate case, however, this is false. A counterexample was given by Motzkin in 1967: if f(X, Y) = 1 + X^4 Y^2 + X^2 Y^4 − 3X^2 Y^2, then f(X, Y) ≥ 0 for all X, Y ∈ R, but f is not an SOS in R[X, Y]. To remedy this, we will consider the polynomials that are positive on K, where K is a
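Nonnegativity of the Motzkin polynomial follows from the arithmetic-geometric mean inequality (a standard one-line check, added here for completeness):

    (1 + X^4 Y^2 + X^2 Y^4)/3 ≥ (1 · X^4 Y^2 · X^2 Y^4)^{1/3} = X^2 Y^2,

so 1 + X^4 Y^2 + X^2 Y^4 − 3X^2 Y^2 ≥ 0 for all real X, Y, even though no SOS decomposition in R[X, Y] exists.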