Results 1 - 10 of 26
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
- SIAM Journal on Optimization, 2008
"... This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small ” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxa ..."
Cited by 23 (4 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and in sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
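As a rough illustration of the sparsity being exploited (our own notation, not necessarily the paper's exact relaxation): when f splits into low-dimensional summands, each SOS certificate can be restricted to the variables of its own summand, which shrinks the SDP blocks.

$$
f(x)=\sum_{k=1}^{m} f_k\bigl(x_{\Delta_k}\bigr),\quad \Delta_k\subseteq\{1,\dots,n\}\ \text{small},\qquad
f^{\mathrm{sp}}_{\mathrm{sos}}:=\sup\Bigl\{\gamma:\ f-\gamma=\sum_{k=1}^{m}\sigma_k,\ \ \sigma_k\ \text{an SOS in }\mathbb{R}[x_{\Delta_k}]\Bigr\}\ \le\ f_{\mathrm{sos}}\ \le\ f_{\min},
$$

so the block sizes of the resulting SDP depend on the |Δ_k| and the degrees of the f_k rather than on n.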
Representation of non-negative polynomials, degree bounds and applications to optimization
2006
"... Natural sufficient conditions for a polynomial to have a local minimum at a point are considered. These conditions tend to hold with probability 1. It is shown that polynomials satisfying these conditions at each minimum point have nice presentations in terms of sums of squares. Applications are giv ..."
Cited by 22 (2 self)
Natural sufficient conditions for a polynomial to have a local minimum at a point are considered. These conditions tend to hold with probability 1. It is shown that polynomials satisfying these conditions at each minimum point have nice presentations in terms of sums of squares. Applications are given to optimization on a compact set and also to global optimization. In many cases, there are degree bounds for such presentations. These bounds are of theoretical interest, but they appear to be too large to be of much practical use at present. In the final section, other more concrete degree bounds are obtained which ensure at least that the feasible set of solutions is not empty.
An exact Jacobian SDP relaxation for polynomial optimization
- Mathematical Programming, Series A
"... Given polynomials f(x), gi(x), hj(x), we study how to minimize f(x) on the set S = {x ∈ Rn: h1(x) = · · · = hm1(x) = 0, g1(x) ≥ 0,..., gm2(x) ≥ 0}. Let fmin be the minimum of f on S. Suppose S is nonsingular and fmin is achievable on S, which are true generically. This paper proposes a new t ..."
Cited by 21 (7 self)
Given polynomials f(x), h_1(x), ..., h_{m1}(x), g_1(x), ..., g_{m2}(x), we study how to minimize f(x) on the set S = {x ∈ R^n : h_1(x) = · · · = h_{m1}(x) = 0, g_1(x) ≥ 0, ..., g_{m2}(x) ≥ 0}. Let f_min be the minimum of f on S. Suppose S is nonsingular and f_min is achievable on S, both of which hold generically. This paper proposes a new type of semidefinite programming (SDP) relaxation, the first one that solves this problem exactly. First, we construct new polynomials ϕ_1, ..., ϕ_r, using the Jacobian of f, the h_i and the g_j, such that the above problem is equivalent to minimizing f(x) over x ∈ R^n subject to h_i(x) = 0 (1 ≤ i ≤ m1), ϕ_j(x) = 0 (1 ≤ j ≤ r), and g_1(x)^{ν_1} · · · g_{m2}(x)^{ν_{m2}} ≥ 0 for all ν ∈ {0, 1}^{m2}. Second, we prove that for all N big enough, the standard N-th order Lasserre SDP relaxation is exact for this equivalent problem, that is, its optimal value equals f_min. Some variations and examples are also shown.
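To make the role of the Jacobian concrete, here is an illustrative special case in our own notation (a sketch of the flavor of the construction, not the paper's exact definition of the ϕ_j): with n = 2, no equality constraints and a single inequality g_1(x) ≥ 0, one natural Jacobian-based polynomial is

$$
\varphi_1(x)=\det\begin{pmatrix}\partial f/\partial x_1 & \partial f/\partial x_2\\ \partial g_1/\partial x_1 & \partial g_1/\partial x_2\end{pmatrix},
$$

which vanishes at every minimizer of f on S: at an interior minimizer ∇f = 0, and at a boundary minimizer ∇f is parallel to ∇g_1 by the KKT conditions, so appending ϕ_1(x) = 0 does not cut off any minimizer.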
An effective implementation of a symbolic-numeric cylindrical algebraic decomposition for quantifier elimination.
- In Proceedings of the 2009 International Workshop on Symbolic-Numeric Computation, 2009
"... ABSTRACT With many applications in engineering and in scientific fields, quantifier elimination (QE) has been attracting more attention these days. Cylindrical algebraic decomposition (CAD) is used as a basis for a general QE algorithm. We propose an effective symbolic-numeric cylindrical algebraic ..."
Cited by 11 (0 self)
With many applications in engineering and scientific fields, quantifier elimination (QE) has been attracting increasing attention. Cylindrical algebraic decomposition (CAD) is used as the basis for a general QE algorithm. We propose an effective symbolic-numeric cylindrical algebraic decomposition (SNCAD) algorithm for solving polynomial optimization problems. The main ideas are a bounded CAD construction approach and the utilization of sign information. Bounded CAD construction builds the CAD only in restricted admissible regions, which removes redundant projection factors and avoids lifting cells where the truth value is constant over the region. By utilizing sign information we can avoid symbolic computation in the lifting phase. Implementation techniques that help reduce the computing time are also presented. We have examined our implementation on many example problems; the experimental results show that it significantly improves efficiency compared to our previous work.
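For orientation, a textbook QE instance (not taken from the paper) showing what a QE algorithm produces: the quantified variable is eliminated in favor of a polynomial condition on the remaining parameters,

$$
\exists x\in\mathbb{R}:\ x^{2}+b\,x+c\le 0 \quad\Longleftrightarrow\quad b^{2}-4c\ge 0.
$$

Polynomial optimization fits the same mold: the optimal value of min_x f(x) is the largest y for which the formula ∀x, f(x) ≥ y remains true after x is eliminated.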
Global Optimization of Polynomials Using Generalized Critical Values and Sums of Squares
2010
"... Let ¯ X = [X1,..., Xn] and f ∈ R [ ¯ X]. We consider the problem of computing the global infimum of f when f is bounded below. For A ∈ GLn(C), we denote by f A the polynomial f(A ¯ X). Fix a number M ∈ R greater than infx∈Rn f(x). We prove that there exists a Zariski-closed subset A � GLn(C) such t ..."
Cited by 11 (1 self)
Let X̄ = [X_1, ..., X_n] and f ∈ R[X̄]. We consider the problem of computing the global infimum of f when f is bounded below. For A ∈ GL_n(C), we denote by f^A the polynomial f(A X̄). Fix a number M ∈ R greater than inf_{x ∈ R^n} f(x). We prove that there exists a Zariski-closed subset 𝒜 ⊊ GL_n(C) such that for all A ∈ GL_n(Q) \ 𝒜, we have f^A ≥ 0 on R^n if and only if, for all ε > 0, there exist sums of squares of polynomials s and t in R[X̄] and polynomials φ_i ∈ R[X̄] such that f^A + ε = s + t(M − f^A) + Σ_{1 ≤ i ≤ n−1} φ_i ∂f^A/∂X_i. Hence we can formulate the original optimization problem as semidefinite programs which can be solved efficiently in Matlab. Some numerical experiments are given. We also discuss how to exploit the sparsity of the SDP problems to overcome their ill-conditioning when the infimum is not attained.
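A standard example (our own, not from the paper) of an infimum that is not attained, which is the situation the last sentence refers to:

$$
f(x,y)=(xy-1)^{2}+y^{2}>0\ \text{on }\mathbb{R}^{2},\qquad \inf_{\mathbb{R}^{2}}f=0\quad(\text{take }y=1/x,\ x\to\infty),
$$

so the infimum is only approached along an unbounded sequence and no minimizer exists.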
A new look at nonnegativity on closed sets and polynomial optimization
- SIAM J. Optim., 2011
"... ar ..."
(Show Context)
NEW APPROXIMATIONS FOR THE CONE OF COPOSITIVE MATRICES AND ITS DUAL
"... Abstract. We provide convergent hierarchies for the convex cone C of copositive matrices and its dual C ∗ , the cone of completely positive matrices. In both cases the corresponding hierarchy consists of nested spectrahedra and provide outer (resp. inner) approximations for C (resp. for its dual C ∗ ..."
Cited by 4 (0 self)
We provide convergent hierarchies for the convex cone C of copositive matrices and its dual C∗, the cone of completely positive matrices. In both cases the corresponding hierarchy consists of nested spectrahedra and provides outer (resp. inner) approximations for C (resp. for its dual C∗), thus complementing previous inner (resp. outer) approximations for C (resp. for C∗). In particular, both the inner and the outer approximations have a very simple interpretation. Finally, the extension to K-copositivity and K-complete positivity for a closed convex cone K is straightforward.
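For readers new to these cones, here is a small numerical sketch using cvxpy (our own illustration; it tests only the classical inner approximation of C by sums of a positive semidefinite and an entrywise nonnegative matrix, not the hierarchies constructed in the paper; the helper name is an assumption):

```python
# Sketch: membership test for the simplest classical inner approximation of the
# copositive cone C, namely {P + N : P positive semidefinite, N >= 0 entrywise}.
# The Horn matrix below is the classical example of a copositive matrix that lies
# outside this set, which is why finer hierarchies are needed.
import cvxpy as cp
import numpy as np

def in_psd_plus_nonneg(A: np.ndarray) -> bool:
    """Feasibility SDP: does A decompose as P + N with P PSD and N >= 0?"""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    N = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(0), [P >> 0, N >= 0, P + N == A])
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

horn = np.array([[ 1, -1,  1,  1, -1],
                 [-1,  1, -1,  1,  1],
                 [ 1, -1,  1, -1,  1],
                 [ 1,  1, -1,  1, -1],
                 [-1,  1,  1, -1,  1]], dtype=float)
print(in_psd_plus_nonneg(horn))   # expected: False (up to solver tolerances)
```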
CERTIFIED RELAXATION FOR POLYNOMIAL OPTIMIZATION ON SEMI-ALGEBRAIC SETS
2013
"... In this paper, we describe a relaxation method to compute the minimal critical value of a real polynomial function on a semialgebraic set S and the ideal defining the points at which the minimal critical value is reached. We show that any relaxation hierarchy which is the projection of the Karush-K ..."
Cited by 2 (0 self)
In this paper, we describe a relaxation method to compute the minimal critical value of a real polynomial function on a semialgebraic set S, together with the ideal defining the points at which this minimal critical value is reached. We show that any relaxation hierarchy obtained as a projection of the Karush-Kuhn-Tucker (KKT) relaxation stops in a finite number of steps, and that the ideal defining the minimizers is generated by the kernel of the associated moment matrix at that degree. Assuming the minimizer ideal is zero-dimensional, we give a new criterion to detect when the minimum is reached, and we prove that this criterion is satisfied at a sufficiently high degree. This exploits a new representation of positive polynomials as elements of the preordering modulo the KKT ideal, which involves only polynomials in the initial set of variables.
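As background for the terminology (standard definitions in our own notation; the paper's exact construction may differ): for min f(x) subject to h_i(x) = 0 and g_j(x) ≥ 0, the Karush-Kuhn-Tucker system in the variables (x, λ, μ) is

$$
\nabla f(x)=\sum_i \lambda_i\,\nabla h_i(x)+\sum_j \mu_j\,\nabla g_j(x),\qquad h_i(x)=0,\qquad \mu_j\,g_j(x)=0,
$$

and the ideal generated by these polynomials is the KKT ideal; projecting its variety onto the x-coordinates keeps exactly the points at which some multipliers certify criticality, which is the kind of projection referred to above.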
PROBABILISTIC ALGORITHM FOR POLYNOMIAL OPTIMIZATION OVER A REAL ALGEBRAIC SET
2013
"... Let f,f1,...,fs be polynomials with rational coefficients in the indeterminates X = X1,...,Xn of maximum degree D and V be the set of common complex solutions of F = (f1,...,fs). We give an algorithm which, up to some regularity assumptions on F, computes an exact representation of the global infi ..."
Cited by 2 (2 self)
Let f, f_1, ..., f_s be polynomials with rational coefficients in the indeterminates X = X_1, ..., X_n of maximum degree D, and let V be the set of common complex solutions of F = (f_1, ..., f_s). We give an algorithm which, up to some regularity assumptions on F, computes an exact representation of the global infimum f⋆ = inf_{x ∈ V ∩ R^n} f(x), i.e. a univariate polynomial vanishing at f⋆ and an isolating interval for f⋆. Furthermore, this algorithm decides whether f⋆ is reached and, if so, it returns x⋆ ∈ V ∩ R^n such that f(x⋆) = f⋆. The algorithm is probabilistic. It makes use of the notion of polar varieties. Its complexity is essentially cubic in (sD)^n and linear in the complexity of evaluating the input. This fits within the best known deterministic complexity class D^{O(n)}. We report on some practical experiments with a first implementation, which is available as a Maple package. It appears that it can tackle global optimization problems that were unreachable by previous exact algorithms and can manage instances that are hard to solve with purely numeric techniques. As far as we know, even under the extra genericity assumptions on the input, it is the first probabilistic algorithm that combines practical efficiency with good control of complexity for this problem.
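To make the output format concrete, here is a hand-worked toy instance (our own, not from the paper), taking s = 0 so that V ∩ R^n = R: for f(x) = x^4 − x, the infimum f⋆ = −3 · 4^{−4/3} satisfies

$$
256\,(f^{\star})^{3}+27=0,
$$

so a valid exact representation is the univariate polynomial 256 t^3 + 27 together with the isolating interval [−1/2, −2/5]; since the infimum is reached, the algorithm would additionally report the minimizer x⋆ = 4^{−1/3}.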