Results 1–10 of 25
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
SIAM Journal on Optimization, 2008
Abstract

Cited by 16 (2 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
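The decomposition idea behind this abstract can be sketched with a toy example. If f = f1 + f2 + f3 where each term involves only a few variables, then min f ≥ (min f1) + (min f2) + (min f3), and each small minimization is cheap. This is only an illustrative bound computed by brute force; the paper's sparse SOS relaxations obtain much tighter bounds via semidefinite programming. All function and variable names below are illustrative, not from the paper.

```python
import itertools

def grid_min(g, dim, lo=-2.0, hi=2.0, steps=41):
    """Brute-force minimum of g over a uniform grid in [lo, hi]^dim."""
    pts = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(g(*p) for p in itertools.product(pts, repeat=dim))

# f(x1,x2,x3) = (x1 - 1)^2 + (x2 - x1)^2 + (x3 - x2)^2: each summand is "small",
# involving at most two of the three variables.
f1 = lambda x1: (x1 - 1) ** 2
f2 = lambda x1, x2: (x2 - x1) ** 2
f3 = lambda x2, x3: (x3 - x2) ** 2

# Cheap sparse lower bound: sum of the term-wise minima (each over <= 2 variables).
lower_bound = grid_min(f1, 1) + grid_min(f2, 2) + grid_min(f3, 2)

# Expensive dense minimization over all three variables jointly.
full_min = grid_min(lambda x1, x2, x3: f1(x1) + f2(x1, x2) + f3(x2, x3), 3)

print(lower_bound, full_min)  # both 0.0 here, since the terms share a minimizer (1,1,1)
```

The gap between `lower_bound` and `full_min` is what the sparse SOS machinery closes while still working only with the small variable groups.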
Exact Certification of Global Optimality of Approximate Factorizations via Rationalizing Sums-of-Squares with Floating Point Scalars
, 2008
Abstract

Cited by 15 (9 self)
We generalize the technique of Peyrl and Parrilo [Proc. SNC 2007] to compute lower bound certificates for several well-known factorization problems in hybrid symbolic-numeric computation. The idea is to transform a numerical sum-of-squares (SOS) representation of a positive polynomial into an exact rational identity. Our algorithms successfully certify accurate rational lower bounds near the irrational global optima for benchmark approximate polynomial greatest common divisors and multivariate polynomial irreducibility radii from the literature, and factor coefficient bounds in the setting of a model problem by Rump (up to n = 14, factor degree = 13). The numeric SOSes produced by the current fixed-precision semidefinite programming (SDP) packages (SeDuMi, SOSTOOLS, YALMIP) are usually too coarse to allow successful projection to exact SOSes via Maple 11’s exact linear algebra. Therefore, before projection we refine the SOSes by rank-preserving Newton iteration. For smaller problems the starting SOSes for Newton can be guessed without SDP (“SDP-free SOS”), but for larger inputs we additionally appeal to sparsity techniques in our SDP formulation.
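The core "rationalize a numeric SOS" step can be illustrated on a hypothetical toy (this is not the paper's algorithm, and the noisy scalar values are assumed): a fixed-precision solver returns a floating-point writing of p(x) = x² − 2x + 2 as (a·x + b)² + c, and we recover an exact certificate by projecting each scalar to a nearby rational and re-checking the polynomial identity in exact arithmetic.

```python
from fractions import Fraction

# Noisy scalars as they might come from a fixed-precision SDP solver (assumed values).
a_num, b_num, c_num = 1.0000004, -0.9999997, 0.9999995

# Project each floating-point scalar to a nearby rational.
a = Fraction(a_num).limit_denominator(1000)   # -> 1
b = Fraction(b_num).limit_denominator(1000)   # -> -1
c = Fraction(c_num).limit_denominator(1000)   # -> 1

# Expand (a*x + b)^2 + c into coefficients [const, x, x^2] with exact arithmetic.
sos_coeffs = [b * b + c, 2 * a * b, a * a]
p_coeffs = [Fraction(2), Fraction(-2), Fraction(1)]   # p(x) = x^2 - 2x + 2

exact = (sos_coeffs == p_coeffs)
print(exact)   # True: (x - 1)^2 + 1 = p(x) holds exactly, certifying p(x) >= 1
```

When the numeric SOS is too coarse, naive rounding fails the exact identity check, which is why the abstract refines the SOS by Newton iteration before projecting.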
A prolongation–projection algorithm for computing the finite real ...
Theoretical Computer Science, 2009
Regularization methods for sum of squares relaxations in large scale polynomial optimization
Department of Mathematics, University of California, 2009
Abstract

Cited by 3 (0 self)
We study how to solve sum of squares (SOS) and Lasserre’s relaxations for large scale polynomial optimization. When interior-point type methods are used, typically only small or moderately large problems can be solved. This paper proposes regularization-type methods that can solve significantly larger problems. We first describe these methods for general conic semidefinite optimization, and then apply them to solve large scale polynomial optimization. Their efficiency is demonstrated by extensive numerical computations. In particular, a general dense quartic polynomial optimization problem with 100 variables can be solved on a regular computer, which is almost impossible with prior SOS solvers. Key words: polynomial optimization, regularization methods, semidefinite programming, sum of squares, Lasserre’s relaxation. AMS subject classification: 65K05, 90C22
POSITIVITY AND OPTIMIZATION FOR SEMIALGEBRAIC FUNCTIONS
, 2009
Abstract

Cited by 3 (0 self)
We describe algebraic certificates of positivity for functions belonging to a finitely generated algebra of Borel measurable functions, with particular emphasis on algebras generated by semialgebraic functions. In this case, the standard global optimization problem with constraints given by elements of the same algebra reduces, via a natural change of variables, to the better understood case of polynomial optimization. A collection of simple examples and numerical experiments complements the theoretical parts of the article.
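The change-of-variables reduction can be sketched on a toy semialgebraic function (this is a simplified stand-in for the paper's construction; the specific objective is assumed for illustration). To minimize f(x) = |x| + (x − 1)², introduce a new variable y constrained by y² = x² and y ≥ 0, so y plays the role of |x| and the lifted problem is polynomial in (x, y). We check by brute force that the two formulations agree.

```python
def grid(lo, hi, steps=2001):
    """Uniform grid on [lo, hi]."""
    return [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]

# Direct minimization of the semialgebraic objective |x| + (x - 1)^2.
direct = min(abs(x) + (x - 1) ** 2 for x in grid(-2, 2))

# Lifted polynomial formulation: minimize y + (x - 1)^2 subject to
# y^2 = x^2, y >= 0. For each x the constraints admit the unique y = |x|.
lifted = min(
    y + (x - 1) ** 2
    for x in grid(-2, 2)
    for y in (abs(x),)   # the unique feasible y for this x
)

print(abs(direct - lifted) < 1e-12)   # True: the reduction preserves the optimum
```

Once in the lifted polynomial form, the problem is amenable to standard SOS/moment relaxation machinery, which is the point of the reduction in the abstract.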
Solutions of Polynomial Systems derived from the Steady Cavity Flow Problem
, 2008
Abstract

Cited by 2 (2 self)
We propose a general algorithm to enumerate all solutions of a zero-dimensional polynomial system with respect to a given cost function. The algorithm is developed and used to study a polynomial system obtained by discretizing the steady cavity flow problem in two dimensions. The key technique on which our algorithm is based is solving polynomial optimization problems via sparse semidefinite programming relaxations (SDPR) [20], which have been adopted successfully to solve reaction-diffusion boundary value problems in [13]. The cost function to be minimized is derived from discretizing the fluid’s kinetic energy. The enumeration algorithm’s solutions are shown to converge to the minimal kinetic energy solutions for SDPR of increasing order. We demonstrate the algorithm with SDPR of first and second order on polynomial systems for different scenarios of the cavity flow problem and succeed in deriving the k smallest kinetic energy solutions. The question of whether these solutions converge to solutions of the steady cavity flow problem is discussed, and we pose a conjecture for the minimal energy solution for increasing Reynolds number.
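The enumeration task itself (rank all isolated solutions of a zero-dimensional system by a cost and return the k smallest) can be shown on a toy system whose solution set we can list directly. This is a stand-in for the abstract's SDPR-based method, and the cost function below is assumed for illustration, not the paper's kinetic-energy discretization.

```python
import itertools

# Zero-dimensional system: x^2 = 1, y^2 = 4  ->  four isolated real solutions.
solutions = [(x, y) for x, y in itertools.product((-1, 1), (-2, 2))]

def cost(sol):
    """Hypothetical energy-style cost, for illustration only."""
    x, y = sol
    return x * x + y * y + x * y

# Enumerate the k solutions of smallest cost.
k = 2
k_smallest = sorted(solutions, key=cost)[:k]
print(k_smallest)   # [(-1, 2), (1, -2)] -- the two cost-3 solutions
```

In the paper's setting the solution list is not available up front; each successive solution is extracted by solving a sparse SDP relaxation that minimizes the cost over the remaining solution set.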
Polynomial shape from shading
 CVPR
Abstract

Cited by 2 (1 self)
We examine the shape from shading problem without boundary conditions as a polynomial system. This view allows, in generic cases, a complete solution for ideal polyhedral objects. For the general case we propose a semidefinite programming relaxation procedure, and an exact line search iterative procedure with a new smoothness term that favors folds at edges. We use this numerical technique to inspect shading ambiguities.
Randomization, Sums of Squares, Near-Circuits, and Faster Real Root Counting
Contemporary Mathematics
Abstract

Cited by 1 (0 self)
Suppose that f is a real univariate polynomial of degree D with exactly 4 monomial terms. We present a deterministic algorithm of complexity polynomial in log D that, for most inputs, counts the number of real roots of f. The best previous algorithms have complexity superlinear in D. We also discuss connections to sums of squares and A-discriminants, including explicit obstructions to expressing positive definite sparse polynomials as sums of squares of few sparse polynomials. Our key theoretical tool is the introduction of efficiently computable chamber cones, which bound regions in coefficient space where the number of real roots of f can be computed easily. Much of our theory extends to n-variate (n+3)-nomials.
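Why the number of terms, rather than the degree D, should govern real root counting can be seen from the classical Descartes bound (this is background intuition, not the paper's chamber-cone algorithm): for a polynomial with t monomial terms, the number of positive real roots is at most the number of sign changes in its coefficient sequence, hence at most t − 1 regardless of D.

```python
def descartes_bound(terms):
    """Descartes' bound on the number of positive real roots.
    terms: list of (exponent, coefficient) pairs with nonzero coefficients."""
    coeffs = [c for _, c in sorted(terms)]   # order coefficients by exponent
    return sum(1 for a, b in zip(coeffs, coeffs[1:]) if a * b < 0)

# f(x) = x^1000 - 3x^7 + 2x - 5: degree 1000, but only 4 monomial terms.
f = [(1000, 1), (7, -3), (1, 2), (0, -5)]
print(descartes_bound(f))   # 3 sign changes -> at most 3 positive real roots
```

Descartes only bounds the count; the abstract's contribution is an algorithm that, for most inputs, determines the exact count in time polynomial in log D.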
A “JOINT+MARGINAL” APPROACH TO PARAMETRIC POLYNOMIAL OPTIMIZATION
, 905
Abstract

Cited by 1 (1 self)
Given a compact parameter set Y ⊂ R^p, we consider polynomial optimization problems (P_y) on R^n whose description depends on the parameter y ∈ Y. We assume that one can compute all moments of some probability measure ϕ on Y, absolutely continuous with respect to the Lebesgue measure (e.g., Y is a box or a simplex and ϕ is uniformly distributed). We then provide a hierarchy of semidefinite relaxations whose associated sequence of optimal solutions converges to the moment vector of a probability measure that encodes all information about all global optimal solutions x*(y) of P_y, as y ranges over Y. In particular, one may approximate as closely as desired any polynomial functional of the optimal solutions, such as their ϕ-mean. In addition, using this knowledge of moments, the measurable function y ↦ x*_k(y) of the k-th coordinate of optimal solutions can be estimated, e.g., by maximum entropy methods. Also, for a boolean variable x_k, one may approximate as closely as desired its persistency ϕ({y : x*_k(y) = 1}), i.e., the probability that in an optimal solution x*(y) the coordinate x*_k(y) takes the value 1. Last but not least, from an optimal solution of the dual semidefinite relaxations, one obtains a sequence of polynomial (resp. piecewise polynomial) lower approximations with L1(ϕ) (resp. almost uniform) convergence to the optimal value function.
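The target quantity of the "joint+marginal" approach, a ϕ-functional of the parametric optimizers, can be made concrete on a toy problem solvable in closed form (this problem and its optimizer are assumed for illustration, not taken from the paper). For P_y : min_x (x − y²)² on R, the optimizer is x*(y) = y², and with ϕ uniform on Y = [0, 1] the ϕ-mean of the optimizers is ∫₀¹ y² dy = 1/3. Here we just estimate it by quadrature; in general this is the kind of quantity the SDP hierarchy approximates without knowing x*(y) explicitly.

```python
def x_star(y):
    """Closed-form optimizer of the toy parametric problem P_y: min_x (x - y^2)^2."""
    return y * y

# phi-mean of the optimizers for phi uniform on [0, 1], via the midpoint rule.
n = 100000
phi_mean = sum(x_star((i + 0.5) / n) for i in range(n)) / n

print(round(phi_mean, 6))   # 0.333333, matching the exact value 1/3
```

For realistic problems x*(y) has no closed form; the hierarchy instead converges to the moments of the measure concentrated on the graph of y ↦ x*(y), from which such functionals are read off.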
unknown title
Abstract
The notion of convexity underlies important results in many parts of mathematics such as optimization, analysis, combinatorics, probability and number theory. The geometric foundations of the theory of convex sets date back to work of Minkowski, Carathéodory, and Fenchel around 1900. Since then, this area has expanded in a large number of directions and now includes topics such as high-dimensional spaces, convex analysis, polyhedral geometry, computational convexity, approximation methods and others. In the context of optimization, both theory and empirical evidence show that problems with convex constraints allow efficient algorithms. Many applications in the sciences and engineering involve optimization, and it is always extremely advantageous when the underlying feasible regions are convex and have practically useful representations as convex sets. A situation in which convexity has been well understood is the study of convex polyhedra, which are the solution sets of finitely many linear inequalities [27, 86]. A context in algebraic geometry in which convexity arises is the theory of toric varieties. These are algebraic varieties derived from polyhedra [49, 73]. Both convex polyhedra and toric varieties have satisfactory computational techniques associated with them. Linear optimization over polyhedra is linear programming, which admits interior-point algorithms that run in polynomial time. More generally, polyhedra can be