Results 1–10 of 80
Globally optimal estimates for geometric reconstruction problems
In ICCV, 2005
Cited by 43 (11 self)
We introduce a framework for computing statistically optimal estimates for geometric reconstruction problems. While traditional algorithms often suffer from local minima, non-optimality, or a combination of both, we pursue the goal of achieving global solutions of the statistically optimal cost function. Our approach is based on a hierarchy of convex relaxations for solving non-convex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds, and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and, last but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.
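As a concrete illustration, the statistically optimal cost in the triangulation case is the sum of squared reprojection errors, a non-convex rational function of the unknown 3D point; this is the kind of cost the relaxation hierarchy bounds from below. A minimal numpy sketch, with made-up camera matrices rather than the paper's data:

```python
import numpy as np

def reprojection_cost(X, cameras, observations):
    """Sum of squared reprojection errors for a candidate 3D point X.
    Illustrative sketch of the triangulation cost, not the paper's code.
    """
    Xh = np.append(X, 1.0)                 # homogeneous coordinates
    cost = 0.0
    for P, x_obs in zip(cameras, observations):
        proj = P @ Xh                      # project with a 3x4 camera matrix
        cost += np.sum((proj[:2] / proj[2] - x_obs) ** 2)
    return cost

# Two toy cameras observing the point (1, 2, 5) without noise
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 5.0])
obs = [(P @ np.append(X_true, 1.0))[:2] / (P @ np.append(X_true, 1.0))[2]
       for P in (P1, P2)]

print(reprojection_cost(X_true, [P1, P2], obs))   # 0.0 at the true point
```

With noisy observations the cost surface acquires the local minima the paper is concerned with; the convex relaxations certify when the global one has been found.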
Convergent SDP-Relaxations in Polynomial Optimization with Sparsity
 SIAM Journal on Optimization
Cited by 29 (8 self)
Abstract. We consider a polynomial programming problem P on a compact semialgebraic set K ⊂ R^n, described by m polynomial inequalities gj(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP-relaxation of order r has the following two features: (a) The number of variables is O(κ^{2r}), where κ = max[κ1, κ2], with κ1 (resp. κ2) being the maximum number of variables appearing in the monomials of f (resp. appearing in a single constraint gj(X) ≥ 0). (b) The largest size of the LMIs (Linear Matrix Inequalities) is O(κ^r). This is to be compared with the respective number of variables O(n^{2r}) and LMI size O(n^r) in the original SDP-relaxations defined in [11]. Therefore, great computational savings are expected in the case of sparsity in the data {gj, f}, i.e. when κ is small, a frequent case in practical applications of interest. The novelty with respect to [9] is that we prove convergence to the global optimum of P when the sparsity pattern satisfies a condition often encountered in large problems of practical interest, known as the running intersection property in graph theory. In such cases, and as a byproduct, we also obtain a new representation result for polynomials positive on a basic closed semialgebraic set, a sparse version of Putinar’s Positivstellensatz [16].
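The running intersection property can be stated directly on the ordered variable blocks: the overlap of each block with all earlier variables must be contained in a single earlier block. A small illustrative check, assuming blocks are given as Python sets (not the paper's code):

```python
def has_running_intersection(blocks):
    """Check the running intersection property (RIP) for an ordered
    list of variable blocks (sets of variable indices): for each k,
    the intersection of block k with the union of all earlier blocks
    must lie entirely inside some single earlier block.
    """
    for k in range(1, len(blocks)):
        earlier = set().union(*blocks[:k])
        overlap = blocks[k] & earlier
        if not any(overlap <= blocks[j] for j in range(k)):
            return False
    return True

# A chain-like sparsity pattern satisfies RIP ...
print(has_running_intersection([{1, 2}, {2, 3}, {3, 4}]))   # True
# ... while this pattern fails: {1, 3} overlaps two earlier blocks.
print(has_running_intersection([{1, 2}, {3, 4}, {1, 3}]))   # False
```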
Exploiting sparsity in SDP relaxation for sensor network localization
SIAM J. Optim., 2009
Cited by 22 (6 self)
Abstract. A sensor network localization problem can be formulated as a quadratic optimization problem (QOP). For quadratic optimization problems, the semidefinite programming (SDP) relaxation by Lasserre with relaxation order 1 for general polynomial optimization problems (POPs) is known to be equivalent to the sparse SDP relaxation by Waki et al. with relaxation order 1, except for the size and sparsity of the resulting SDP relaxation problems. We show that the sparse SDP relaxation applied to the QOP is at least as strong as the Biswas-Ye SDP relaxation for the sensor network localization problem. A sparse variant of the Biswas-Ye SDP relaxation, which is equivalent to the original Biswas-Ye SDP relaxation, is also derived. Numerical results are compared with the Biswas-Ye SDP relaxation and the edge-based SDP relaxation by Wang et al. We show that the proposed sparse SDP relaxation is faster than the Biswas-Ye SDP relaxation. In fact, the computational efficiency in solving the resulting SDP problems increases as the number of anchors and/or the radio range grows. The proposed sparse SDP relaxation also provides more accurate solutions than the edge-based SDP relaxation when exact distances are given between sensors and anchors and there are only a small number of anchors. Key words. Sensor network localization problem, polynomial optimization problem, semidefinite relaxation, sparsity
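For orientation, the QOP arises from the squared-distance equations between sensor pairs and sensor-anchor pairs. A toy sketch of those residuals (function names and the tiny instance are hypothetical, not from the paper):

```python
import numpy as np

def snl_residuals(positions, anchors, sensor_edges, anchor_edges):
    """Residuals of the squared-distance equations that define the QOP:
    one per measured sensor-sensor and sensor-anchor pair.
    Illustrative sketch; all names and data are made up.
    """
    res = [np.sum((positions[i] - positions[j]) ** 2) - d ** 2
           for i, j, d in sensor_edges]
    res += [np.sum((positions[i] - anchors[a]) ** 2) - d ** 2
            for i, a, d in anchor_edges]
    return np.array(res)

# Tiny instance: sensors at (0,0) and (1,0), one anchor at (0,1)
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
anc = np.array([[0.0, 1.0]])
r = snl_residuals(pos, anc, [(0, 1, 1.0)],
                  [(0, 0, 1.0), (1, 0, np.sqrt(2.0))])
print(r)   # ~[0, 0, 0]: the true positions satisfy every equation
```

The SDP relaxations in the abstract replace the quadratic terms in these equations by entries of a positive semidefinite matrix variable.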
Sum of squares methods for sensor network localization
2006
Cited by 21 (3 self)
We formulate the sensor network localization problem as finding the global minimizer of a quartic polynomial. Then sum of squares (SOS) relaxations can be applied to solve it. However, the general SOS relaxations are too expensive to implement for large problems. Exploiting the special features of this polynomial, we propose a new structured SOS relaxation and discuss its various properties. When distances are given exactly, this SOS relaxation often returns the true sensor locations. At each step of the interior-point methods solving this SOS relaxation, the complexity is O(n^3), where n is the number of sensors. When the distances have small perturbations, we show that the sensor locations given by this SOS relaxation are accurate within a constant factor of the perturbation error under some technical assumptions. The performance of this SOS relaxation is tested on some randomly generated problems.
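The quartic in question sums the squared errors of the distance equations. The sketch below builds it for a toy instance and minimizes it with plain gradient descent from a start near the truth; a generic local method like this can stall in local minima from a bad start, which is what the convex SOS relaxation avoids (all names and data here are illustrative):

```python
import numpy as np

# Toy instance with exact distances: sensors truly at (0,0) and (1,0),
# one anchor at (0,1). Hypothetical data, not from the paper.
anchor = np.array([0.0, 1.0])
edges = [(0, 1, 1.0)]                          # sensor-sensor distance
aedges = [(0, 1.0), (1, np.sqrt(2.0))]         # sensor-anchor distances

def f(xflat):
    """Quartic objective: sum of squared distance-equation errors.
    Its global minimizers are the true sensor locations."""
    x = xflat.reshape(2, 2)
    v = sum((np.sum((x[i] - x[j]) ** 2) - d ** 2) ** 2 for i, j, d in edges)
    v += sum((np.sum((x[i] - anchor) ** 2) - d ** 2) ** 2 for i, d in aedges)
    return v

# Gradient descent with a numerical gradient, started near the truth
x = np.array([0.1, 0.1, 1.1, 0.1])
for _ in range(2000):
    g = np.array([(f(x + 1e-6 * e) - f(x - 1e-6 * e)) / 2e-6
                  for e in np.eye(4)])
    x -= 0.05 * g
print(f(x))   # close to 0
```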
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
SIAM Journal on Optimization, 2008
Cited by 15 (3 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least-squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: Polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
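A cheap way to see why such sparsity helps: minimizing each small summand over its own few variables already yields a lower bound on the global minimum, in the same spirit as the term-wise bounds that sparse SOS relaxations compute and tighten. A toy illustration (the polynomial is made up):

```python
# f(x1, x2, x3) = (x1^2 - 1)^2 + (x2 - x1)^2 + (x3^2 + x3)
# is a sum of "small" polynomials, each in at most two variables.
g1 = lambda x1: (x1 ** 2 - 1) ** 2       # minimum 0
g2 = lambda x1, x2: (x2 - x1) ** 2       # minimum 0
g3 = lambda x3: x3 ** 2 + x3             # minimum -1/4 at x3 = -1/2

# Summing the separate minima gives a lower bound on min f ...
sparse_bound = 0 + 0 + (-0.25)

# ... which in this example is attained, at x = (1, 1, -1/2):
f = lambda x: g1(x[0]) + g2(x[0], x[1]) + g3(x[2])
print(f((1.0, 1.0, -0.5)))   # -0.25, matching the sparse bound
```

In general the term-wise bound can be loose because the small terms share variables; the sparse SOS relaxations account for that coupling.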
Exact Certification of Global Optimality of Approximate Factorizations Via Rationalizing Sums-of-Squares with Floating Point Scalars
2008
Cited by 14 (8 self)
We generalize the technique of Peyrl and Parrilo [Proc. SNC 2007] to computing lower bound certificates for several well-known factorization problems in hybrid symbolic-numeric computation. The idea is to transform a numerical sum-of-squares (SOS) representation of a positive polynomial into an exact rational identity. Our algorithms successfully certify accurate rational lower bounds near the irrational global optima for benchmark approximate polynomial greatest common divisors and multivariate polynomial irreducibility radii from the literature, and factor coefficient bounds in the setting of a model problem by Rump (up to n = 14, factor degree = 13). The numeric SOSes produced by the current fixed-precision semidefinite programming (SDP) packages (SeDuMi, SOSTOOLS, YALMIP) are usually too coarse to allow successful projection to exact SOSes via Maple 11’s exact linear algebra. Therefore, before projection we refine the SOSes by rank-preserving Newton iteration. For smaller problems the starting SOSes for Newton can be guessed without SDP (“SDP-free SOS”), but for larger inputs we additionally appeal to sparsity techniques in our SDP formulation.
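The rationalization step can be illustrated on a one-variable example: take a floating-point Gram matrix, move to nearby rationals, and verify the SOS identity exactly by comparing coefficients. The numeric matrix below is a made-up stand-in for solver output, and simple rounding stands in for the true projection onto the affine coefficient constraints:

```python
from fractions import Fraction

# Approximate Gram matrix for p(x) = x^4 + 2x^2 + 1 in the monomial
# basis m = [1, x, x^2], as an SDP solver might return it
# (hypothetical floating-point output with round-off noise):
Q_num = [[1.0000001, 0.0, 0.9999999],
         [0.0,       0.0, 0.0      ],
         [0.9999999, 0.0, 1.0000001]]

# Rationalize, then check m^T Q m = p EXACTLY: the coefficient of
# x^k is the sum of Q[i][j] over all index pairs with i + j = k.
Q = [[Fraction(round(e)) for e in row] for row in Q_num]
coeffs = [Fraction(0)] * 5
for i in range(3):
    for j in range(3):
        coeffs[i + j] += Q[i][j]
print(coeffs)   # [1, 0, 2, 0, 1] -> exactly x^4 + 2x^2 + 1
```

A full certificate would also verify that the rational Q is positive semidefinite, e.g. via an exact LDL^T factorization.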
A parallel primal-dual interior-point method for semidefinite programs using positive definite matrix completion
 Parallel Computing
Cited by 11 (9 self)
Abstract. A parallel computational method, SDPARA-C, is presented for SDPs (semidefinite programs). It combines two methods, SDPARA and SDPA-C, proposed by the authors, who also developed the software package SDPA. SDPARA is a parallel implementation of SDPA, and it features parallel computation of the elements of the Schur complement equation system and a parallel Cholesky factorization of its coefficient matrix. SDPARA can effectively solve SDPs with a large number of equality constraints; however, it does not handle SDPs with a large-scale matrix variable with similar effectiveness. SDPA-C is a primal-dual interior-point method using the positive definite matrix completion technique of Fukuda et al., and it performs effectively on SDPs with a large-scale matrix variable, but not on those with a large number of equality constraints. SDPARA-C benefits from the strong performance of each of the two methods. Furthermore, SDPARA-C is designed to attain high scalability by parallelizing most of the expensive computations involved in the primal-dual interior-point method. Numerical experiments with the three parallel software packages SDPARA-C, SDPARA and PDSDP by Benson show that SDPARA-C efficiently solves SDPs with a large-scale matrix variable as well as a large number of equality constraints with a small amount of memory.
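The bottleneck being distributed is a symmetric positive definite linear system: each interior-point iteration forms the Schur complement matrix element by element and then Cholesky-factorizes it. A dense, single-process numpy sketch of that solve (the matrix is a random SPD stand-in, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
B = M @ M.T + 6 * np.eye(6)        # SPD stand-in for the Schur complement
r = rng.standard_normal(6)         # right-hand side of the iteration

L = np.linalg.cholesky(B)          # the step SDPARA parallelizes
y = np.linalg.solve(L.T, np.linalg.solve(L, r))   # forward/back substitution
print(np.allclose(B @ y, r))       # True
```

Both the element-wise formation of B and the factorization scale poorly on one processor, which is why distributing them dominates the design of SDPARA and SDPARA-C.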
A note on the representation of positive polynomials with structured sparsity
Archiv der Mathematik 89(5):399–403, 2006
Cited by 8 (0 self)
We consider real polynomials in finitely many variables. Let the variables consist of finitely many blocks that are allowed to overlap in a certain way. Let the solution set of a finite system of polynomial inequalities be given where each inequality involves only variables of one block. We investigate polynomials that are positive on such a set and sparse in the sense that each monomial involves only variables of one block. In particular, we derive a short and direct proof for Lasserre’s theorem on the existence of sums of squares certificates respecting the block structure. The motivation for the results can be found in the literature on numerical methods for global optimization of polynomials that exploit sparsity.
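The sparsity notion here is easy to check computationally: every monomial's support must lie inside a single variable block. A small sketch, representing polynomials as exponent-tuple dictionaries (all names illustrative):

```python
def respects_blocks(poly, blocks, nvars):
    """True if every monomial of poly involves variables of only one
    block. poly maps exponent tuples to coefficients; blocks is a
    list of sets of variable indices. Illustrative sketch.
    """
    for expo in poly:
        support = {i for i in range(nvars) if expo[i] > 0}
        if support and not any(support <= b for b in blocks):
            return False
    return True

blocks = [{0, 1}, {1, 2}]                           # overlapping blocks
p = {(2, 0, 0): 1, (0, 1, 1): -3, (0, 0, 0): 5}     # x1^2 - 3 x2 x3 + 5
print(respects_blocks(p, blocks, 3))                # True
q = {(1, 0, 1): 1}                                  # x1 x3 spans both blocks
print(respects_blocks(q, blocks, 3))                # False
```

The theorem then says that a polynomial positive on the block-structured set admits an SOS certificate whose pieces respect these same blocks.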
Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion
2010
Semidefinite optimization approaches for satisfiability and maximum-satisfiability problems
 J. Satisf. Bool. Model. Comput
Cited by 7 (3 self)
Semidefinite optimization, commonly referred to as semidefinite programming, has been a remarkably active area of research in optimization during the last decade. For combinatorial problems in particular, semidefinite programming has had a truly significant impact. This paper surveys some of the results obtained in the application of semidefinite programming to satisfiability and maximum-satisfiability problems. The approaches presented in some detail include the groundbreaking approximation algorithm of Goemans and Williamson for MAX-2-SAT, the Gap relaxation of de Klerk, van Maaren and Warners, and strengthenings of the Gap relaxation based on the Lasserre hierarchy of semidefinite liftings for polynomial optimization problems. We include theoretical and computational comparisons of the aforementioned semidefinite relaxations for the special case of 3-SAT, and conclude with a review of the most recent results in the application of semidefinite programming to SAT and MAX-SAT.
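The rounding step of the Goemans-Williamson algorithm takes the unit vectors produced by the solved SDP relaxation and assigns truth values by cutting them with a random hyperplane. A minimal numpy sketch (the vectors below are made up rather than taken from an actual SDP solve):

```python
import numpy as np

def hyperplane_round(V, rng):
    """Round unit vectors (rows of V) to +/-1 assignments by the sign
    of their inner product with a random Gaussian direction -- the
    Goemans-Williamson rounding step. Illustrative sketch only.
    """
    g = rng.standard_normal(V.shape[1])   # random hyperplane normal
    return np.sign(V @ g)

# Stand-in vectors: two antipodal vectors always land on opposite
# sides of the hyperplane, i.e. they get opposite truth values.
V = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
rng = np.random.default_rng(42)
print(hyperplane_round(V, rng))
```

The analysis shows that, in expectation over the hyperplane, this rounding recovers a constant fraction of the SDP objective, giving the approximation guarantee for MAX-2-SAT and MAX-CUT.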