Results 1–10 of 14
Optimization Strategies for the Approximate GCD Problem
 In Proc. ISSAC '98
, 1998
Cited by 21 (2 self)

Abstract:
We describe algorithms for computing the greatest common divisor (GCD) of two univariate polynomials with inexactly known coefficients. Assuming that an estimate for the GCD degree is available (e.g., using an SVD-based algorithm), we formulate and solve a nonlinear optimization problem in order to determine the coefficients of the "best" GCD. We discuss various issues related to the implementation of the algorithms and present some preliminary test results.
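The SVD-based degree estimate mentioned in the abstract is commonly implemented via the rank deficiency of the Sylvester matrix: deg gcd(f, g) = m + n − rank S(f, g). A minimal NumPy sketch of that pre-step (function names and the tolerance are ours, not from the paper):

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                  # n shifted copies of f's coefficients
        S[i, i:i + m + 1] = f
    for i in range(m):                  # m shifted copies of g's coefficients
        S[n + i, i:i + n + 1] = g
    return S

def gcd_degree_estimate(f, g, tol=1e-8):
    """Estimate deg gcd(f, g) as the rank deficiency of the Sylvester matrix."""
    s = np.linalg.svd(sylvester(f, g), compute_uv=False)
    return int(np.sum(s < tol * s[0]))
```

For example, f = (x−1)(x−2) and g = (x−1)(x−3) share one root, so the 4×4 Sylvester matrix loses exactly one rank.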
Exact Certification of Global Optimality of Approximate Factorizations Via Rationalizing Sums-of-Squares with Floating Point Scalars
, 2008
Cited by 15 (9 self)

Abstract:
We generalize the technique by Peyrl and Parillo [Proc. SNC 2007] to computing lower bound certificates for several well-known factorization problems in hybrid symbolic-numeric computation. The idea is to transform a numerical sum-of-squares (SOS) representation of a positive polynomial into an exact rational identity. Our algorithms successfully certify accurate rational lower bounds near the irrational global optima for benchmark approximate polynomial greatest common divisors and multivariate polynomial irreducibility radii from the literature, and factor coefficient bounds in the setting of a model problem by Rump (up to n = 14, factor degree = 13). The numeric SOSes produced by the current fixed precision semidefinite programming (SDP) packages (SeDuMi, SOSTOOLS, YALMIP) are usually too coarse to allow successful projection to exact SOSes via Maple 11's exact linear algebra. Therefore, before projection we refine the SOSes by rank-preserving Newton iteration. For smaller problems the starting SOSes for Newton can be guessed without SDP ("SDP-free SOS"), but for larger inputs we additionally appeal to sparsity techniques in our SDP formulation.
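The projection step at the heart of this approach can be illustrated in miniature: round the floating-point coefficients of a numeric SOS to nearby rationals, then verify the lower-bound identity exactly in rational arithmetic. This toy sketch (our own function names and a hand-picked example) omits the SDP and Newton-refinement stages entirely:

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (highest degree first)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += Fraction(ai) * Fraction(bj)
    return out

def rationalize(coeffs, max_den=10**6):
    """Round floating-point coefficients to nearby small rationals."""
    return [Fraction(c).limit_denominator(max_den) for c in coeffs]

def certify_lower_bound(p, r, squares):
    """Exactly verify the identity p - r == sum of the given squares."""
    acc = [Fraction(0)] * len(p)
    for s in squares:
        sq = poly_mul(s, s)
        for i, c in enumerate(sq):      # align at the constant term
            acc[len(p) - len(sq) + i] += c
    target = [Fraction(c) for c in p]
    target[-1] -= Fraction(r)
    return acc == target
```

For instance, a numeric SOS for x² − 2x + 2 ≥ 1 might return the square (0.9999999·x − 1.0000001)²; rationalizing recovers (x − 1)² and the certificate checks exactly.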
Approximate greatest common divisors of several polynomials with linearly constrained coefficients and singular polynomials
 Manuscript
, 2006
Cited by 14 (9 self)

Abstract:
We consider the problem of computing minimal real or complex deformations to the coefficients in a list of relatively prime real or complex multivariate polynomials such that the deformed polynomials have a greatest common divisor (GCD) of at least a given degree k. In addition, we restrict the deformed coefficients by a given set of linear constraints, thus introducing the linearly constrained approximate GCD problem. We present an algorithm based on a version of the structured total least norm (STLN) method and demonstrate, on a diverse set of benchmark polynomials, that the algorithm in practice computes globally minimal approximations. As an application of the linearly constrained approximate GCD problem, we present an STLN-based method that computes for a real or complex polynomial the nearest real or complex polynomial that has a root of multiplicity at least k. We demonstrate that the algorithm in practice computes, on the benchmark polynomials given in the literature, the known globally optimal nearest singular polynomials. Our algorithms can handle, via randomized preconditioning, the difficult case when the nearest solution to a list of real input polynomials actually has non-real complex coefficients.
Displacement structure in computing approximate GCD of univariate polynomials
 In Proc. Sixth Asian Symposium on Computer Mathematics (ASCM 2003)
, 2003
Cited by 12 (4 self)

Abstract:
We propose a fast algorithm for computing approximate GCD of univariate polynomials with coefficients that are given only to a finite accuracy. The algorithm is based on a stabilized version of the generalized Schur algorithm for the Sylvester matrix and its embedding. All computations can be done in O(n^2) operations, where n is the sum of the degrees of the polynomials. The stability of the algorithm is also discussed.
Global minimization of rational functions and the nearest GCDs
 J. of Global Optimization
Cited by 9 (0 self)

Abstract:
This paper discusses the global minimization of rational functions with or without constraints. The sum of squares (SOS) relaxations are proposed to find the global minimum and minimizers. Some special features of the SOS relaxations are studied. As an application, we show how to find the nearest common divisors of polynomials via global minimization of rational functions.
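When every coefficient of both polynomials may be perturbed, the squared distance to the nearest pair with a common root z is a rational function of z (each term is a Stetter-type nearest-polynomial distance), which is exactly the shape of objective the SOS relaxations target. A crude grid-search sketch under that assumption, with our own function names, standing in for the paper's SOS machinery:

```python
import numpy as np

def dist2_common_root(f, g, z):
    """Squared 2-norm distance from (f, g) to the nearest pair of
    polynomials sharing the root z: a rational function of z."""
    m, n = len(f) - 1, len(g) - 1
    wf = sum(abs(z) ** (2 * j) for j in range(m + 1))
    wg = sum(abs(z) ** (2 * j) for j in range(n + 1))
    return abs(np.polyval(f, z)) ** 2 / wf + abs(np.polyval(g, z)) ** 2 / wg

def nearest_common_root(f, g, grid=None):
    """Crude global minimization over a real grid (stand-in for SOS)."""
    if grid is None:
        grid = np.linspace(-5.0, 5.0, 20001)
    vals = [dist2_common_root(f, g, z) for z in grid]
    k = int(np.argmin(vals))
    return float(grid[k]), float(vals[k])
```

If f and g already share a root, the minimum value is zero at that root; the interesting cases are relatively prime inputs, where the global minimizer gives the nearest GCD.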
Fast Singular Value Thresholding without Singular Value Decomposition
Cited by 4 (1 self)

Abstract:
Singular value thresholding (SVT) is a basic subroutine in many popular numerical schemes for solving nuclear norm minimization that arises from low-rank matrix recovery problems such as matrix completion. The conventional approach for SVT is first to find the singular value decomposition (SVD) and then to shrink the singular values. However, such an approach is time-consuming under some circumstances, especially when the rank of the resulting matrix is not significantly low compared to its dimension. In this paper, we propose a fast algorithm for directly computing SVT for general dense matrices without using SVDs. Our algorithm is based on matrix Newton iteration for matrix functions, and the convergence is theoretically guaranteed. Numerical experiments show that our proposed algorithm is more efficient than the SVD-based approaches for general dense matrices.
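The conventional SVD-based SVT that the paper takes as its baseline is short to state: compute the SVD and soft-threshold the singular values (the proximal operator of τ times the nuclear norm). A sketch of that baseline (not the paper's SVD-free Newton method):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding via SVD: shrink every singular value
    of A by tau, clamping at zero (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

For example, thresholding diag(3, 1) at τ = 2 yields diag(1, 0): the small singular value is annihilated, which is what makes SVT rank-reducing.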
Computing the radius of positive semidefiniteness of a multivariate real polynomial via a dual of Seidenberg’s method
, 2010
Computing approximate GCD of univariate polynomials by structured total least norm
 Institute of Systems Science, AMSS, Academia Sinica
, 2004
Cited by 3 (0 self)

Abstract:
The problem of approximating the greatest common divisor (GCD) for polynomials with inexact coefficients can be formulated as a low rank approximation problem with the Sylvester matrix. This paper presents a method based on Structured Total Least Norm (STLN) for constructing the nearest Sylvester matrix of given lower rank. We present algorithms for computing the nearest GCD and a certified ɛ-GCD for a given tolerance ɛ. The running time of our algorithm is polynomial in the degrees of the polynomials. We also show the performance of the algorithms on a set of univariate polynomials.
The Nearest Polynomial with a given zero, Revisited
, 2005
Cited by 2 (2 self)

Abstract:
In his 1999 Sigsam Bulletin paper [7], H. J. Stetter gave an explicit formula for finding the nearest polynomial with a given zero. This present paper revisits the issue, correcting a minor omission from Stetter's formula and explicitly extending the results to different polynomial bases. Experiments with our implementation demonstrate that the formula may not, after all, fully solve the problem, and we discuss some outstanding issues: first, that the nearest polynomial with the given zero may be identically zero (which might be surprising), and, second, that the problem of finding the nearest polynomial of the same degree with a given zero may not, in fact, have a solution. A third variant of the problem, namely to find the nearest monic polynomial (given a monic polynomial initially) with a given zero, a problem that makes sense in some polynomial bases but not others, can also be solved with Stetter's formula, and this may be more satisfactory in some circumstances. This last can be generalized to the case where some coefficients are intrinsic and not to be changed, whereas others are empiric and may safely be changed. Of course, this minor generalization is implicit in [7]; this paper simply makes it explicit.
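In the monomial basis with the coefficient 2-norm, Stetter's formula has a compact closed form: subtract from p the multiple of the conjugated power vector (z^n, …, z, 1) that cancels p(z). A minimal sketch of that special case (our own naming; the paper's point that the basis and norm matter, and the degenerate cases it discusses, are ignored here):

```python
import numpy as np

def nearest_with_zero(p, z):
    """Stetter's formula in the monomial basis: the minimal 2-norm
    coefficient perturbation of p (highest degree first) making z a root."""
    p = np.asarray(p, dtype=complex)
    n = len(p) - 1
    v = z ** np.arange(n, -1, -1)      # (z^n, ..., z, 1), matching p's order
    return p - np.polyval(p, z) * np.conj(v) / np.vdot(v, v).real
```

For example, perturbing x² − 1 to vanish at z = 2 gives (3/7)x² − (2/7)x − 8/7, and one checks directly that it evaluates to zero there.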