Results 1–10 of 50
Complete search in continuous global optimization and constraint satisfaction, Acta Numerica 13
, 2004
"... A chapter for ..."
Numerical Homotopies to Compute Generic Points on Positive Dimensional Algebraic Sets
 Journal of Complexity
, 1999
Abstract

Cited by 50 (24 self)
Many applications modeled by polynomial systems have positive-dimensional solution components (e.g., the path synthesis problems for four-bar mechanisms) that are challenging to compute numerically by homotopy continuation methods. A procedure of A. Sommese and C. Wampler consists in slicing the components with linear subspaces in general position to obtain generic points of the components as the isolated solutions of an auxiliary system. Since this requires the solution of a number of larger overdetermined systems, the procedure is computationally expensive and also wasteful because many solution paths diverge. In this article an embedding of the original polynomial system is presented, which leads to a sequence of homotopies whose solution paths reach generic points of all components as the isolated solutions of an auxiliary system. The new procedure significantly reduces the number of solution paths that need to be followed. This approach has been implemented and applied to...
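For context, the predictor–corrector path tracking that all homotopy continuation methods (including the abstract above) build on can be sketched in a toy univariate setting. This is purely illustrative, not the paper's embedding procedure; the helper name `track` and the start/target systems are assumptions:

```python
# Toy homotopy H(x, t) = (1-t)*g(x) + t*f(x): deform the start system g,
# whose roots are known, into the target system f while tracking each root.
# f(x) = x^2 - 2 (roots +/- sqrt(2)); g(x) = x^2 - 1 (known roots +/- 1).
f  = lambda x: x**2 - 2.0
fp = lambda x: 2.0 * x          # f'
g  = lambda x: x**2 - 1.0
gp = lambda x: 2.0 * x          # g'

def track(x, steps=100, newton_iters=5):
    """Follow one solution path of H(x, t) from t = 0 to t = 1 using an
    Euler predictor and a Newton corrector at each step."""
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        # Predictor: along the path, dH/dx * x'(t) + dH/dt = 0.
        dHdt = f(x) - g(x)
        dHdx = (1 - t0) * gp(x) + t0 * fp(x)
        x -= (t1 - t0) * dHdt / dHdx
        # Corrector: Newton iterations on H(., t1).
        for _ in range(newton_iters):
            H  = (1 - t1) * g(x) + t1 * f(x)
            Hx = (1 - t1) * gp(x) + t1 * fp(x)
            x -= H / Hx
    return x

roots = [track(x0) for x0 in (1.0, -1.0)]
print(roots)   # approximately [1.4142135..., -1.4142135...]
```

Real solvers track paths in complex projective space and handle divergent paths; this sketch only shows the predictor–corrector skeleton.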
Matrices in Elimination Theory
, 1997
Abstract

Cited by 45 (17 self)
The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse elimination theory and toric varieties has provided ways to exploit the structure of polynomials encountered in a number of scientific and engineering applications. On the other hand, the Bezoutian reveals itself as an important tool in many areas connected to elimination theory and has its own merits, leading to new developments in effective algebraic geometry. This survey unifies the existing work on resultants, with emphasis on constructing matrices that generalize the classic matrices named after Sylvester, Bézout and Macaulay. The properties of the different matrix formulations are presented, including some complexity issues, with an emphasis on variable elimination theory. We compare toric resultant matrices to Macaulay's matrix and further conjecture the generalization of Macaulay's exact ratio...
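For a concrete flavor of the classic matrices this survey generalizes, here is a minimal construction of the Sylvester matrix of two univariate polynomials. The helper name `sylvester` and the coefficient-list convention (highest degree first) are assumptions for illustration:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p and q, given as coefficient lists with the
    highest-degree coefficient first.  For deg p = m and deg q = n the
    matrix is (m+n) x (m+n), and its determinant is the resultant
    Res(p, q), which vanishes exactly when p and q share a root."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted rows of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                   # m shifted rows of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

# x^2 - 1 and x - 1 share the root x = 1, so the resultant is 0;
# x^2 - 1 and x - 2 share no root, and Res = p(2) = 3.
print(np.linalg.det(sylvester([1., 0., -1.], [1., -1.])))   # ≈ 0
print(np.linalg.det(sylvester([1., 0., -1.], [1., -2.])))   # ≈ 3
```

The Bézout and Macaulay matrices discussed in the survey generalize this same idea to more compact and to multivariate formulations.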
Certified approximate univariate GCDs
 METHODS IN ALGEBRAIC GEOMETRY, 117 & 118:229–251
, 1997
Abstract

Cited by 38 (4 self)
We study the approximate GCD of two univariate polynomials given with limited accuracy or, equivalently, the exact GCD of the perturbed polynomials within some prescribed tolerance. A perturbed polynomial is regarded as a family of polynomials in a classification space, which leads to an accurate analysis of the computation. Considering only the Sylvester matrix singular values, as is frequently suggested in the literature, does not suffice to solve the problem completely, even when the extended Euclidean algorithm is also used. We provide a counterexample that illustrates this claim and indicates the problem's hardness. SVD computations on subresultant matrices lead to upper bounds on the degree of the approximate GCD. Further use of the subresultant matrices' singular values yields an approximate syzygy of the given polynomials, which is used to establish a gap theorem on certain singular values that certifies the maximum-degree approximate GCD. This approach leads directly to an algorithm for computing the approximate GCD polynomial. Lastly, we suggest the use of weighted norms in order to sharpen the theorem's conditions in a more intrinsic context.
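The basic link between approximate GCD degree and Sylvester-matrix rank that the abstract starts from can be illustrated in a few lines. The example polynomials, the perturbation size, and the rank threshold are assumptions; the paper's certified gap theorem is far more careful than this numerical sketch:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p and q (coefficient lists, highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

# p and q share the exact factor (x - 1); perturb the coefficients slightly,
# as if they were given with limited accuracy.
eps = 1e-7
p = np.array([1.0, -3.0, 2.0]) + eps     # (x - 1)(x - 2), perturbed
q = np.array([1.0,  2.0, -3.0]) - eps    # (x - 1)(x + 3), perturbed
sv = np.linalg.svd(sylvester(p, q), compute_uv=False)
# A large gap before the smallest singular value indicates numerical rank 3,
# so the approximate GCD has degree (2 + 2) - 3 = 1.
rank = int(np.sum(sv > 1e-4 * sv[0]))
gcd_degree = len(sv) - rank
print(gcd_degree)                        # 1
```

As the abstract notes, singular values alone do not certify the GCD; they only bound its degree, which is exactly what this sketch computes.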
Using monodromy to decompose solution sets of polynomial systems into irreducible components
 PROCEEDINGS OF A NATO CONFERENCE, FEBRUARY 25 – MARCH 1, 2001, EILAT
, 2001
A Subdivision-Based Algorithm for the Sparse Resultant
 J. ACM
, 1999
Abstract

Cited by 33 (7 self)
Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra.
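The "common roots via linear algebra" reduction the abstract mentions is easiest to see in the univariate case, where the roots of a polynomial are the eigenvalues of its companion matrix. A minimal sketch (the helper `companion_roots` is a hypothetical name; coefficients are ordered highest degree first):

```python
import numpy as np

def companion_roots(coeffs):
    """All roots of a univariate polynomial as the eigenvalues of its
    companion matrix -- the univariate instance of reducing root-finding
    to a problem in linear algebra."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                       # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]               # last column: -a_0, ..., -a_{n-1}
    return np.linalg.eigvals(C)

# x^2 - 5x + 6 = (x - 2)(x - 3)
roots = companion_roots([1.0, -5.0, 6.0])
print(sorted(roots.real))              # approximately [2.0, 3.0]
```

Resultant matrices play the role of the companion matrix for systems in several variables: an eigenproblem on a multiplication map extracted from the resultant matrix recovers all common roots.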
Camera pose and calibration from 4 or 5 known 3D points
 In Proc. 7th Int. Conf. on Computer Vision
, 1999
Abstract

Cited by 31 (0 self)
We describe two direct quasilinear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6-point ‘Direct Linear Transform’ method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in nondegenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition-based one that handles both planar and nonplanar cases. Our methods use recent polynomial solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.
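The 6-point Direct Linear Transform that the paper generalizes can be sketched as follows. This is the classic baseline, not the paper's 4/5-point methods; the helper name `dlt_projection` and the synthetic camera are assumptions:

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P with x ~ P*X (homogeneous) from
    six or more world/image correspondences: stack two linear constraints
    per point and take the SVD null vector (the classic 6-point DLT)."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)                     # homogeneous 3D point
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)                     # smallest singular vector

# Synthetic check: project known 3D points with a known P, then recover it.
P_true = np.array([[800.,   0., 320., 10.],
                   [  0., 800., 240., 20.],
                   [  0.,   0.,   1.,  1.]])
X = np.array([[0., 0., 2.], [1., 0., 3.], [0., 1., 4.],
              [1., 1., 6.], [0.5, -1., 2.5], [-1., 0.5, 5.]])
xh = (P_true @ np.vstack([X.T, np.ones(6)])).T      # homogeneous projections
x = xh[:, :2] / xh[:, 2:]                           # pixel coordinates
P = dlt_projection(X, x)
P *= np.sign(P[0, 0]) / np.linalg.norm(P)           # fix arbitrary scale/sign
print(np.allclose(P, P_true / np.linalg.norm(P_true), atol=1e-6))
```

As the abstract notes, this linear formulation degenerates for coplanar points; the test points above are deliberately non-coplanar.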
Computer Algebra Methods for Studying and Computing Molecular Conformations
, 1997
Abstract

Cited by 29 (4 self)
A relatively new branch of computational biology has been emerging as an effort to supplement traditional techniques of large-scale search in drug design by structure-based methods, in order to improve efficiency and guarantee completeness. This paper studies the geometric structure of cyclic molecules, in particular the enumeration of all possible conformations, which is crucial in finding the energetically favorable geometries, and the identification of all degenerate conformations. Recent advances in computational algebra are exploited, including distance geometry, sparse polynomial theory, and matrix methods for numerically solving nonlinear multivariate polynomial systems. Moreover, we propose a complete array of computer algebra and symbolic computational geometry methods for modeling the rigidity constraints, formulating the problems in algebraic terms and, lastly, visualizing the computed conformations. The use of computer algebra systems and of public domain software is illustrated...
On the Complexity of Sparse Elimination
 J. Complexity
, 1996
Abstract

Cited by 28 (19 self)
Sparse elimination exploits the structure of a multivariate polynomial by considering its Newton polytope instead of its total degree. We concentrate on polynomial systems that generate zero-dimensional ideals. A monomial basis for the coordinate ring is defined from a mixed subdivision of the Minkowski sum of the Newton polytopes. We offer a new and simple proof relying on the construction of a sparse resultant matrix, which leads to the computation of a multiplication map and all common zeros. The size of the monomial basis equals the mixed volume and its computation is equivalent to computing the mixed volume, so the latter is a measure of intrinsic complexity. On the other hand, our algorithms have worst-case complexity proportional to the volume of the Minkowski sum. In order to derive bounds in terms of the sparsity parameters, we establish new bounds on the Minkowski sum volume as a function of mixed volume. To this end, we prove a lower bound on mixed volume in terms of euclidea...
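The mixed volume that serves as the complexity measure here is concrete enough to compute by hand in the plane: for two Newton polygons it is area(P+Q) − area(P) − area(Q), and by Bernstein's theorem it counts the toric roots of a generic system. A small sketch under those definitions (all helper names are assumptions for illustration):

```python
def _cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def hull(points):
    """Andrew's monotone-chain convex hull, returned in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(poly):
    """Twice the shoelace area of a CCW polygon."""
    n = len(poly)
    return sum(poly[i][0] * poly[(i+1) % n][1] - poly[(i+1) % n][0] * poly[i][1]
               for i in range(n))

def mixed_volume(P, Q):
    """Normalized 2D mixed volume, MV = area(P+Q) - area(P) - area(Q);
    by Bernstein's theorem it counts the toric roots of a generic
    bivariate system whose Newton polygons are P and Q."""
    PQ = [(p[0] + q[0], p[1] + q[1]) for p in P for q in Q]  # Minkowski sum
    return (area2(hull(PQ)) - area2(hull(P)) - area2(hull(Q))) // 2

# f = a + b*x + c*x*y and g = d + e*x + f*y: the Bezout bound is 4, but the
# mixed volume of the Newton polygons gives the sharper root count 2.
print(mixed_volume([(0, 0), (1, 0), (1, 1)], [(0, 0), (1, 0), (0, 1)]))   # 2
```

For two generic quadrics both bounds agree (mixed volume 4); the gain appears exactly when the polynomials are sparse, which is the point of the abstract's complexity analysis.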
The Structure of Sparse Resultant Matrices
 In Proc. ACM Intern. Symp. on Symbolic and Algebraic Computation
, 1997
Abstract

Cited by 25 (11 self)
Resultants characterize the existence of roots of systems of multivariate nonlinear polynomial equations, while their matrices reduce the computation of all common zeros to a problem in linear algebra. Sparse elimination theory has introduced the sparse resultant, which takes into account the sparse structure of the polynomials. The construction of sparse resultant, or Newton, matrices is a critical step in the computation of the resultant and the solution of the system. We exploit the matrix structure and decrease the time complexity of constructing such matrices to roughly quadratic in the matrix dimension, whereas the previous methods had cubic complexity. The space complexity is also decreased by one order of magnitude. These results imply similar improvements in the complexity of computing the resultant itself and of solving zero-dimensional systems. We apply some novel techniques for determining the rank of rectangular matrices by an exact or numerical computation. Finally, we im...