Results 1–10 of 81
Solving Polynomial Systems Using a Branch and Prune Approach
SIAM Journal on Numerical Analysis, 1997
Cited by 110 (7 self)

Abstract:
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists of enforcing at each node of the search tree a unique local consistency condition, called box consistency, which approximates the notion of arc consistency well known in artificial intelligence. Box consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen–Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
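The pruning idea can be made concrete with a toy sketch (ours, not Newton's box-consistency operators): evaluate an interval extension of the polynomial over each box, discard boxes whose range provably excludes zero, and bisect the rest. The function names and the example p(x) = x^2 - 2 are illustrative assumptions.

```python
def imul(a, b):
    """Smallest interval containing {x*y : x in a, y in b}."""
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def p_range(x):
    """Naive interval extension of p(x) = x^2 - 2 over the interval x."""
    lo, hi = imul(x, x)
    return (lo - 2.0, hi - 2.0)

def branch_and_prune(box, tol=1e-10):
    """Prune boxes whose interval evaluation excludes 0; bisect the rest."""
    stack, results = [box], []
    while stack:
        lo, hi = stack.pop()
        rlo, rhi = p_range((lo, hi))
        if rlo > 0.0 or rhi < 0.0:      # 0 is provably not in p([lo, hi]): prune
            continue
        if hi - lo < tol:               # box is small enough: report it
            results.append((lo, hi))
            continue
        mid = 0.5 * (lo + hi)
        stack.extend([(lo, mid), (mid, hi)])
    return results

roots = sorted(0.5 * (lo + hi) for lo, hi in branch_and_prune((-4.0, 4.0)))
```

On the starting box [-4, 4] the surviving boxes cluster around the two real roots ±√2; real interval solvers replace the naive bisection here with narrowing operators such as Hansen–Sengupta.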
Efficient incremental algorithms for the sparse resultant and the mixed volume
J. Symbolic Computation, 1995
Cited by 54 (8 self)

Abstract:
We propose a new and efficient algorithm for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns. This algorithm produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches. The matrix determinant is a nontrivial multiple of the sparse resultant from which the sparse resultant itself can be recovered. The algorithm is incremental in the sense that successively larger matrices are constructed until one is found with the above properties. For multigraded systems, the new algorithm produces optimal matrices, i.e., expresses the sparse resultant as a single determinant. An implementation of the algorithm is described and experimental results are presented. In addition, we propose an efficient algorithm for computing the mixed volume of n polynomials in n variables. This computation provides an upper bound on the number of common isolated roots. A publicly available implementation of the algorithm is presented and empirical results are reported which suggest that it is the fastest mixed volume code to date.
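As a sketch of what the mixed volume means in two variables (not the paper's incremental algorithm), one can use the identity MV(P, Q) = area(P + Q) - area(P) - area(Q) for Newton polytopes P, Q; all helper names below are ours.

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """2-D convex hull via the monotone-chain algorithm (CCW order)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lo, up = chain(pts), chain(reversed(pts))
    return lo[:-1] + up[:-1]

def area2(poly):
    """Twice the polygon area (shoelace), exact for integer vertices."""
    return abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                   - poly[(i+1) % len(poly)][0]*poly[i][1]
                   for i in range(len(poly))))

def mixed_volume(P, Q):
    """MV(P, Q) = area(P+Q) - area(P) - area(Q) for 2-D Newton polytopes."""
    PQ = hull([(p[0]+q[0], p[1]+q[1]) for p in P for q in Q])
    return (area2(PQ) - area2(hull(P)) - area2(hull(Q))) // 2

T = [(0, 0), (2, 0), (0, 2)]          # Newton polytope of a generic quadric
mv_dense = mixed_volume(T, T)          # matches the Bezout number 2*2 = 4
mv_sparse = mixed_volume([(0, 0), (1, 1)], [(0, 0), (1, 0), (0, 1)])
```

For two generic quadrics the mixed volume recovers the Bézout bound 4; for the sparse pair of polytopes (from x*y - 1 and x + y - 3) it drops to 2, the upper bound on isolated roots the abstract refers to.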
Matrices in Elimination Theory
1997
Cited by 52 (16 self)

Abstract:
The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse elimination theory and toric varieties has provided ways to exploit the structure of polynomials encountered in a number of scientific and engineering applications. On the other hand, the Bezoutian reveals itself as an important tool in many areas connected to elimination theory and has its own merits, leading to new developments in effective algebraic geometry. This survey unifies the existing work on resultants, with emphasis on constructing matrices that generalize the classic matrices named after Sylvester, Bézout and Macaulay. The properties of the different matrix formulations are presented, including some complexity issues, with an emphasis on variable elimination theory. We compare toric resultant matrices to Macaulay's matrix and further conjecture the generalization of Macaulay's exact ratio...
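The classical case is easy to make concrete: the Sylvester matrix of two univariate polynomials has the resultant as its determinant, which vanishes exactly when the polynomials share a root. A small self-contained sketch (helper names are ours, not from the survey):

```python
def sylvester(f, g):
    """Sylvester matrix of two univariate polynomials given as
    coefficient lists, highest degree first."""
    n, m = len(f) - 1, len(g) - 1      # degrees of f and g
    rows = []
    for i in range(m):                  # m shifted copies of f's coefficients
        rows.append([0]*i + f + [0]*(m - 1 - i))
    for i in range(n):                  # n shifted copies of g's coefficients
        rows.append([0]*i + g + [0]*(n - 1 - i))
    return rows

def det(a):
    """Determinant by cofactor expansion (fine for small matrices)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1)**j * a[0][j] * det([row[:j] + row[j+1:] for row in a[1:]])
               for j in range(len(a)))

res_no_common = det(sylvester([1, 0, -1], [1, -2]))   # Res(x^2 - 1, x - 2)
res_common    = det(sylvester([1, 0, -1], [1, -1]))   # Res(x^2 - 1, x - 1)
```

The first resultant is nonzero (no common root), while the second vanishes because x = 1 is a root of both polynomials; the Macaulay and toric matrices surveyed above generalize this construction to several variables.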
Introduction to numerical algebraic geometry
In Solving Polynomial Equations, Series: Algorithms and Computation in Mathematics, 2005
PHoM: a Polyhedral Homotopy Continuation Method for Polynomial Systems
Computing, 2003
Cited by 32 (11 self)

Abstract:
PHoM is a software package in C++ for finding all isolated solutions of polynomial systems using a polyhedral homotopy continuation method. Among the three modules constituting the package, the first module StartSystem constructs a family of polyhedral-linear homotopy functions, based on polyhedral homotopy theory, from input data for a given system of polynomial equations f(x) = 0. The second module CMPSc traces the solution curves of the homotopy equations to compute all isolated solutions of f(x) = 0. The third module Verify checks whether all isolated solutions of f(x) = 0 have been approximated correctly. We describe numerical methods used in each module and the usage of the package. Numerical results to demonstrate the performance of PHoM include some large polynomial systems that have not been solved previously.
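PHoM's homotopies are polyhedral-linear; as a minimal illustration of the continuation idea only, here is a plain linear homotopy with the standard "gamma trick" for a single univariate equation. The systems f and g, the constant GAMMA, and the step counts are illustrative assumptions, not PHoM's implementation.

```python
GAMMA = complex(0.6, 0.8)   # generic complex constant (the "gamma trick")

def track(f, df, g, dg, x0, steps=200):
    """Follow one solution path of H(x, t) = GAMMA*(1-t)*g(x) + t*f(x)
    from a root x0 of the start system g (t = 0) to a root of f (t = 1)."""
    x = complex(x0)
    for k in range(1, steps + 1):
        t = k / steps
        H  = lambda z: GAMMA * (1 - t) * g(z) + t * f(z)
        dH = lambda z: GAMMA * (1 - t) * dg(z) + t * df(z)
        for _ in range(5):              # Newton correction at this value of t
            x = x - H(x) / dH(x)
    return x

# Target f(x) = x^2 - 3x + 2 (roots 1, 2); start system g(x) = x^2 - 1
f, df = lambda x: x*x - 3*x + 2, lambda x: 2*x - 3
g, dg = lambda x: x*x - 1,       lambda x: 2*x
ends = sorted((track(f, df, g, dg, s) for s in (1, -1)), key=lambda z: z.real)
```

The two start roots ±1 of g are tracked to the two roots 1 and 2 of f; the complex GAMMA keeps the two paths from colliding at a real value of t, which is the same reason generic constants appear in polyhedral homotopies.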
On the Complexity of Sparse Elimination
J. Complexity, 1996
Cited by 28 (16 self)

Abstract:
Sparse elimination exploits the structure of a multivariate polynomial by considering its Newton polytope instead of its total degree. We concentrate on polynomial systems that generate zero-dimensional ideals. A monomial basis for the coordinate ring is defined from a mixed subdivision of the Minkowski sum of the Newton polytopes. We offer a new and simple proof relying on the construction of a sparse resultant matrix, which leads to the computation of a multiplication map and all common zeros. The size of the monomial basis equals the mixed volume and its computation is equivalent to computing the mixed volume, so the latter is a measure of intrinsic complexity. On the other hand, our algorithms have worst-case complexity proportional to the volume of the Minkowski sum. In order to derive bounds in terms of the sparsity parameters, we establish new bounds on the Minkowski sum volume as a function of mixed volume. To this end, we prove a lower bound on mixed volume in terms of Euclidean...
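A tiny worked instance (our example, not from the paper) shows why the mixed volume, rather than the Bézout number, counts the roots of a sparse system. For x*y - 1 = 0 and x + y - 3 = 0, the Newton polytopes are P1 = conv{(0,0), (1,1)} (a segment, area 0) and P2 = conv{(0,0), (1,0), (0,1)} (area 1/2); the Minkowski sum P1 + P2 has area 5/2, so MV(P1, P2) = 5/2 - 0 - 1/2 = 2.

```python
import math

# Sparse system: x*y - 1 = 0, x + y - 3 = 0.
mixed_volume = 2                    # BKK bound from the Newton polytopes above
bezout = 2 * 2                      # product of the total degrees

# Eliminating y = 3 - x gives x^2 - 3*x + 1 = 0, so the system has
# exactly two isolated roots, matching the mixed volume.
xs = [(3 - math.sqrt(5)) / 2, (3 + math.sqrt(5)) / 2]
solutions = [(x, 3 - x) for x in xs]
residuals = [abs(x * y - 1) for x, y in solutions]
```

The monomial basis of the coordinate ring for this system likewise has exactly two elements, in line with the abstract's statement that its size equals the mixed volume.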
Solving geometric constraints by homotopy
Proc. Symp. on Solid Modeling and Applications, 1995
A Complete Implementation for Computing General Dimensional Convex Hulls
Int. J. Comput. Geom. Appl., 1995
Cited by 25 (8 self)

Abstract:
We study two important, and often complementary, issues in the implementation of geometric algorithms, namely exact arithmetic and degeneracy. We focus on integer arithmetic and propose a general and efficient method for its implementation based on modular arithmetic. We suggest that probabilistic modular arithmetic may be of wide interest, as it combines the advantages of modular arithmetic with randomization in order to speed up the lifting of residues to an integer. We derive general error bounds and discuss the implementation of this approach in our general-dimension convex hull program. The use of perturbations as a method to cope with input degeneracy is also illustrated. We present the implementation of a computationally efficient scheme that, moreover, greatly simplifies the task of programming. We concentrate on postprocessing, often perceived as the Achilles' heel of perturbations. Starting in the context of a specific application in robotics, we examine the complexity of p...
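The residue-lifting idea behind the modular approach can be sketched generically: compute a quantity (here a determinant) modulo several primes and lift the residues back to an integer with the Chinese remainder theorem. The paper's scheme chooses moduli probabilistically; the fixed primes, the matrix, and the helper names below are illustrative only.

```python
from functools import reduce

def det_mod(mat, p):
    """Determinant of an integer matrix modulo a prime p (Gaussian elimination)."""
    n = len(mat)
    a = [[x % p for x in row] for row in mat]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0                            # singular modulo p
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det                          # row swap flips the sign
        det = det * a[col][col] % p
        inv = pow(a[col][col], -1, p)
        for r in range(col + 1, n):
            f = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - f * a[col][c]) % p
    return det % p

def crt(residues, moduli):
    """Lift residues to the unique integer modulo prod(moduli), returned in
    the symmetric range so negative determinants are also recovered."""
    M = reduce(lambda x, y: x * y, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x - M if x > M // 2 else x

primes = [10007, 10009, 10037, 10039]
A = [[3, -1, 4], [1, 5, -9], [2, 6, 5]]
d = crt([det_mod(A, p) for p in primes], primes)
```

As long as the product of the primes exceeds twice the magnitude of the true determinant, the lifted value is exact; the probabilistic variant in the paper stops adding primes once the reconstruction stabilizes.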