Solving Polynomial Systems Using a Branch and Prune Approach
 SIAM Journal on Numerical Analysis
, 1997
Cited by 101 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists of enforcing at each node of the search tree a unique local consistency condition, called box consistency, which approximates the notion of arc consistency well known in artificial intelligence. Box consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
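The branch & prune idea can be shown in miniature. The sketch below is a heavy simplification, not the paper's box-consistency narrowing: it uses plain interval evaluation to discard boxes that provably contain no root, and bisection as the branching step, for one univariate polynomial.

```python
# Simplified branch & prune sketch: interval evaluation proves a box
# contains no root (prune), otherwise the box is bisected (branch).
# The paper's box-consistency operators are NOT reproduced here.

def ieval(coeffs, lo, hi):
    """Interval evaluation of a polynomial by Horner's rule.
    coeffs run from the highest-degree term down to the constant."""
    rlo, rhi = 0.0, 0.0
    for c in coeffs:
        # interval product [rlo, rhi] * [lo, hi], then add c
        prods = (rlo * lo, rlo * hi, rhi * lo, rhi * hi)
        rlo, rhi = min(prods) + c, max(prods) + c
    return rlo, rhi

def branch_and_prune(coeffs, lo, hi, tol=1e-8):
    """Return small boxes that may contain real roots of the polynomial."""
    boxes, results = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = ieval(coeffs, a, b)
        if flo > 0 or fhi < 0:        # 0 is not in f([a, b]): prune
            continue
        if b - a < tol:               # box is small enough: report it
            results.append((a, b))
            continue
        m = (a + b) / 2.0             # otherwise branch by bisection
        boxes.append((a, m))
        boxes.append((m, b))
    return results

# x^2 - 2 on [0, 10]: only a tiny box around sqrt(2) survives
print(branch_and_prune([1.0, 0.0, -2.0], 0.0, 10.0))
```

Newton replaces the bisection-only pruning above with narrowing operators (Hansen-Sengupta among them), which is what makes it effective far from a solution.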
Efficient incremental algorithms for the sparse resultant and the mixed volume
 J. Symbolic Computation
, 1995
Cited by 53 (9 self)
We propose a new and efficient algorithm for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns. This algorithm produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches. The matrix determinant is a nontrivial multiple of the sparse resultant from which the sparse resultant itself can be recovered. The algorithm is incremental in the sense that successively larger matrices are constructed until one is found with the above properties. For multigraded systems, the new algorithm produces optimal matrices, i.e., expresses the sparse resultant as a single determinant. An implementation of the algorithm is described and experimental results are presented. In addition, we propose an efficient algorithm for computing the mixed volume of n polynomials in n variables. This computation provides an upper bound on the number of common isolated roots. A publicly available implementation of the algorithm is presented and empirical results are reported which suggest that it is the fastest mixed volume code to date.
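In two variables the mixed volume has a closed form: for Newton polygons P and Q, MV(P, Q) = area(P + Q) - area(P) - area(Q), which is Bernstein's bound on the number of toric roots. The sketch below is a brute-force illustration of this formula, not the paper's incremental algorithm; the helper names are mine.

```python
# 2-D mixed volume via MV(P, Q) = area(P + Q) - area(P) - area(Q).
# Brute-force sketch, not the paper's incremental mixed-volume code.

from itertools import product

def hull(points):
    """Andrew's monotone chain convex hull, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and \
                  (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) - \
                  (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def area(points):
    """Shoelace area of the convex hull of the points."""
    h = hull(points)
    s = 0
    for (x1, y1), (x2, y2) in zip(h, h[1:] + h[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def mixed_volume_2d(P, Q):
    """Mixed volume of two Newton polygons given as point lists."""
    mink = [(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)]
    return area(mink) - area(P) - area(Q)

# two generic quadrics: Newton polygon is the triangle (0,0),(2,0),(0,2)
tri = [(0, 0), (2, 0), (0, 2)]
print(mixed_volume_2d(tri, tri))  # -> 4.0, matching the Bezout number
```

The pairwise-sum Minkowski construction is exponential in general; the whole point of the paper's algorithm is to avoid exactly this kind of brute force in higher dimensions.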
Matrices in Elimination Theory
, 1997
Cited by 45 (17 self)
The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse elimination theory and toric varieties has provided ways to exploit the structure of polynomials encountered in a number of scientific and engineering applications. On the other hand, the Bezoutian reveals itself as an important tool in many areas connected to elimination theory and has its own merits, leading to new developments in effective algebraic geometry. This survey unifies the existing work on resultants, with emphasis on constructing matrices that generalize the classic matrices named after Sylvester, Bézout and Macaulay. The properties of the different matrix formulations are presented, including some complexity issues, with an emphasis on variable elimination theory. We compare toric resultant matrices to Macaulay's matrix and further conjecture the generalization of Macaulay's exact ratio...
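The simplest of the matrices this survey generalizes is Sylvester's: for two univariate polynomials, its determinant is their resultant, which vanishes exactly when the polynomials share a root. A minimal sketch (the helper names are mine):

```python
# Sylvester matrix of two univariate polynomials; its determinant is
# the resultant.  Exact arithmetic via fractions keeps the result honest.

from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists with the
    highest-degree coefficient first."""
    m, n = len(f) - 1, len(g) - 1          # degrees of f and g
    rows = []
    for i in range(n):                     # n shifted copies of f
        rows.append([0] * i + f + [0] * (n - 1 - i))
    for i in range(m):                     # m shifted copies of g
        rows.append([0] * i + g + [0] * (m - 1 - i))
    return rows

def det(matrix):
    """Exact determinant by Gaussian elimination over the rationals."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n, sign, result = len(a), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        result *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return sign * result

# f = x^2 - 1 and g = x - 1 share the root x = 1, so the resultant is 0
print(det(sylvester([1, 0, -1], [1, -1])))   # -> 0
# f = x^2 - 1 and g = x - 2 share no root; the resultant is 3
print(det(sylvester([1, 0, -1], [1, -2])))   # -> 3
```

Macaulay and toric resultant matrices play the same role for systems in several variables, which is where the survey's matrix constructions come in.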
Introduction to numerical algebraic geometry
 In Solving Polynomial Equations, Series: Algorithms and Computation in Mathematics
, 2005
PHoM - a Polyhedral Homotopy Continuation Method for Polynomial Systems
 Computing
, 2003
Cited by 31 (13 self)
PHoM is a software package in C++ for finding all isolated solutions of polynomial systems using a polyhedral homotopy continuation method. Among the three modules constituting the package, the first module StartSystem constructs a family of polyhedral-linear homotopy functions, based on the polyhedral homotopy theory, from input data for a given system of polynomial equations f(x) = 0. The second module CMPSc traces the solution curves of the homotopy equations to compute all isolated solutions of f(x) = 0. The third module Verify checks whether all isolated solutions of f(x) = 0 have been approximated correctly. We describe numerical methods used in each module and the usage of the package. Numerical results to demonstrate the performance of PHoM include some large polynomial systems that have not been solved previously.
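The continuation idea behind PHoM can be illustrated with a much simpler total-degree homotopy (not PHoM's polyhedral-linear one): H(x, t) = gamma*(1-t)*(x^d - 1) + t*f(x), tracked from the d-th roots of unity at t = 0 to the roots of f at t = 1. The sketch below is a toy tracker with a Newton corrector at each small step of t; real trackers add a predictor and adaptive step control.

```python
# Toy path tracker for a total-degree homotopy in one variable.
# This illustrates continuation only; it is NOT PHoM's polyhedral method.

import cmath

def poly(coeffs, x):
    """Evaluate a polynomial (highest degree first) and its derivative."""
    v, dv = 0j, 0j
    for c in coeffs:
        dv = dv * x + v
        v = v * x + c
    return v, dv

def track(f, steps=200, gamma=0.6 + 0.8j):
    """Track all d start roots of x^d - 1 to roots of f along
    H(x, t) = gamma*(1-t)*(x^d - 1) + t*f(x)."""
    d = len(f) - 1
    g = [1.0] + [0.0] * (d - 1) + [-1.0]        # start system x^d - 1
    roots = []
    for k in range(d):
        x = cmath.exp(2j * cmath.pi * k / d)    # k-th root of unity
        for s in range(1, steps + 1):
            t = s / steps
            for _ in range(10):                  # Newton corrector at t
                fv, fd = poly(f, x)
                gv, gd = poly(g, x)
                h = gamma * (1 - t) * gv + t * fv
                dh = gamma * (1 - t) * gd + t * fd
                x -= h / dh
        roots.append(x)
    return roots

# the roots of x^2 - 3x + 2 are 1 and 2
print(sorted(round(r.real, 6) for r in track([1.0, -3.0, 2.0])))  # -> [1.0, 2.0]
```

The random complex constant gamma keeps the two solution paths disjoint for real t in [0, 1], the same "gamma trick" that underlies the start systems PHoM builds polyhedrally.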
On the Complexity of Sparse Elimination
 J. Complexity
, 1996
Cited by 28 (19 self)
Sparse elimination exploits the structure of a multivariate polynomial by considering its Newton polytope instead of its total degree. We concentrate on polynomial systems that generate zero-dimensional ideals. A monomial basis for the coordinate ring is defined from a mixed subdivision of the Minkowski sum of the Newton polytopes. We offer a new and simple proof relying on the construction of a sparse resultant matrix, which leads to the computation of a multiplication map and all common zeros. The size of the monomial basis equals the mixed volume and its computation is equivalent to computing the mixed volume, so the latter is a measure of intrinsic complexity. On the other hand, our algorithms have worst-case complexity proportional to the volume of the Minkowski sum. In order to derive bounds in terms of the sparsity parameters, we establish new bounds on the Minkowski sum volume as a function of mixed volume. To this end, we prove a lower bound on mixed volume in terms of euclidea...
A Complete Implementation for Computing General Dimensional Convex Hulls
 Int. J. Comput. Geom. Appl.
, 1995
Cited by 24 (8 self)
We study two important, and often complementary, issues in the implementation of geometric algorithms, namely exact arithmetic and degeneracy. We focus on integer arithmetic and propose a general and efficient method for its implementation based on modular arithmetic. We suggest that probabilistic modular arithmetic may be of wide interest, as it combines the advantages of modular arithmetic with randomization in order to speed up the lifting of residues to an integer. We derive general error bounds and discuss the implementation of this approach in our general-dimension convex hull program. The use of perturbations as a method to cope with input degeneracy is also illustrated. We present the implementation of a computationally efficient scheme that, moreover, greatly simplifies the task of programming. We concentrate on postprocessing, often perceived as the Achilles' heel of perturbations. Starting in the context of a specific application in robotics, we examine the complexity of p...
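The residue-lifting idea can be sketched as follows: compute an integer determinant modulo several primes and recover the integer by the Chinese Remainder Theorem. The probabilistic early termination the paper proposes is omitted, and the helper names are mine.

```python
# Modular determinant + CRT lifting sketch (deterministic version;
# the paper's probabilistic early termination is omitted).

def det_mod_p(matrix, p):
    """Determinant of an integer matrix modulo a prime p."""
    a = [[x % p for x in row] for row in matrix]
    n, result = len(a), 1
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col]), None)
        if piv is None:
            return 0
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            result = -result
        inv = pow(a[col][col], p - 2, p)     # modular inverse (p prime)
        result = result * a[col][col] % p
        for r in range(col + 1, n):
            f = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - f * a[col][c]) % p
    return result % p

def crt_lift(residues, primes):
    """Lift residues to the unique integer in the symmetric range
    (-M/2, M/2], where M is the product of the primes."""
    M, x = 1, 0
    for r, p in zip(residues, primes):
        # adjust x by a multiple of M so that x = r (mod p) as well
        x += M * ((r - x) * pow(M, -1, p) % p)
        M *= p
    return x if x <= M // 2 else x - M

primes = [101, 103, 107]
m = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
d = crt_lift([det_mod_p(m, p) for p in primes], primes)
print(d)   # -> -90, the true determinant of m
```

The symmetric range is what lets a signed integer come back out of unsigned residues; in practice one chooses enough primes that M/2 exceeds a Hadamard bound on the determinant.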
Numerical Evidence For A Conjecture In Real Algebraic Geometry
, 1998
Cited by 23 (4 self)
Homotopies for polynomial systems provide computational evidence for a challenging instance of a conjecture about whether all solutions are real. The implementation of SAGBI homotopies involves polyhedral continuation, flat deformation and cheater's homotopy. The numerical difficulties are overcome if we work in the true synthetic spirit of the Schubert calculus by selecting the numerically most favorable equations to represent the geometric problem. Since a well-conditioned polynomial system allows perturbations on the input data without destroying the reality of the solutions, we obtain not just one instance, but a whole manifold of systems that satisfy the conjecture. Also, an instance that involves totally positive matrices has been verified. The optimality of the solving procedure is a promising first step towards the development of numerically stable algorithms for the pole placement problem in linear systems theory.
Solving Geometric Constraints By Homotopy
 IEEE Trans on Visualization and Computer Graphics
, 1996
Cited by 22 (1 self)
Geometric modeling by constraints yields systems of equations. They are classically solved by Newton-Raphson iteration, from a starting guess interactively provided by the designer. However, this method may fail to converge, or may converge to an unwanted solution after a 'chaotic' behaviour. This paper claims that, in such cases, the homotopic method is much more satisfactory. 1 INTRODUCTION In CAD, geometric modeling by constraints enables users to describe geometric objects such as points, lines, circles, conics, Bézier curves, etc. in 2D and planes, quadrics, tori, Bézier patches, etc. in 3D, by geometric constraints, i.e. distances or angles between elements, incidence or tangency relations... This modeling yields large systems of equations, typically algebraic ones. The problem is then to solve such constraint systems. Since the seminal work of Sutherland [Sut63], a lot of research has been done on this topic. We roughly classify resolution methods for constraint systems in...
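The failure mode and the homotopy remedy can both be shown in one variable. Newton-Raphson on f(x) = x^3 - 2x + 2 from the guess x = 0 cycles between 0 and 1 forever, while a homotopy from an easy start system (a standard gamma-trick construction I chose for illustration, not necessarily the paper's formulation) reaches all the roots.

```python
# Newton-Raphson cycling vs. homotopy continuation, in one variable.

import cmath

def f(x):  return x**3 - 2*x + 2
def df(x): return 3*x**2 - 2

def newton(x, iters):
    """Plain Newton-Raphson, recording every iterate."""
    seen = [x]
    for _ in range(iters):
        x = x - f(x) / df(x)
        seen.append(x)
    return seen

# the guess x = 0 never converges: the iterates cycle 0, 1, 0, 1, ...
print(newton(0.0, 6))   # -> [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

def homotopy_root(start, steps=100, gamma=0.6 + 0.8j):
    """Track H(x, t) = gamma*(1-t)*(x^3 - 1) + t*f(x) from t=0 to t=1."""
    x = start
    for s in range(1, steps + 1):
        t = s / steps
        for _ in range(8):            # Newton corrector on H(., t)
            h  = gamma * (1 - t) * (x**3 - 1) + t * f(x)
            dh = gamma * (1 - t) * (3 * x**2) + t * df(x)
            x -= h / dh
    return x

# tracking from the three cube roots of unity finds the roots of f
# (for a generic gamma), including the real one near x = -1.7693
roots = [homotopy_root(cmath.exp(2j * cmath.pi * k / 3)) for k in range(3)]
print(any(abs(r.imag) < 1e-6 and abs(r.real + 1.7693) < 1e-3 for r in roots))
```

The corrector never needs a good global guess: at every step it starts from a point that already lies (almost) on a solution path, which is exactly the robustness argument the paper makes against bare Newton-Raphson.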