Results 1–10 of 31
A New Efficient Algorithm for Computing Gröbner Bases Without Reduction to Zero (F5)
 In: ISSAC ’02: Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation
, 2002
Abstract

Cited by 253 (54 self)
This paper introduces a new efficient algorithm for computing Gröbner bases. To avoid intermediate computation as much as possible, the algorithm computes successive truncated Gröbner bases and replaces the classical polynomial reduction found in the Buchberger algorithm by the simultaneous reduction of several polynomials. This powerful reduction mechanism is achieved by means of a symbolic precomputation and by extensive use of sparse linear algebra methods. Current linear algebra techniques used in Computer Algebra are reviewed together with other methods coming from the numerical field. Some previously intractable problems (Cyclic 9) are presented, as well as an empirical comparison of a first implementation of this algorithm with other well-known programs. This comparison pays careful attention to methodology issues. All the benchmarks and CPU times used in this paper are frequently updated and available on a Web page. Even though the new algorithm does not improve the worst-case complexity, it is several times faster than previous implementations, both for integer and modular computations.
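The abstract's central device, replacing one-at-a-time polynomial reduction with a single row reduction of a shared coefficient matrix, can be sketched in a few lines. This is an illustrative toy, not the F5 algorithm itself: polynomials are represented as dicts from exponent tuples to coefficients, and the (total degree, lexicographic) ordering is our assumption.

```python
from fractions import Fraction

def simultaneous_reduce(polys):
    """Reduce several polynomials at once by row-reducing their
    coefficient matrix over the shared monomial support.
    polys: list of dicts mapping exponent tuples to coefficients."""
    # Collect the support, sorted by an assumed (total degree, lex) order.
    monomials = sorted({m for p in polys for m in p},
                       key=lambda m: (sum(m), m), reverse=True)
    col = {m: j for j, m in enumerate(monomials)}
    rows = [[Fraction(p.get(m, 0)) for m in monomials] for p in polys]
    # Gaussian elimination to reduced row-echelon form.
    rank = 0
    for j in range(len(monomials)):
        piv = next((i for i in range(rank, len(rows)) if rows[i][j]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        lead = rows[rank][j]
        rows[rank] = [x / lead for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][j]:
                c = rows[i][j]
                rows[i] = [a - c * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    # Convert the nonzero rows back to polynomials.
    return [{m: r[col[m]] for m in monomials if r[col[m]]} for r in rows[:rank]]

# f = x^2 + 2xy and g = x^2 + y^2 over variables (x, y):
f = {(2, 0): 1, (1, 1): 2}
g = {(2, 0): 1, (0, 2): 1}
# Reduced rows: x^2 + y^2 and x*y - (1/2)y^2.
print(simultaneous_reduce([f, g]))
```

One row reduction exposes the leading terms of every input simultaneously, which is the linear-algebra viewpoint the abstract contrasts with Buchberger-style pairwise reduction.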
Straight-line programs in geometric elimination theory
 J. Pure Appl. Algebra
, 1998
Abstract

Cited by 58 (14 self)
Dedicated to Volker Strassen for his work on complexity. We present a new method for symbolically solving zero-dimensional polynomial equation systems in the affine and toric case. The main feature of our method is the use of problem-adapted data structures: arithmetic networks and straight-line programs. For sequential time complexity measured by network size we obtain the following result: it is possible to solve any affine or toric zero-dimensional equation system in non-uniform sequential time which is polynomial in the length of the input description and the “geometric degree” of the equation system. Here, the input is thought to be given by a straight-line program (or alternatively in sparse representation), and the length of the input is measured by the number of variables, the degree of the equations, and the size of the program (or the sparsity of the equations). The geometric degree of the input system has to be adequately defined. It is always bounded by the algebraic-combinatoric “Bézout number” of the system, which is given by the Hilbert function of a suitable homogeneous ideal. However, in many important cases, the value of the geometric ...
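To make the straight-line program data structure concrete, here is a minimal sketch (the representation details are our assumptions, not the paper's): a program is a list of instructions over registers, and a polynomial such as (x+1)^8 takes only four instructions even though its dense representation has nine coefficients.

```python
def eval_slp(instructions, x):
    """Evaluate a straight-line program on input x.
    Registers start as [x, 1]; each instruction (op, i, j) appends
    the result of applying op to registers i and j."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    regs = [x, 1]
    for op, i, j in instructions:
        regs.append(ops[op](regs[i], regs[j]))
    return regs[-1]

# (x + 1)^8 by repeated squaring: program length 4, dense degree 8.
prog = [('+', 0, 1),   # r2 = x + 1
        ('*', 2, 2),   # r3 = (x + 1)^2
        ('*', 3, 3),   # r4 = (x + 1)^4
        ('*', 4, 4)]   # r5 = (x + 1)^8
print(eval_slp(prog, 1))  # 2^8 = 256
```

Program length, not degree, is the size measure here, which is why complexity bounds stated in terms of straight-line program size can beat bounds stated in terms of dense representation.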
A New Criterion for Normal Form Algorithms
 Proc. AAECC, volume 1719 of LNCS
, 1999
Abstract

Cited by 46 (17 self)
In this paper, we present a new approach for computing normal forms in the quotient algebra A of a polynomial ring R by an ideal I. It is based on a criterion that gives a necessary and sufficient condition for a projection onto a set of polynomials to be a normal form modulo the ideal I. This criterion does not require any monomial ordering and generalizes the Buchberger S-polynomial criterion. It leads to a new algorithm for constructing the multiplicative structure of a zero-dimensional algebra. Described in terms of intrinsic operations on vector spaces in the ring of polynomials, this algorithm extends naturally to Laurent polynomials.
A Subdivision-Based Algorithm for the Sparse Resultant
 J. ACM
, 1999
Abstract

Cited by 33 (7 self)
Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra.
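The reduction to linear algebra is easiest to see in the univariate case the abstract generalizes: the Sylvester resultant of two polynomials is the determinant of a structured coefficient matrix, and it vanishes exactly when the polynomials share a root. A sketch (this is the classical Sylvester construction, not the paper's sparse-resultant subdivision):

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists with the
    highest degree first, e.g. x^2 - 1 -> [1, 0, -1]."""
    m, n = len(f) - 1, len(g) - 1
    rows = []
    for i in range(n):                       # n shifted copies of f
        rows.append([0] * i + f + [0] * (n - 1 - i))
    for i in range(m):                       # m shifted copies of g
        rows.append([0] * i + g + [0] * (m - 1 - i))
    return rows

def det(a):
    """Determinant by Gaussian elimination over the rationals."""
    a = [[Fraction(x) for x in row] for row in a]
    n, sign = len(a), 1
    for j in range(n):
        piv = next((i for i in range(j, n) if a[i][j]), None)
        if piv is None:
            return Fraction(0)
        if piv != j:
            a[j], a[piv] = a[piv], a[j]
            sign = -sign
        for i in range(j + 1, n):
            c = a[i][j] / a[j][j]
            a[i] = [x - c * y for x, y in zip(a[i], a[j])]
    d = Fraction(sign)
    for j in range(n):
        d *= a[j][j]
    return d

f = [1, 0, -1]   # x^2 - 1
g = [1, -1]      # x - 1: shares the root x = 1 with f
h = [1, -3]      # x - 3: no common root with f
print(det(sylvester(f, g)))   # 0, so a common root exists
print(det(sylvester(f, h)))   # 8, nonzero, so no common root
```

The multivariate sparse resultant of the abstract plays the same role for systems of n+1 polynomials in n variables, with the matrix built from a subdivision of Newton polytopes instead of simple coefficient shifts.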
Polar varieties and efficient real elimination
 MATH. Z
, 2001
Abstract

Cited by 29 (12 self)
Let S0 be a smooth and compact real variety given by a reduced regular sequence of polynomials f1, ..., fp. This paper is devoted to the algorithmic problem of efficiently finding a representative point for each connected component of S0. For this purpose we exhibit explicit polynomial equations that describe the generic polar varieties of S0. This leads to a procedure that solves our algorithmic problem in time polynomial in the (extrinsic) description length of the input equations f1, ..., fp and in a suitably introduced, intrinsic geometric parameter, called the degree of the real interpretation of the given equation system f1, ..., fp.
On the Complexity of Sparse Elimination
 J. Complexity
, 1996
Abstract

Cited by 28 (19 self)
Sparse elimination exploits the structure of a multivariate polynomial by considering its Newton polytope instead of its total degree. We concentrate on polynomial systems that generate zero-dimensional ideals. A monomial basis for the coordinate ring is defined from a mixed subdivision of the Minkowski sum of the Newton polytopes. We offer a new and simple proof relying on the construction of a sparse resultant matrix, which leads to the computation of a multiplication map and all common zeros. The size of the monomial basis equals the mixed volume, and its computation is equivalent to computing the mixed volume, so the latter is a measure of intrinsic complexity. On the other hand, our algorithms have worst-case complexity proportional to the volume of the Minkowski sum. In order to derive bounds in terms of the sparsity parameters, we establish new bounds on the Minkowski sum volume as a function of mixed volume. To this end, we prove a lower bound on mixed volume in terms of euclidea...
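In two variables, the mixed volume that governs these bounds can be computed directly from the identity MV(P, Q) = area(P + Q) - area(P) - area(Q), where P + Q is the Minkowski sum of the two Newton polygons. A small sketch (helper names are ours, not from the paper):

```python
from fractions import Fraction

def hull(points):
    """Andrew's monotone-chain convex hull, counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace area of a convex polygon given in order."""
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return Fraction(abs(s), 2)

def minkowski(P, Q):
    return hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

def mixed_volume(P, Q):
    """2D mixed volume: area(P + Q) - area(P) - area(Q)."""
    P, Q = hull(P), hull(Q)
    return area(minkowski(P, Q)) - area(P) - area(Q)

# Newton polygons of two generic linear polynomials a + bx + cy:
lin = [(0, 0), (1, 0), (0, 1)]
print(mixed_volume(lin, lin))    # 1 common root, the Bernstein bound
# Two generic dense quadratics:
quad = [(0, 0), (2, 0), (0, 2)]
print(mixed_volume(quad, quad))  # 4, matching the Bezout number 2*2
```

For dense systems the bound reproduces Bézout's number, but for sparse supports it can be far smaller, which is the point of measuring complexity by mixed volume rather than total degree.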
Decomposition plans for geometric constraint systems
 J. Symbolic Computation
, 2001
Abstract

Cited by 24 (0 self)
A central issue in dealing with geometric constraint systems for CAD/CAM/CAE is the generation of an optimal decomposition plan that not only aids efficient solution, but also captures design intent and supports conceptual design. Though complex, this issue has evolved and crystallized over the past few years, permitting us to take the next important step: in this paper, we formalize, motivate and explain the decomposition-recombination (DR) planning problem as well as several performance measures by which DR-planning algorithms can be analyzed and compared. These measures include: generality, validity, completeness, Church-Rosser property, complexity, best- and worst-choice approximation factors, (strict) solvability preservation, ability to deal with underconstrained systems, and ability to incorporate conceptual design decompositions specified by the designer. The problem and several of the performance measures are formally defined here for the first time; they closely reflect specific requirements of CAD/CAM applications. The clear formulation of the problem and performance measures allows us to precisely analyze and compare existing DR-planners that use two well-known types of decomposition methods: SR (constraint shape recognition) and MM (generalized maximum matching) on constraint graphs. This analysis additionally serves to illustrate and provide intuitive substance to the newly formalized measures. In Part II of this article, we use the new performance measures to guide the development of a new DR-planning algorithm which excels with respect to these performance measures.
Numerical Computation Of A Polynomial GCD And Extensions
, 1996
Abstract

Cited by 24 (8 self)
In the first part of this paper, we define approximate polynomial gcds (greatest common divisors) and extended gcds, provided that approximations to the zeros of the input polynomials are available. We relate our novel definition to the older and weaker ones based on perturbation of the coefficients of the input polynomials, we demonstrate some deficiencies of the latter definitions (which our definition avoids), and we propose new effective sequential and parallel (RNC and NC) algorithms for computing approximate gcds and extended gcds. Our stronger results are obtained with no increase of the asymptotic bounds on the computational cost. This is partly due to the application of our recent nearly optimal algorithms for approximating polynomial zeros. In the second part of our paper, working under the older and more customary definition of approximate gcds, we modify and develop an alternative approach, which was previously based on the computation of the Singular Value Decomposition (SVD) of the associat...
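The root-based definition of an approximate gcd can be illustrated very roughly: pair up roots of the two polynomials that agree within a tolerance, then expand the product of the matched linear factors. This toy is not the paper's algorithm (which achieves near-optimal asymptotic cost); it only shows the mechanics of the definition, and the tolerance and root-averaging choices are our assumptions.

```python
def approx_gcd_roots(roots_f, roots_g, tol=1e-6):
    """Approximate gcd from approximate roots: match roots of f and g
    that agree to within tol, then expand prod (x - r) over the matched
    roots. Returns coefficients, highest degree first, monic."""
    shared, used = [], set()
    for r in roots_f:
        for j, s in enumerate(roots_g):
            if j not in used and abs(r - s) < tol:
                shared.append((r + s) / 2)   # average the paired roots
                used.add(j)
                break
    # Expand the product of linear factors into coefficients.
    coeffs = [1.0]
    for r in shared:
        new = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c          # the x * coeffs part
            new[i + 1] -= r * c  # the -r * coeffs part
        coeffs = new
    return coeffs

# f = (x - 1)(x - 2) and g = (x - 1)(x + 3), with slightly perturbed
# root approximations: the approximate gcd is close to x - 1.
print(approx_gcd_roots([1.0000001, 2.0], [0.9999999, -3.0]))
```

Under the older coefficient-perturbation definition this greedy matching can mis-pair clustered roots, which is one motivation the abstract gives for working from root approximations directly.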