Results 1–10 of 36
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract
Cited by 45 (17 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.2 log ‖A‖)^(1+o(1)) and (n^2.697263 log ‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (log log ‖A‖)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.5 log ‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
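For contrast with these asymptotically fast methods, a classical exact baseline is Bareiss fraction-free elimination, which computes an integer determinant using only exact integer divisions (it is not one of the paper's algorithms, and not fully division-free in the abstract-ring sense). A minimal sketch:

```python
def det_bareiss(a):
    """Determinant of an integer matrix by Bareiss fraction-free
    elimination; every interior division is exact over the integers,
    so no fractions ever appear."""
    a = [row[:] for row in a]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        # choose a nonzero pivot in column k, swapping rows if needed
        piv = next((i for i in range(k, n) if a[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    # after full elimination the bottom-right entry is the determinant
    return sign * a[n - 1][n - 1]
```

This runs in O(n^3) ring operations with intermediate entries of bit length O(n log(n‖A‖)), which is exactly the kind of cost the baby steps/giant steps approach improves upon.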
Approximate Factorization of Multivariate Polynomials via Differential Equations
 Manuscript
, 2004
Abstract
Cited by 36 (9 self)
The input to our algorithm is a multivariate polynomial f whose complex rational coefficients are considered imprecise with an unknown error that causes f to be irreducible over the complex numbers C. We seek to perturb the coefficients by a small quantity such that the resulting polynomial factors over C. Ideally, one would like to minimize the perturbation in some selected distance measure, but no efficient algorithm for that is known. We give a numerical multivariate greatest common divisor algorithm and use it on a numerical variant of algorithms by W. M. Ruppert and S. Gao. Our numerical factorizer makes repeated use of singular value decompositions. We demonstrate on a significant body of experimental data that our algorithm is practical and can find factorizable polynomials within a distance that is about the same in relative magnitude as the input error, even when the relative error in the input is substantial (10^-3).
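A univariate toy illustrates how a singular value decomposition detects a near-common factor, one ingredient of such numerical GCD computations; the `sylvester` helper, the example polynomials, and the 10^-9 perturbation are illustrative, not the paper's multivariate algorithm:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p and q (coefficients, highest degree first);
    it is singular exactly when p and q share a root."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

p = np.array([1.0, -3.0, 2.0])            # (x - 1)(x - 2)
q = np.array([1.0, -4.0, 3.0]) + 1e-9     # (x - 1)(x - 3), perturbed
s = np.linalg.svd(sylvester(p, q), compute_uv=False)
# the tiny last singular value reveals the near-common factor x - 1
```

The smallest singular value is of the same order as the perturbation, mirroring the abstract's claim that factorizable polynomials can be found within a distance comparable to the input error.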
Efficient Computation of Minimal Polynomials in Algebraic Extensions of Finite Fields
 In Proceedings of the 1999 International Symposium on Symbolic and Algebraic Computation (Vancouver, BC
, 1999
Abstract
Cited by 31 (0 self)
New algorithms are presented for computing the minimal polynomial over a finite field K of a given element in an algebraic extension of K of the form K[α] or K[α][β]. The new algorithms are explicit and can be implemented rather easily in terms of polynomial multiplication, and are much more efficient than other algorithms in the literature.
1 Introduction
In this paper, we consider the problem of computing the minimal polynomial over a finite field K of a given element σ in an algebraic extension of K of the form K[α] or K[α][β]. The minimal polynomial of σ is defined to be the unique monic polynomial φ_{σ/K} ∈ K[x] of least degree such that φ_{σ/K}(σ) = 0. In the first case, we assume that the ring K[α] is given as K[x]/(f), where f ∈ K[x] is a monic polynomial of degree n, and that elements of K[α] are represented in the natural way as elements of K[x]_{<n} (the set of polynomials of degree less than n). Similarly, in the second case, we assume that K[α] is given as a...
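The naive baseline these algorithms improve on finds the minimal polynomial of σ in GF(p)[x]/(f) by searching for the first linear dependence among the powers 1, σ, σ², .... A self-contained sketch for prime p (function names are my own; the paper's methods replace this linear algebra by polynomial multiplication):

```python
def polmulmod(a, b, f, p):
    """a*b mod (f, p); coefficient lists, lowest degree first, f monic."""
    n = len(f) - 1
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    for d in range(len(r) - 1, n - 1, -1):      # reduce modulo f
        c = r[d]
        if c:
            for k in range(len(f)):
                r[d - n + k] = (r[d - n + k] - c * f[k]) % p
    return r[:n] + [0] * (n - len(r))

def dependence(rows, p):
    """If rows[-1] is a GF(p)-combination of rows[:-1], return the
    coefficients of that combination, else None (p must be prime)."""
    k, n = len(rows) - 1, len(rows[0])
    basis = []                                  # (reduced vector, combination)
    for i in range(k):
        v, c = rows[i][:], [0] * k
        c[i] = 1
        for bv, bc in basis:
            j = next(jj for jj in range(n) if bv[jj])
            if v[j]:
                m = v[j] * pow(bv[j], p - 2, p) % p
                v = [(x - m * y) % p for x, y in zip(v, bv)]
                c = [(x - m * y) % p for x, y in zip(c, bc)]
        if any(v):
            basis.append((v, c))
    t, tc = rows[-1][:], [0] * k
    for bv, bc in basis:
        j = next(jj for jj in range(n) if bv[jj])
        if t[j]:
            m = t[j] * pow(bv[j], p - 2, p) % p
            t = [(x - m * y) % p for x, y in zip(t, bv)]
            tc = [(x + m * y) % p for x, y in zip(tc, bc)]
    return tc if not any(t) else None

def minpoly(sigma, f, p):
    """Minimal polynomial over GF(p) of sigma in GF(p)[x]/(f),
    lowest degree first, monic."""
    n = len(f) - 1
    rows, cur = [], [1] + [0] * (n - 1)
    while True:
        rows.append(cur[:])
        c = dependence(rows, p)
        if c is not None:
            # sigma^k = sum_i c_i sigma^i, so minpoly = x^k - sum_i c_i x^i
            return [(-ci) % p for ci in c] + [1]
        cur = polmulmod(cur, sigma, f, p)
```

For example, with f = x² + x + 1 over GF(2), both roots have minimal polynomial f itself, while constants recover their degree-one minimal polynomial over the base field.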
Linear recurrences with polynomial coefficients and computation of the Cartier–Manin operator on hyperelliptic curves
 In International Conference on Finite Fields and Applications (Toulouse
, 2004
Abstract
Cited by 21 (8 self)
We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier–Manin operator of hyperelliptic curves.
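To make the problem concrete, here is the naive way to compute one term of a recurrence with polynomial coefficients, written as a first-order vector recurrence U(k+1) = M(k)U(k); names and the factorial example are illustrative (the paper's baby-step/giant-step methods need only about √N polynomial matrix multiplications instead of N):

```python
def term(N, M, U0, p):
    """U(N) = M(N-1)...M(1)M(0)U(0) mod p, where M(k) is a matrix whose
    entries are functions (e.g. polynomials) of k.  Naive O(N) product."""
    U = list(U0)
    for k in range(N):
        Mk = [[e(k) % p for e in row] for row in M]
        U = [sum(Mk[i][j] * U[j] for j in range(len(U))) % p
             for i in range(len(U))]
    return U

# u(k+1) = (k+1) u(k), u(0) = 1, so u(N) = N!.  Fast evaluation of
# factorials modulo an integer is the key subroutine behind the
# deterministic integer-factoring application.
fact10 = term(10, [[lambda k: k + 1]], [1], 10**9 + 7)[0]
```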
Efficient Matrix Preconditioners for Black Box Linear Algebra
 LINEAR ALGEBRA AND APPLICATIONS 343–344 (2002), 119–146. SPECIAL ISSUE ON STRUCTURED AND INFINITE SYSTEMS OF LINEAR EQUATIONS
, 2001
Abstract
Cited by 21 (15 self)
The main idea of the "black box" approach in exact linear algebra is to reduce matrix problems to the computation of minimum polynomials. In most cases preconditioning is necessary to obtain the desired result. Here, good preconditioners are used to ensure geometric/algebraic properties of matrices rather than numerical ones, so we do not address condition numbers. We offer a review of problems for which (algebraic) preconditioning is used, provide a bestiary of preconditioning problems, and discuss several preconditioner types to solve these problems. We present new conditioners, including conditioners to preserve low displacement rank for Toeplitz-like matrices. We also provide new analyses of preconditioner performance and results on the relations among preconditioning problems and with linear algebra problems. Thus improvements are offered for the efficiency and applicability of preconditioners. The focus is on linear algebra problems over finite fields, but most results are valid for entries from arbitrary fields.
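A generic Wiedemann-style sketch of the black-box setting (not this paper's preconditioners; all names are illustrative): the only access to A is a matrix-vector product, the projected sequence u^T A^i v satisfies a linear recurrence dividing the minimal polynomial of A, and Berlekamp–Massey recovers that recurrence. Preconditioning is what makes the recovered recurrence equal the true minimal polynomial with high probability.

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence satisfied by s over GF(p), p prime;
    returns c with c[0] = 1 and sum_i c[i]*s[n-i] = 0 for all valid n."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        C = C + [0] * (len(B) + m - len(C))
        T = C[:]
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

p = 101
A = [[0, 1], [1, 1]]                      # treated as a black box below
def matvec(v):                            # the only access we need to A
    return [sum(A[i][j] * v[j] for j in range(2)) % p for i in range(2)]

u, v, seq = [1, 0], [0, 1], []
w = v
for _ in range(4):                        # 2n terms for an n x n matrix
    seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
    w = matvec(w)
rec = berlekamp_massey(seq, p)            # [1, -1, -1] mod 101, i.e. the
                                          # recurrence s(n)=s(n-1)+s(n-2)
```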
Towards Factoring Bivariate Approximate Polynomials
Abstract
Cited by 20 (0 self)
A new algorithm is presented for factoring bivariate approximate polynomials over C[x, y]. Given a particular polynomial, the method constructs a nearby composite polynomial, if one exists, and its irreducible factors. Subject to a conjecture, the time to produce the factors is polynomial in the degree of the problem. This method has been implemented in Maple, and has been demonstrated to be efficient and numerically robust.
On Approximate Irreducibility of Polynomials in Several Variables
Abstract
Cited by 19 (7 self)
We study the problem of bounding a polynomial away from polynomials which are absolutely irreducible. Such separation bounds are useful for testing whether a numerical polynomial is absolutely irreducible, given a certain tolerance on its coefficients. Using an absolute irreducibility criterion due to Ruppert, we are able to find useful separation bounds, in several norms, for bivariate polynomials. We also use Ruppert's criterion to derive new, more effective Noether forms for polynomials of arbitrarily many variables. These forms lead to small separation bounds for polynomials of arbitrarily many variables.
The approximate GCD of inexact polynomials part II: a multivariate algorithm
 In ISSAC 2004 Proc. 2004 Internat. Symp. Symbolic Algebraic Comput. (New
, 2004
Abstract
Cited by 18 (0 self)
This paper presents an algorithm and its implementation for computing the approximate GCD (greatest common divisor) of multivariate polynomials whose coefficients may be inexact. The method and the companion software appear to be the first practical package with such capabilities. The most significant features of the algorithm are its robustness and accuracy, as demonstrated in the results of computational experiments. In addition, two variations of a squarefree factorization algorithm for multivariate polynomials are proposed as an application of the GCD algorithm.
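The exact univariate counterpart of the squarefree application is simple to state: the repeated part of f is gcd(f, f'), and dividing it out leaves the squarefree part. A minimal exact sketch over the rationals (the paper's setting is approximate and multivariate; helper names are my own):

```python
from fractions import Fraction

def polmod(a, b):
    """Remainder of a / b; coefficient lists, lowest degree first."""
    a = [Fraction(c) for c in a]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        for i in range(1, len(b) + 1):
            a[-i] -= q * b[-i]
        a.pop()                           # leading coefficient is now zero
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def polgcd(a, b):
    """Monic GCD via the Euclidean algorithm."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        a, b = b, polmod(a, b)
    return [c / a[-1] for c in a]

def polquo(a, b):
    """Exact quotient of a by a divisor b."""
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * (len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        c = a[-1] / b[-1]
        q[len(a) - len(b)] = c
        for i in range(1, len(b) + 1):
            a[-i] -= c * b[-i]
        a.pop()
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return q

f = [2, -3, 0, 1]                         # x^3 - 3x + 2 = (x - 1)^2 (x + 2)
df = [c * i for i, c in enumerate(f)][1:] # derivative 3x^2 - 3
g = polgcd(f, df)                         # the repeated factor x - 1
squarefree = polquo(f, g)                 # (x - 1)(x + 2) = x^2 + x - 2
```

With inexact coefficients this Euclidean remainder sequence is numerically unstable, which is precisely why an approximate GCD algorithm is needed in the paper's setting.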
Fast Computation of Special Resultants
, 2006
Abstract
Cited by 18 (7 self)
We propose fast algorithms for computing composed products and composed sums, as well as diamond products of univariate polynomials. These operations correspond to special multivariate resultants, that we compute using power sums of roots of polynomials, by means of their generating series.
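The definitions are easy to state from the roots: the composed sum (resp. product) of p and q is the monic polynomial whose roots are all pairwise sums (resp. products) of a root of p and a root of q. A naive numerical sketch of the definition only (the paper computes these special resultants quasi-optimally via power sums, without ever computing roots):

```python
import numpy as np

def composed_sum(p, q):
    """Monic polynomial whose roots are the pairwise sums a + b of the
    roots a of p and b of q; coefficients highest degree first."""
    r = [a + b for a in np.roots(p) for b in np.roots(q)]
    return np.real_if_close(np.poly(r))

def composed_product(p, q):
    """Same, for the pairwise products a * b."""
    r = [a * b for a in np.roots(p) for b in np.roots(q)]
    return np.real_if_close(np.poly(r))

# p has roots {1, 2}; q has the single root {3}
cs = composed_sum([1, -3, 2], [1, -3])        # roots {4, 5}
cp = composed_product([1, -3, 2], [1, -3])    # roots {3, 6}
```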
Fast algorithms for zero-dimensional polynomial systems using duality
 APPLICABLE ALGEBRA IN ENGINEERING, COMMUNICATION AND COMPUTING
, 2001
Abstract
Cited by 16 (3 self)
Many questions concerning a zero-dimensional polynomial system can be reduced to linear algebra operations in the quotient algebra A = k[X1, ..., Xn]/I, where I is the ideal generated by the input system. Assuming that the multiplicative structure of the algebra A is (partly) known, we address the question of speeding up the linear algebra phase for the computation of minimal polynomials and rational parametrizations in A. We present new formulæ for the rational parametrizations, extending those of Rouillier, and algorithms extending ideas introduced by Shoup in the univariate case. Our approach is based on the A-module structure of the dual space Â. An important feature of our algorithms is that we do not require Â to be free and of rank 1. The complexities of our algorithms for computing the minimal polynomial and the rational parametrizations are O(2^n D^(5/2)) and O(n 2^n D^(5/2)) respectively, where D is the dimension of A. For fixed n, this is better than algorithms based on linear algebra, except when the exponent of the available matrix product is less than 5/2.
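The "linear algebra phase" being accelerated can be illustrated on a univariate toy quotient algebra: once the multiplication matrix of an element is known, its minimal polynomial in A is the minimal polynomial of that matrix, found naively via the first linear dependence among the matrix powers (the paper replaces this dense linear algebra with duality-based power projections):

```python
import numpy as np

# Toy algebra A = Q[x]/(x^3 - 2) with basis 1, x, x^2; the columns of M
# are the images of the basis vectors under multiplication by x.
M = np.array([[0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

def minimal_polynomial(M, tol=1e-9):
    """Stack vec(M^k) until the powers of M become linearly dependent;
    the first dependence gives the minimal polynomial (lowest degree
    first, monic).  Naive O(D^4)-style linear algebra."""
    n = M.shape[0]
    powers = [np.eye(n).ravel()]
    P = np.eye(n)
    while True:
        P = P @ M
        A = np.stack(powers, axis=1)
        c = np.linalg.lstsq(A, P.ravel(), rcond=None)[0]
        if np.linalg.norm(A @ c - P.ravel()) < tol:
            # M^k = sum_i c_i M^i, so minpoly = x^k - sum_i c_i x^i
            return np.concatenate([-c, [1.0]])
        powers.append(P.ravel())

mp = minimal_polynomial(M)       # x^3 - 2 for multiplication by x
```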