Results 1–10 of 26
Multivariate Polynomials, Duality, and Structured Matrices
J. of Complexity, 1999
Cited by 48 (29 self)
Abstract:
We first review the basic properties of the well-known classes of Toeplitz, Hankel, Vandermonde, and other related structured matrices and reexamine their correlation to operations with univariate polynomials. Then we define some natural extensions of such classes of matrices based on their correlation to multivariate polynomials. We describe the correlation in terms of the associated operators of multiplication in the polynomial ring and its dual space, which allows us to generalize these structures to the multivariate case. Multivariate Toeplitz, Hankel, and Vandermonde matrices, Bezoutians, algebraic residues and relations between them are studied. Finally, we show some applications of this study to root-finding problems for a system of multivariate polynomial equations, where the dual space, algebraic residues, Bezoutians and other structured matrices play an important role. The developed techniques enable us to obtain a better insight into the major problems of multivariate polynomial computations and to substantially improve the known techniques for the study of these problems. In particular, we simplify and/or generalize the known reduction of multivariate polynomial systems to a matrix eigenproblem, the derivation of the Bézout and Bernshtein bounds on the number of roots, and the construction of multiplication tables. From the point of view of algorithmic and computational complexity, we obtain an order-of-magnitude acceleration of the known methods for some fundamental problems of solving multivariate polynomial systems of equations.
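The univariate correlation that this abstract takes as its starting point can be made concrete: multiplication by a fixed polynomial acts on coefficient vectors as a banded lower-triangular Toeplitz matrix. A minimal sketch (illustrative code, not the authors' implementation; the function name is ours):

```python
import numpy as np

def toeplitz_mult_matrix(p, n):
    """Lower-triangular Toeplitz matrix T such that T @ q is the
    coefficient vector of p(x) * q(x), where q has n coefficients
    (lowest degree first)."""
    m = len(p)
    T = np.zeros((m + n - 1, n))
    for j in range(n):
        T[j:j + m, j] = p    # each column is a shifted copy of p
    return T

p = np.array([1.0, 2.0])        # p(x) = 1 + 2x
q = np.array([3.0, 0.0, 1.0])   # q(x) = 3 + x^2
prod = toeplitz_mult_matrix(p, len(q)) @ q
# coefficients of (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
```

The same matvec agrees with polynomial convolution, which is the structure the paper then generalizes to multivariate multiplication operators.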
Matrices in Elimination Theory
1997
Cited by 44 (16 self)
Abstract:
The last decade has witnessed the rebirth of resultant methods as a powerful computational tool for variable elimination and polynomial system solving. In particular, the advent of sparse elimination theory and toric varieties has provided ways to exploit the structure of polynomials encountered in a number of scientific and engineering applications. On the other hand, the Bezoutian reveals itself as an important tool in many areas connected to elimination theory and has its own merits, leading to new developments in effective algebraic geometry. This survey unifies the existing work on resultants, with emphasis on constructing matrices that generalize the classic matrices named after Sylvester, Bézout and Macaulay. The properties of the different matrix formulations are presented, including some complexity issues, with an emphasis on variable elimination theory. We compare toric resultant matrices to Macaulay's matrix and further conjecture the generalization of Macaulay's exact ratio...
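As a small illustration of the classic matrices this survey generalizes, the Sylvester matrix of two univariate polynomials has the resultant as its determinant, which vanishes exactly when the polynomials share a root. A hedged sketch (names are illustrative, not the survey's notation):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p and q (coefficients, highest degree
    first). Its determinant is the resultant of p and q: it is zero
    iff p and q have a common root."""
    m, n = len(p) - 1, len(q) - 1        # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                   # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = x^2 - 1 and q = x - 1 share the root x = 1, so the resultant vanishes
p = [1.0, 0.0, -1.0]
q = [1.0, -1.0]
res = np.linalg.det(sylvester(p, q))
```

Bézout and Macaulay matrices play the same role with smaller or multivariate constructions, which is the subject of the survey.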
A reordered Schur factorization method for zero-dimensional polynomial systems with multiple roots
In Proc. ACM Intern. Symp. on Symbolic and Algebraic Computation, 1997
Cited by 37 (3 self)
Abstract:
We discuss the use of a single generic linear combination of multiplication matrices, and its reordered Schur factorization, to find the roots of a system of multivariate polynomial equations. The principal contribution of the paper is to show how to reduce the multivariate problem to a univariate problem, even in the case of multiple roots, in a numerically stable way.
1 Introduction
The technique of solving systems of multivariate polynomial equations via eigenproblems has become a topic of active research (with applications in computer-aided design and control theory, for example) at least since the papers [2, 6, 9]. One may approach the problem via various resultant formulations or by Gröbner bases. As more understanding is gained, it is becoming clearer that eigenvalue problems are the "weakly nonlinear nucleus to which the original, strongly nonlinear task may be reduced" [13]. Early works concentrated on the case of simple roots. An example of such was the paper [5], which use...
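In the univariate case the "weakly nonlinear nucleus" is especially transparent: the matrix of multiplication by x modulo p (the companion matrix) has the roots of p as its eigenvalues. The sketch below shows only this univariate nucleus; the paper's reordered Schur factorization of a generic combination of multiplication matrices, which handles the multivariate and multiple-root cases, is not reproduced here:

```python
import numpy as np

def companion(c):
    """Matrix of multiplication by x modulo the monic polynomial
    x^n + c[n-1] x^{n-1} + ... + c[0]; its eigenvalues are exactly
    the roots of that polynomial."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # x * x^i = x^{i+1}
    C[:, -1] = -np.asarray(c, float)    # x * x^{n-1} reduced mod p
    return C

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = np.sort(np.linalg.eigvals(companion([-6.0, 11.0, -6.0])).real)
# roots ≈ [1., 2., 3.]
```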
Camera pose and calibration from 4 or 5 known 3D points
In Proc. 7th Int. Conf. on Computer Vision, 1999
Cited by 30 (0 self)
Abstract:
We describe two direct quasilinear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6-point ‘Direct Linear Transform’ method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in nondegenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition-based one that handles both planar and nonplanar cases. Our methods use recent polynomial-solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.
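For orientation, the 6-point DLT method that the paper generalizes can be sketched as a null-vector computation by SVD. This is a textbook version under idealized, noise-free assumptions (function name and test data are ours), not the authors' quasilinear 4- and 5-point solvers:

```python
import numpy as np

def dlt(X, x):
    """Direct Linear Transform: recover the 3x4 projection matrix P
    (up to scale) from n >= 6 correspondences between 3D points X
    (n x 3) and image points x (n x 2), as the SVD null vector of
    the stacked cross-product constraints x_i ~ P X_i."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# synthetic check: project known non-coplanar points with a known P
P = np.array([[800.0, 0.0, 320.0, 10.0],
              [0.0, 800.0, 240.0, 20.0],
              [0.0, 0.0, 1.0, 5.0]])
X = np.array([[0., 0., 1.], [1., 0., 3.], [0., 1., 2.], [1., 1., 5.],
              [2., 1., 1.], [1., 2., 4.], [2., 2., 3.]])
xh = (P @ np.c_[X, np.ones(len(X))].T).T
x = xh[:, :2] / xh[:, 2:]
P_est = dlt(X, x)   # equals P up to an overall scale factor
```

Note that the points above are deliberately non-coplanar; as the abstract states, the DLT family degenerates for coplanar configurations.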
Controlled Iterative Methods for Solving Polynomial Systems
1998
Cited by 14 (10 self)
Abstract:
For a system of polynomial equations, we seek its specified root, maximizing or minimizing the absolute value of a fixed polynomial over all roots of the system. The latter requirement on the root, which complicates the already difficult classical problem, is motivated by several practical applications. We first reduce the solution to the computation of the eigenvector of an associated matrix. Our novel treatment of this rather customary stage enables us to unify several known approaches and to substantially simplify the solution of an overconstrained polynomial system having only a simple root or a few roots. Likewise, when the reduction of a general polynomial system to an eigenproblem relies on Gröbner basis techniques, we also obtain substantial simplification. Then we elaborate on the application of the power method and the (shifted) inverse power method to the solution of the resulting eigenproblem. Our elaboration is not straightforward, since we carry out the computation while preserving the sparsity and structure of the associated matrix involved. This decreases the arithmetic cost by roughly a factor of N, where N denotes the dimension of the associated resultant matrix. Furthermore, our experiments show that our computations can be performed numerically, with single- or double-precision arithmetic, and that the iteration converges to the specified root quite fast.
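The two iterations this abstract elaborates on can be sketched in their textbook form. The paper's contribution lies in carrying them out while exploiting the sparsity and structure of the resultant matrix, which this dense sketch does not attempt:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Dominant eigenvalue/eigenvector of A by repeated matvecs."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v            # Rayleigh quotient, eigenvector

def inverse_power_iteration(A, shift, iters=50):
    """Eigenpair of A closest to `shift`, via the shifted inverse map
    (A - shift*I)^{-1}. Each step solves a linear system; this solve
    is where sparsity/structure of A can be exploited."""
    n = A.shape[0]
    B = A - shift * np.eye(n)
    v = np.ones(n)
    for _ in range(iters):
        v = np.linalg.solve(B, v)
        v /= np.linalg.norm(v)
    return v @ A @ v, v

A = np.diag([1.0, 3.0, 10.0])
lam_max, _ = power_iteration(A)                 # dominant eigenvalue, here 10
lam_near, _ = inverse_power_iteration(A, 2.5)   # eigenvalue nearest 2.5, here 3
```

The shift lets one steer the iteration toward the specified root rather than an arbitrary one.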
Computer algebra and algebraic geometry – achievements and perspectives
J. Symbolic Comput., 2000
Cited by 12 (1 self)
Abstract:
In this survey I should like to introduce some concepts of algebraic geometry and try to demonstrate the fruitful interaction between algebraic geometry and computer algebra and, more generally, between mathematics and computer science. One of the aims of this paper is to show, by means of examples, the usefulness of computer algebra to mathematical research. Computer algebra itself is a highly diversified discipline with applications to various areas of mathematics; many of these may be found in numerous research papers, proceedings or textbooks (cf. Buchberger and Winkler, 1998; Cohen et al., 1999; Matzat et al., 1998; ISSAC, 1988–1998). Here, I concentrate mainly on Gröbner bases and leave aside many other topics of computer algebra (cf. Davenport et al., 1988; von zur Gathen and Gerhard, 1999; Grabmeier et al., 2000). In particular, I do not mention (multivariate) polynomial factorization, another major and important tool in computational algebraic geometry. Gröbner bases were introduced originally by Buchberger as a computational tool for testing solvability of a system of polynomial equations, to count the number of solutions (with multiplicities) if this number is finite and, more algebraically, to compute in the quotient ring modulo the given polynomials. Since then, Gröbner bases have become the major computational tool, not only in algebraic geometry. The importance of Gröbner bases for mathematical research in algebraic geometry is obvious and nowadays their use needs hardly any justification. Indeed, chapters on Gröbner bases and Buchberger’s algorithm (Buchberger, 1965) have been incorporated in many new textbooks on algebraic geometry such as the books of Cox et al. (1992, 1998) or the recent books of Eisenbud (1995) and Vasconcelos (1998), not to mention textbooks which are devoted exclusively to Gröbner bases, such as Adams and Loustaunau (1994),
Gröbner bases, H-bases and interpolation
Trans. Amer. Math. Soc., 1999
Cited by 8 (5 self)
Abstract:
The paper is concerned with a construction for H-bases of polynomial ideals without relying on term orders. The main ingredient is a homogeneous reduction algorithm which orthogonalizes leading terms instead of completely canceling them. This allows for an extension of Buchberger’s algorithm to construct these H-bases algorithmically. In addition, the close connection of this approach to minimal degree interpolation, and in particular to the least interpolation scheme due to de Boor and Ron, is pointed out.
Computing of a Specified Root of a Polynomial System of Equations Using Eigenvectors
2000
Cited by 8 (5 self)
Abstract:
We propose new techniques and algorithms for the solution of a polynomial system of equations by matrix methods. For such a system, we seek its specified root, at which a fixed polynomial takes its maximum or minimum absolute value on the set of roots. We first reduce the solution to the computation of the eigenvector of an associated matrix. Our novel treatment enables us to unify several known approaches and to substantially simplify the solution, in particular in the case of an overconstrained polynomial system having only a simple root or a few roots. We reduce the solution of a polynomial system to the eigenvector problem for a matrix that we define implicitly, as a Schur complement in a sparse and structured matrix, and then we elaborate an appropriate modification of the known efficient methods for sparse eigenvector computation. This ...
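The implicit definition via a Schur complement can be illustrated as follows: for a 2x2 block matrix, the operator S = D - C A^{-1} B can be applied to a vector without ever forming S, so iterative eigensolvers that need only matrix-vector products can exploit the structure of the blocks. A minimal sketch with illustrative names (here A is simply diagonal, standing in for a structured block):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1.0, 2.0, 5))   # structured block: solves are cheap
B = rng.standard_normal((5, 3))
C = rng.standard_normal((3, 5))
D = rng.standard_normal((3, 3))

def schur_matvec(v):
    """Apply S = D - C A^{-1} B to v without forming S explicitly;
    only one solve with the structured block A is needed."""
    return D @ v - C @ np.linalg.solve(A, B @ v)

# explicit Schur complement, for comparison only
S_explicit = D - C @ np.linalg.solve(A, B)
v = rng.standard_normal(3)
```

An eigensolver such as the power or inverse power method can then be run against `schur_matvec` directly.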
The Complexity of the Algebraic Eigenproblem
1998
Cited by 7 (1 self)
Abstract:
The eigenproblem for an n-by-n matrix A is the problem of approximating (within a relative error bound 2^{-b}) all the eigenvalues of the matrix A and computing the associated eigenspaces of all these eigenvalues. We show that the arithmetic complexity of this problem is bounded by O(n^3 + (n log^2 n) log b). If the characteristic and minimum polynomials of the matrix A coincide with each other (which is the case for generic matrices of all classes of general and special matrices that we consider), then the latter deterministic cost bound can be replaced by the randomized bound O(K_A(2n) + n^2 + (n log^2 n) log b), where K_A(2n) denotes the cost of the computation of the 2n-1 vectors A^i v, i = 1, ..., 2n-1, maximized over all n-dimensional vectors v; K_A(2n) = O(M(n) log n), for M(n) = o(n^{2.376}) denoting the arithmetic complexity of n x n matrix multiplication. This bound on the complexity of the eigenproblem is optimal up to a logar...
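The quantity K_A(2n) in the randomized bound is the cost of a Krylov-type computation: the vectors A^i v are obtained by repeated matrix-vector products, never by forming matrix powers, so matrices with fast matvecs pay off directly. A small sketch (illustrative names; a diagonal A stands in for a structured matrix with O(n) matvec cost):

```python
import numpy as np

def krylov_sequence(A, v, k):
    """The Krylov vectors v, A v, A^2 v, ..., A^{k-1} v, computed
    with k - 1 matrix-vector products (cost (k-1) * cost(matvec)),
    returned as the columns of an n x k matrix."""
    vs = [v]
    for _ in range(k - 1):
        vs.append(A @ vs[-1])   # one matvec per new vector
    return np.column_stack(vs)

n = 4
A = np.diag(np.arange(1.0, n + 1))   # structured: matvec costs O(n)
v = np.ones(n)
K = krylov_sequence(A, v, 2 * n)     # v together with A^i v, i = 1, ..., 2n-1
```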
Closed-Form Blind Channel Identification with MSK Inputs
In Asilomar Conference
Cited by 5 (1 self)
Abstract:
Blind equalization of non-minimum-phase FIR channels requires prior identification, for stability reasons. We present a novel algorithm able to identify a channel in the presence of an unknown MSK-modulated input (which can be viewed as an approximation of the GMSK modulation used in GSM mobile systems), by resorting only to output second-order moments. Blind identification is made possible because the input is not circular. It is shown that this approach leads to a system of L quadrics in L unknowns, where L denotes the number of taps of the unknown FIR channel. This system is then solved with the help of an original algorithm based on resultant techniques. Performance in terms of bit error rate is finally reported.