Results 1–10 of 13
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract

Cited by 47 (17 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.5 log‖A‖)^(1+o(1)) and (n^2.697263 log‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C_1 (log n)^C_2 (loglog ‖A‖)^C_3 for positive real constants C_1, C_2, C_3. The bit complexity (n^3.5 log‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.5+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
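As background for the integer-determinant entries in this list, the modular approach such algorithms compete with can be sketched in a few lines of Python. This is not the paper's baby steps/giant steps method, only the textbook Chinese-remainder scheme; the matrix and prime set are illustrative, with the product of the primes assumed to exceed twice Hadamard's bound on |det A|.

```python
from math import prod

def det_mod_p(M, p):
    """Determinant of a square integer matrix by Gaussian elimination over GF(p)."""
    M = [[x % p for x in row] for row in M]
    n, det = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return 0
        if piv != i:                      # a row swap flips the sign
            M[i], M[piv] = M[piv], M[i]
            det = -det % p
        det = det * M[i][i] % p
        inv = pow(M[i][i], -1, p)
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            M[r] = [(M[r][c] - f * M[i][c]) % p for c in range(n)]
    return det

def det_crt(A, primes):
    """Recover det(A) from its residues modulo pairwise-distinct primes via CRT."""
    m = prod(primes)
    x = 0
    for p in primes:
        Mp = m // p
        x = (x + det_mod_p(A, p) * Mp * pow(Mp, -1, p)) % m
    return x if x <= m // 2 else x - m    # balanced representative

A = [[2, 3, 1], [4, 1, 5], [7, 2, 6]]     # det(A) = 26
print(det_crt(A, [101, 103, 107]))        # prime product well above 2*|det(A)|
```

Each residue costs one elimination over a word-size field; the papers above improve on this baseline by reducing the number and cost of the modular computations.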
An Output-Sensitive Variant of the Baby Steps/Giant Steps Determinant Algorithm
, 2001
Abstract

Cited by 10 (2 self)
This paper provides an adaptive version of the unblocked baby steps/giant steps algorithm [20, Section 2]. The result is most easily stated when the entries of the matrix have bit length at most b, Δ is the determinant to be computed, and the bit length of Δ is not known in advance. Note that by Hadamard's bound log_2 |Δ| ≤ n(b + log_2(n)/2), which covers the worst case. We describe a Monte Carlo algorithm that produces Δ in an output-sensitive number of bit operations, again with standard matrix arithmetic; the corresponding bit complexity of the early termination Gaussian elimination method is always more, and the paper also compares against the bit complexity of the algorithm by [10].
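The Hadamard bound quoted in the abstract is easy to check numerically; the sketch below (illustrative 3 × 3 matrix, b = 4) shows how far below the worst-case bound a typical determinant falls, which is exactly the gap an output-sensitive algorithm exploits.

```python
from math import log2

def hadamard_log2_bound(n, b):
    """Worst-case bit length of det(A): |det A| <= (sqrt(n) * 2**b)**n
    for an n x n matrix with entries bounded by 2**b in absolute value."""
    return n * (b + log2(n) / 2)

# Illustrative 3 x 3 matrix with entries below 2**4:
A = [[15, 7, 2], [3, 14, 9], [8, 1, 13]]
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
print(det, log2(abs(det)), hadamard_log2_bound(3, 4))
```

Here det = 2608, about 11.4 bits, against a worst-case bound of about 14.4 bits; for large n the gap is typically much wider.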
Certification of the QR Factor R, and of Lattice Basis Reducedness
Abstract

Cited by 5 (1 self)
Given a lattice basis of n vectors in Z^n, we propose an algorithm using 12n^3 + O(n^2) floating point operations for checking whether the basis is LLL-reduced. If the basis is reduced then the algorithm will hopefully answer "yes". If the basis is not reduced, or if the precision used is not sufficient with respect to n and to the numerical properties of the basis, the algorithm will answer "failed". Hence a positive answer is a rigorous certificate. For implementing the certificate itself, we propose a floating point algorithm for computing certified error bounds for the R factor of the QR factorization. This algorithm takes into account all possible approximation and rounding errors. The certificate may be implemented using matrix library routines only. We report experiments showing that for a reduced basis of adequate dimension and quality the certificate succeeds, which establishes its effectiveness. We apply the certificate to the output of the fastest existing floating point LLL reduction heuristics, certifying their results without slowing down the whole process.
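For contrast with the certified test proposed in the paper, the plain floating-point LLL-reducedness check (size reduction plus the Lovász condition on the Gram-Schmidt vectors, with delta = 3/4) looks as follows; unlike the paper's certificate, no rounding errors are accounted for, so a True here is only heuristic.

```python
def gram_schmidt(B):
    """Return the Gram-Schmidt vectors B* and coefficients mu for basis rows B."""
    n = len(B)
    Bs = [[float(x) for x in row] for row in B]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = (sum(a * b for a, b in zip(B[i], Bs[j]))
                        / sum(x * x for x in Bs[j]))
            Bs[i] = [a - mu[i][j] * b for a, b in zip(Bs[i], Bs[j])]
    return Bs, mu

def is_lll_reduced(B, delta=0.75):
    """Heuristic floating-point test: size reduction plus the Lovasz condition."""
    Bs, mu = gram_schmidt(B)
    n = len(B)
    if any(abs(mu[i][j]) > 0.5 + 1e-12 for i in range(n) for j in range(i)):
        return False                       # not size-reduced
    for i in range(1, n):
        lhs = sum(x * x for x in Bs[i]) + mu[i][i - 1] ** 2 * sum(x * x for x in Bs[i - 1])
        if lhs < (delta - 1e-12) * sum(x * x for x in Bs[i - 1]):
            return False                   # Lovasz condition violated
    return True

print(is_lll_reduced([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))   # a reduced basis
print(is_lll_reduced([[1, 0], [100, 1]]))                   # clearly not reduced
```

The paper's contribution is to replace the exact Gram-Schmidt data above with certified interval bounds on the R factor of a QR factorization, so that a "yes" survives floating-point error.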
Toeplitz and Hankel Meet Hensel and Newton: Nearly Optimal Algorithms and Their Practical Acceleration with Saturated Initialization
 Program in Computer Science, The Graduate
, 2004
Abstract

Cited by 4 (3 self)
We extend Hensel lifting for solving general and structured linear systems of equations to the rings of integers modulo non-primes, e.g. modulo a power of two. This enables significant savings in word operations. We elaborate upon this approach in the case of Toeplitz linear systems. In this case, we initialize the lifting with the MBA superfast algorithm, estimate that the overall bit operation (Boolean) cost of the solution is optimal up to roughly a logarithmic factor, and prove that degeneration is unlikely even when the basic prime is fixed but the input matrix is random. We also comment on the extension of our algorithm to some other fundamental computations with (possibly singular) general and structured matrices and univariate polynomials, as well as to the computation of the sign and the value of the determinant of an integer matrix.
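Plain linear Hensel lifting, the starting point that the paper extends to non-prime moduli and Toeplitz structure, can be sketched as follows; the matrix, right-hand side, and inverse mod 2 are illustrative, and no structured (superfast) arithmetic is used.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def hensel_solve(A, b, C, p, k):
    """Lift a solution of A x = b from mod p to mod p**k.
    C must be the inverse of A mod p; each step fixes one more p-adic digit."""
    x = [0] * len(b)
    for i in range(k):
        # the residual b - A x is divisible by p**i by construction
        s = [(bi - yi) // p**i % p for bi, yi in zip(b, matvec(A, x))]
        d = [ci % p for ci in matvec(C, s)]        # next digit: d = C s mod p
        x = [xi + di * p**i for xi, di in zip(x, d)]
    return [xi % p**k for xi in x]

A = [[3, 2], [1, 1]]          # det(A) = 1, so A is invertible mod 2
C = [[1, 0], [1, 1]]          # A^{-1} mod 2
print(hensel_solve(A, [5, 3], C, 2, 8))   # solution of A x = [5, 3] mod 2**8
```

The attraction of p = 2 is that every reduction mod p**i is a bit mask and every division by p**i a shift, which is where the word-operation savings mentioned in the abstract come from.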
An introspective algorithm for the integer determinant
 In: Proceedings of Transgressive Computing 2006
, 2006
Abstract

Cited by 3 (2 self)
ljk.imag.fr/membres/{JeanGuillaume.Dumas;Anna.Urbanska} We present an algorithm for computing the determinant of an integer matrix A. The algorithm is introspective in the sense that it uses several distinct algorithms that run in a concurrent manner. During the course of the computation, partial results coming from distinct methods can be combined. Then, depending on the current running time of each method, the algorithm can emphasize a particular variant. With the use of very fast modular routines for linear algebra, our implementation is an order of magnitude faster than other existing implementations. Moreover, we prove that the expected complexity of our algorithm is only O(n^3 log^2.5(n‖A‖)) bit operations in the case of random dense matrices, where n is the dimension and ‖A‖ is the largest entry of the matrix in absolute value.
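The combination of partial modular results with early termination can be illustrated in a much simplified form (one method, no concurrency) as follows; the matrix and prime list are illustrative.

```python
def det_mod_p(M, p):
    """Determinant modulo a prime p by Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    n, det = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det % p
        det = det * M[i][i] % p
        inv = pow(M[i][i], -1, p)
        for r in range(i + 1, n):
            f = M[r][i] * inv % p
            M[r] = [(M[r][c] - f * M[i][c]) % p for c in range(n)]
    return det

def det_early_terminate(A, primes):
    """Fold in one prime at a time and stop as soon as the balanced CRT
    reconstruction survives an extra prime unchanged (Monte Carlo answer)."""
    m, x, prev = 1, 0, None
    for p in primes:
        r = det_mod_p(A, p)
        t = (r - x) * pow(m, -1, p) % p   # combine x mod m with r mod p
        x, m = x + t * m, m * p
        bal = x if x <= m // 2 else x - m
        if bal == prev:
            return bal
        prev = bal
    return prev

A = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]     # det(A) = 13, far below Hadamard's bound
print(det_early_terminate(A, [97, 101, 103, 107]))
```

Because the determinant here is tiny, the loop stops after two primes instead of consuming the whole worst-case prime list, which is the effect the introspective scheme exploits adaptively across several methods at once.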
Bayesian out-trees
 Uncertainty in Artificial Intelligence
, 2008
Abstract

Cited by 2 (1 self)
A Bayesian treatment of latent directed graph structure for non-iid data is provided where each child datum is sampled with a directed conditional dependence on a single unknown parent datum. The latent graph structure is assumed to lie in the family of directed out-tree graphs, which leads to efficient Bayesian inference. The latent likelihood of the data and its gradients are computable in closed form via Tutte’s directed matrix tree theorem using determinants and inverses of the out-Laplacian. This novel likelihood subsumes iid likelihood, is exchangeable, and yields efficient unsupervised and semi-supervised learning algorithms. In addition to handling taxonomy and phylogenetic datasets, the out-tree assumption performs surprisingly well as a semi-parametric density estimator on standard iid datasets. Experiments with unsupervised and semi-supervised learning are shown on various UCI and taxonomy datasets.
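The determinant computation behind Tutte's directed matrix tree theorem can be illustrated on unweighted graphs: the number of spanning out-trees rooted at r equals the determinant of the in-degree Laplacian with row and column r deleted. The example below is only this combinatorial core; the weighting by conditional likelihoods used in the paper is omitted.

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

def count_out_trees(adj, root):
    """Number of spanning out-trees rooted at `root`, by the matrix tree theorem:
    delete the root's row and column from the in-degree Laplacian."""
    n = len(adj)
    L = [[(sum(adj[k][j] for k in range(n)) if i == j else 0) - adj[i][j]
          for j in range(n)] for i in range(n)]
    minor = [[L[i][j] for j in range(n) if j != root] for i in range(n) if i != root]
    return int(det(minor))

# complete digraph on 3 vertices: 3 out-trees from any fixed root
print(count_out_trees([[0, 1, 1], [1, 0, 1], [1, 1, 0]], 0))
```

With edge weights in place of 0/1 entries, the same minor determinant sums the weights of all out-trees, which is what makes the likelihood and its gradients computable in closed form.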
Additive Preconditioning and Aggregation in Matrix Computations
Abstract

Cited by 1 (1 self)
Multiplicative preconditioning is a popular SVD-based technique for the solution of linear systems of equations, but our SVD-free additive preconditioners are more readily available and better preserve matrix structure. We combine additive preconditioning with aggregation and other relevant techniques to facilitate the solution of linear systems of equations and some other fundamental matrix computations. Our analysis and experiments show the power of our approach, guide us in selecting the most effective policies of preconditioning and aggregation, and provide some new insights into these and related subjects of matrix computations.
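A toy instance of additive preconditioning, using the simplest possible additive term (a small multiple of the identity on a 2 × 2 symmetric matrix) rather than the random low-rank terms the paper advocates:

```python
from math import hypot

def cond_sym2(a, b, c):
    """Spectral condition number of the nonsingular symmetric matrix [[a, b], [b, c]]."""
    mid, rad = (a + c) / 2, hypot((a - c) / 2, b)
    lo, hi = mid - rad, mid + rad          # the two eigenvalues
    return abs(hi) / abs(lo)

# A nearly singular matrix: condition number around 4e8
print(cond_sym2(1.0, 1.0, 1.0 + 1e-8))
# After adding the term 0.1 * I the condition number drops to about 21
print(cond_sym2(1.1, 1.0, 1.1 + 1e-8))
```

The shift by 0.1·I changes the problem being solved, of course; the point of the aggregation techniques in the paper is to recover the solution of the original system from the well-conditioned modified one.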
Some Inequalities Related to the Seysen Measure of a Lattice
, 2009
Abstract
Given a lattice L and a basis B of L together with its dual B*, the orthogonality measure S(B) = Σ_i ‖b_i‖^2 ‖b*_i‖^2 of B was introduced by M. Seysen [9] in 1993. This measure (the Seysen measure in the sequel, also known as the Seysen metric [11]) is at the heart of the Seysen lattice reduction algorithm and is linked with different geometrical properties of the basis [8, 7, 10, 11]. In this paper, we derive explicit expressions for this measure as well as new inequalities.
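A direct way to evaluate S(B) is through the Gram matrix G = B Bᵀ, since ‖b_i‖² is the i-th diagonal entry of G and ‖b*_i‖² the i-th diagonal entry of G⁻¹; the sketch below uses exact rational arithmetic on a small illustrative basis.

```python
from fractions import Fraction

def invert(M):
    """Exact inverse of a nonsingular matrix by Gauss-Jordan over the rationals."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        piv = next(r for r in range(i, n) if A[r][i])
        A[i], A[piv] = A[piv], A[i]
        f = A[i][i]
        A[i] = [x / f for x in A[i]]
        for r in range(n):
            if r != i and A[r][i]:
                g = A[r][i]
                A[r] = [x - g * y for x, y in zip(A[r], A[i])]
    return [row[n:] for row in A]

def seysen_measure(B):
    """S(B) = sum_i |b_i|^2 |b*_i|^2 for basis rows B, via the Gram matrix:
    |b_i|^2 = G[i][i] and |b*_i|^2 = (G^{-1})[i][i] with G = B B^T."""
    n = len(B)
    G = [[sum(Fraction(x) * y for x, y in zip(B[i], B[j])) for j in range(n)]
         for i in range(n)]
    Ginv = invert(G)
    return sum(G[i][i] * Ginv[i][i] for i in range(n))

print(seysen_measure([[1, 0], [0, 1]]))   # orthogonal basis: the minimum S(B) = n
print(seysen_measure([[1, 0], [1, 1]]))
```

For an orthogonal basis each product ‖b_i‖²‖b*_i‖² equals 1, so S(B) = n is the minimum; any skew, as in the second basis, pushes the measure up.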