On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract

Cited by 52 (19 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.2 log ‖A‖)^(1+o(1)) and (n^2.697263 log ‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (loglog ‖A‖)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.2 log ‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
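The division-free setting in the last sentences can be made concrete in a few lines: Laplace (cofactor) expansion computes a determinant with ring additions, subtractions and multiplications only, though in exponential time, whereas the paper achieves polynomial operation counts. A minimal sketch (plain Python, not the paper's algorithm):

```python
# Minimal illustration of division-free determinant computation: Laplace
# (cofactor) expansion uses only ring additions, subtractions and
# multiplications, so it works over any commutative ring -- but it takes
# exponential time, unlike the asymptotically fast algorithms of the paper.
def det_division_free(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_division_free(minor)
    return total
```

For example, det_division_free([[2, 3], [1, 4]]) returns 5.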
On efficient sparse integer matrix Smith normal form computations
, 2001
Abstract

Cited by 39 (17 self)
We present a new algorithm to compute the integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of word-size primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (elimination and/or black-box techniques), since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
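The invariant factors that the Smith form exhibits can be characterized directly through determinantal divisors: the k-th divisor d_k is the gcd of all k × k minors, and the k-th invariant factor is d_k / d_{k−1}. The sketch below (plain Python, exponential in the matrix size, so nothing like the paper's modular algorithm) makes that characterization concrete for small dense integer matrices:

```python
from math import gcd
from itertools import combinations

def minor_det(A, rows, cols):
    # Laplace expansion of the square submatrix of A picked out by rows/cols.
    if len(rows) == 1:
        return A[rows[0]][cols[0]]
    total = 0
    for k in range(len(cols)):
        sub = minor_det(A, rows[1:], cols[:k] + cols[k + 1:])
        total += (-1) ** k * A[rows[0]][cols[k]] * sub
    return total

def invariant_factors(A):
    # d_k = gcd of all k-by-k minors; k-th invariant factor is d_k / d_{k-1}.
    m, n = len(A), len(A[0])
    d = [1]
    for k in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                g = gcd(g, minor_det(A, rows, cols))
        if g == 0:
            break  # remaining invariant factors are zero
        d.append(g)
    return [d[k] // d[k - 1] for k in range(1, len(d))]
```

For instance, invariant_factors([[2, 4], [6, 8]]) gives [2, 4], i.e. Smith form diag(2, 4).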
On the complexity of the D5 principle
 In Proc. of Transgressive Computing 2006
, 2006
Abstract

Cited by 32 (21 self)
The D5 Principle was introduced in 1985 by Jean Della Dora, Claire Dicrescenzo and Dominique Duval in their celebrated note “About a new method for computing in algebraic number fields”. This innovative approach automates reasoning based on case discussion and is also known as “Dynamic Evaluation”. Applications of the D5 Principle have been made in Algebra, Computer Algebra, Geometry and Logic. Many algorithms for solving polynomial systems symbolically need to perform standard operations, such as GCD computations, over coefficient rings that are direct products of fields rather than fields. We show in this paper how asymptotically fast algorithms for polynomials over fields can be adapted to this more general context, thanks to the D5 Principle.
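A hypothetical integer analogue makes the case-splitting idea concrete: inverting a modulo a squarefree composite n behaves like inverting over a direct product of fields, and hitting a zero divisor forces the computation to split into branches. The names and the integer setting below are illustrative, not taken from the paper:

```python
from math import gcd

def dynamic_inverse(a, n):
    """Try to invert a modulo n. When a zero divisor is hit, split n and
    recurse on both factors (for squarefree n the branches are coprime,
    mimicking the direct-product-of-fields setting of dynamic evaluation).
    Returns a list of (modulus, inverse-or-None) branches."""
    g = gcd(a % n, n)
    if g == 1:
        return [(n, pow(a, -1, n))]
    if g == n:
        return [(n, None)]  # a is 0 in this branch: no inverse exists
    branches = []
    for m in (g, n // g):  # the D5-style case discussion
        branches.extend(dynamic_inverse(a, m))
    return branches
```

For example, dynamic_inverse(3, 15) splits into the branch modulo 3 (where 3 is zero, so no inverse) and the branch modulo 5 (where the inverse is 2).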
User interface design with matrix algebra
 ACM Transactions on CHI
, 2004
Abstract

Cited by 28 (11 self)
It is usually very hard, both for designers and users, to reason reliably about user interfaces. This article shows that ‘push button’ and ‘point and click’ user interfaces are algebraic structures. Users effectively do algebra when they interact, and therefore we can be precise about some important design issues and issues of usability. Matrix algebra, in particular, is useful for explicit calculation and for proof of various user interface properties. With matrix algebra, we are able to undertake with ease unusually thorough reviews of real user interfaces: this article examines a mobile phone, a handheld calculator and a digital multimeter as case studies, and draws general conclusions about the approach and its relevance to design.
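A toy version of the idea (device and names invented here, not from the article): model each button of a two-state lamp as a 0/1 transition matrix, so that pressing buttons in sequence corresponds to multiplying matrices, and properties such as idempotence become checkable calculations.

```python
# States as basis row vectors; buttons as transition matrices
# (row = current state, column = next state).
OFF, ON = [1, 0], [0, 1]
PRESS_ON = [[0, 1], [0, 1]]    # from either state the lamp ends up ON
PRESS_OFF = [[1, 0], [1, 0]]

def press(state, button):
    """Row vector times matrix: the state after one button press."""
    return [sum(state[i] * button[i][j] for i in range(2)) for j in range(2)]

def compose(b1, b2):
    """Matrix product: the single 'button' equivalent to pressing b1 then b2."""
    return [[sum(b1[i][k] * b2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Here compose(PRESS_ON, PRESS_ON) equals PRESS_ON, proving that pressing the ON button twice is the same as pressing it once, a small example of the kind of usability property the article verifies by matrix algebra.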
Approximate greatest common divisors of several polynomials with linearly constrained coefficients and singular polynomials
 Manuscript
, 2006
Abstract

Cited by 27 (13 self)
We consider the problem of computing minimal real or complex deformations to the coefficients in a list of relatively prime real or complex multivariate polynomials such that the deformed polynomials have a greatest common divisor (GCD) of at least a given degree k. In addition, we restrict the deformed coefficients by a given set of linear constraints, thus introducing the linearly constrained approximate GCD problem. We present an algorithm based on a version of the structured total least norm (STLN) method and demonstrate, on a diverse set of benchmark polynomials, that the algorithm in practice computes globally minimal approximations. As an application of the linearly constrained approximate GCD problem, we present an STLN-based method that computes, for a real or complex polynomial, the nearest real or complex polynomial that has a root of multiplicity at least k. We demonstrate that the algorithm in practice computes, on the benchmark polynomials given in the literature, the known globally optimal nearest singular polynomials. Our algorithms can handle, via randomized preconditioning, the difficult case when the nearest solution to a list of real input polynomials actually has non-real complex coefficients.
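The exact backbone behind the approximate problem can be sketched quickly: for univariate p and q of degrees m and n, deg gcd(p, q) = m + n − rank(Syl(p, q)), and the approximate GCD problem asks for the smallest coefficient perturbation that lowers that rank. The sketch below checks only the exact statement with rational arithmetic; the STLN machinery that handles the perturbation itself is not reproduced here:

```python
from fractions import Fraction

def sylvester(p, q):
    # p, q: coefficient lists, highest degree first.
    m, n = len(p) - 1, len(q) - 1
    rows = [[0] * i + list(p) + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + list(q) + [0] * (m - 1 - i) for i in range(m)]
    return rows

def rank(M):
    # Gaussian elimination over the rationals (exact, no rounding error).
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def gcd_degree(p, q):
    return (len(p) - 1) + (len(q) - 1) - rank(sylvester(p, q))
```

For example, gcd_degree([1, 0, -1], [1, -3, 2]) is 1, reflecting the common root x = 1 of x² − 1 and x² − 3x + 2; coprime inputs give 0.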
Fast Computation of Special Resultants
, 2006
Abstract

Cited by 23 (10 self)
We propose fast algorithms for computing composed products and composed sums, as well as diamond products of univariate polynomials. These operations correspond to special multivariate resultants, which we compute using power sums of the roots of the polynomials, by means of their generating series.
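The key identity behind the composed product is easy to check numerically: its k-th power sum factors as the product of the k-th power sums of the inputs. The paper computes these power sums from generating series without ever knowing the roots; the sketch below uses explicit roots only to keep the illustration short:

```python
# Composed product of f and g: the polynomial whose roots are a * b for each
# root a of f and each root b of g. Its k-th power sum is the product of the
# k-th power sums of f and g, which is what makes power-sum methods work.
def power_sum(roots, k):
    return sum(r ** k for r in roots)

f_roots, g_roots = [1, 2], [2, 3]                        # f, g given by roots
prod_roots = [a * b for a in f_roots for b in g_roots]   # roots {2, 3, 4, 6}

for k in range(1, 6):
    assert power_sum(prod_roots, k) == power_sum(f_roots, k) * power_sum(g_roots, k)
```

Newton's identities then recover the coefficients of the composed product from these power sums.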
Computing Parametric Geometric Resolutions
, 2001
Abstract

Cited by 22 (8 self)
Given a polynomial system of n equations in n unknowns that depends on some parameters, we define the notion of parametric geometric resolution as a means to represent some generic solutions in terms of the parameters. The coefficients of this resolution are rational functions of the parameters; we first show that their degree is bounded by the Bézout number d^n, where d is a bound on the degrees of the input system. We then present a probabilistic algorithm to compute such a resolution; in short, its complexity is polynomial in the size of the output, and the probability of success is controlled by a quantity polynomial in the Bézout number. We present several applications of this process, to computations in the Jacobian of hyperelliptic curves and to questions of real geometry.
Distribution results for low-weight binary representations for pairs of integers
 THEORET. COMPUT. SCI
, 2004
Abstract

Cited by 21 (16 self)
We discuss an optimal method for the computation of linear combinations of elements of Abelian groups, which uses signed digit expansions. This has applications in elliptic curve cryptography. We compute the expected number of operations asymptotically (including a periodically oscillating second-order term) and prove a central limit theorem. Apart from the usual right-to-left (i.e., least significant digit first) approach, we also discuss a left-to-right computation of the expansions. This exhibits fractal structures that are studied in some detail.
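The paper studies joint expansions of pairs of integers; the single-integer analogue of such a signed digit expansion is the classical non-adjacent form (NAF), which can be computed right to left in a few lines:

```python
def naf(n):
    """Non-adjacent form of n >= 0: signed digits in {-1, 0, 1}, least
    significant first, with no two adjacent nonzero digits. Fewer nonzero
    digits mean fewer group additions in double-and-add scalar multiplication,
    which is why such expansions matter in elliptic curve cryptography."""
    digits = []
    while n:
        if n % 2:
            d = 2 - (n % 4)  # choose d = +/-1 so that n - d is divisible by 4
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

For example, naf(7) is [-1, 0, 0, 1], i.e. 7 = 8 − 1: one nonzero digit fewer than the binary representation 111.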
The IPS Compiler: Optimizations, Variants and Concrete Efficiency
, 2011
Abstract

Cited by 17 (1 self)
In recent work, Ishai, Prabhakaran and Sahai (CRYPTO 2008) presented a new compiler (hereafter the IPS compiler) for constructing protocols that are secure in the presence of malicious adversaries without an honest majority from protocols that are only secure in the presence of semi-honest adversaries. The IPS compiler has many important properties: it provides a radically different way of obtaining security in the presence of malicious adversaries with no honest majority, it is black-box in the underlying semi-honest protocol, and it has excellent asymptotic efficiency. In this paper, we study the IPS compiler from a number of different angles. We present an efficiency improvement of the “watchlist setup phase” of the compiler that also facilitates a simpler and tighter analysis of the cheating probability. In addition, we present a conceptually simpler variant that uses protocols that are secure in the presence of covert adversaries as its basic building block. This variant can be used to achieve more efficient asymptotic security, as we show regarding black-box constructions of malicious oblivious transfer from semi-honest oblivious transfer. In addition, it deepens our understanding of the model of security in the presence of covert adversaries. Finally, we analyze the IPS compiler from a concrete efficiency perspective and demonstrate that in some cases it can be competitive with the best efficient protocols currently known.