Results 1-10 of 103
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. Technical Report 2003/235, Cryptology ePrint Archive, http://eprint.iacr.org, 2006. A previous version appeared at EUROCRYPT 2004.
Yevgeniy Dodis, Leonid Reyzin, and Adam Smith
, 2004
Cited by 318 (34 self)
Abstract:
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
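The code-offset idea behind secure sketches can be illustrated in a few lines. The sketch below is a toy Python rendering for Hamming distance using a 3-fold repetition code (an illustrative code choice, not one of the paper's near-optimal constructions): the public helper data s = w XOR C(r) reveals nothing about w beyond the code's redundancy, and any w' within one flipped bit per block recovers w exactly.

```python
import random

REP = 3  # each data bit is protected by a length-3 repetition code

def sketch(w, rng):
    """Code-offset secure sketch: s = w XOR C(r) for a random codeword C(r)."""
    assert len(w) % REP == 0
    r = [rng.randrange(2) for _ in range(len(w) // REP)]
    codeword = [bit for bit in r for _ in range(REP)]
    return [wi ^ ci for wi, ci in zip(w, codeword)]

def recover(w_prime, s):
    """Rec(w', s): decode w' XOR s to the nearest codeword, then re-offset by s."""
    delta = [wi ^ si for wi, si in zip(w_prime, s)]
    decoded = []
    for i in range(0, len(delta), REP):
        block = delta[i:i + REP]
        decoded.append(1 if sum(block) * 2 > REP else 0)  # majority vote
    codeword = [bit for bit in decoded for _ in range(REP)]
    return [si ^ ci for si, ci in zip(s, codeword)]

rng = random.Random(42)
w = [rng.randrange(2) for _ in range(12)]   # original noisy reading
s = sketch(w, rng)                          # public helper data
w_noisy = list(w)
w_noisy[1] ^= 1                             # flip one bit in the first block
w_noisy[7] ^= 1                             # and one in the third block
assert recover(w_noisy, s) == w             # exact recovery of w
```

Because delta = C(r) XOR e for the error pattern e, majority decoding of each block recovers r whenever each block carries at most one error, and s XOR C(r) then reproduces w exactly.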
On efficient sparse integer matrix Smith normal form computations
, 2001
Cited by 36 (16 self)
Abstract:
We present a new algorithm to compute the integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of word-size primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (elimination and/or black-box techniques) since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
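As a point of reference for what the Smith normal form computes, here is a tiny brute-force Python sketch (not the paper's modular, parallel method) that obtains the invariant factors from determinantal divisors: s_k = d_k / d_{k-1}, where d_k is the gcd of all k x k minors. It is exponential in the matrix size and useful only as a demo or cross-check.

```python
from itertools import combinations
from math import gcd

def minor_det(M, rows, cols):
    """Determinant of the submatrix M[rows][cols], by Laplace expansion."""
    if len(rows) == 1:
        return M[rows[0]][cols[0]]
    return sum((-1) ** k * M[rows[0]][cols[k]]
               * minor_det(M, rows[1:], cols[:k] + cols[k + 1:])
               for k in range(len(cols)))

def smith_diagonal(M):
    """Invariant factors via determinantal divisors: s_k = d_k / d_{k-1},
    where d_k is the gcd of all k x k minors. Exponential time; demo only."""
    n = len(M)
    d_prev, result = 1, []
    for k in range(1, n + 1):
        d_k = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                d_k = gcd(d_k, minor_det(M, rows, cols))
        result.append(d_k // d_prev)
        d_prev = d_k
    return result

M = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
print(smith_diagonal(M))  # [2, 6, 12]
```

Each invariant factor divides the next, which the example exhibits: 2 | 6 | 12.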
The complexity of class polynomial computation via floating point approximations. ArXiv preprint
, 601
Cited by 34 (5 self)
Abstract:
We analyse the complexity of computing class polynomials, an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. Under the heuristic assumption, justified by experiments, that the correctness of the result is not perturbed by rounding errors, the algorithm runs in time O(√D log^3 D · M(√D log^2 D)) ⊆ O(D log^(6+ε) D) ⊆ O(h^(2+ε)) for any ε > 0, where D is the CM discriminant, h is the degree of the class polynomial and M(n) is the time needed to multiply two n-bit numbers. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary quadratic order and on a rigorously proven upper bound for the height of class polynomials.
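The arithmetic-geometric mean at the heart of Dupont's evaluation technique is itself a two-line iteration with quadratic convergence, which is why AGM-based function evaluation needs only O(log precision) iterations. A minimal sketch, checked against the classical value M(1, √2), the reciprocal of Gauss's constant:

```python
import math

def agm(a, b, iterations=40):
    """Arithmetic-geometric mean: iterate (a, b) -> ((a+b)/2, sqrt(a*b)).
    Convergence is quadratic, so a fixed small iteration count suffices
    at double precision."""
    for _ in range(iterations):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Classical check: M(1, sqrt(2)) = 1.1981402347..., the reciprocal
# of Gauss's constant 0.8346268...
print(agm(1.0, math.sqrt(2.0)))
```

At higher precision the same iteration is run with big-float arithmetic; the doubling of correct digits per step is what the quasi-linear complexity estimates exploit.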
Computing Simplicial Homology Based on Efficient Smith Normal Form Algorithms
, 2002
Cited by 31 (1 self)
Abstract:
We recall that the calculation of homology with integer coefficients of a simplicial complex reduces to the calculation of the Smith Normal Form of the boundary matrices, which in general are sparse. We provide a review of several algorithms for the calculation of Smith Normal Form of sparse matrices and compare their running times for actual boundary matrices. Then we describe alternative approaches to the calculation of simplicial homology. The last section describes motivating examples and actual experiments with the GAP package that was implemented by the authors. These examples also include, as an instance of other homology theories, some calculations of Lie algebra homology.
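The reduction can be illustrated over the rationals, where only ranks matter: β_k = dim C_k − rank ∂_k − rank ∂_{k+1}. The sketch below (torsion, which requires the full Smith Normal Form, is ignored) computes the Betti numbers of a hollow triangle, i.e. a circle:

```python
from fractions import Fraction

def rank(M):
    """Rank over Q by Gaussian elimination with exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == len(A):
            break
    return r

# Hollow triangle (a circle): vertices a, b, c; edges ab, bc, ca.
# boundary_1 maps edges to vertices: d(ab) = b - a, etc.
boundary_1 = [[-1,  0,  1],   # one row per vertex, one column per edge
              [ 1, -1,  0],
              [ 0,  1, -1]]
n_vertices, n_edges = 3, 3
r1 = rank(boundary_1)
betti_0 = n_vertices - 0 - r1   # no d_0; one connected component
betti_1 = n_edges - r1 - 0      # no 2-cells; one independent loop
print(betti_0, betti_1)  # 1 1
```

Replacing the rank computation by the Smith Normal Form of the same boundary matrices would additionally reveal the torsion coefficients.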
THE COMPLETE GENERATING FUNCTION FOR GESSEL WALKS IS ALGEBRAIC
Cited by 28 (6 self)
Abstract:
Gessel walks are lattice walks in the quarter plane ℕ² which start at the origin (0, 0) ∈ ℕ² and consist only of steps chosen from the set {←, ↙, ↗, →}. We prove that if g(n; i, j) denotes the number of Gessel walks of length n which end at the point (i, j) ∈ ℕ², then the trivariate generating series G(t; x, y) = Σ_{n,i,j≥0} g(n; i, j) x^i y^j t^n is an algebraic function.
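The counting problem itself is elementary to state; a short dynamic-programming sketch (illustrative brute force, unrelated to the algebraicity proof) tabulates walk counts and reproduces the first excursion numbers 1, 2, 11, 85:

```python
STEPS = [(1, 0), (-1, 0), (1, 1), (-1, -1)]  # E, W, NE, SW

def gessel_walks(n, target=(0, 0)):
    """Count length-n walks from (0,0) to target staying in the quarter plane."""
    counts = {(0, 0): 1}
    for _ in range(n):
        nxt = {}
        for (x, y), c in counts.items():
            for dx, dy in STEPS:
                p = (x + dx, y + dy)
                if p[0] >= 0 and p[1] >= 0:
                    nxt[p] = nxt.get(p, 0) + c
        counts = nxt
    return counts.get(target, 0)

# Walks of length 2n ending at the origin: 1, 2, 11, 85, ... which matches
# the closed form 16^n (5/6)_n (1/2)_n / ((2)_n (5/3)_n) of Gessel's conjecture.
print([gessel_walks(2 * n) for n in range(4)])  # [1, 2, 11, 85]
```

Only walks of even length can return to the origin, since every step changes x-parity.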
PRIMES is in P
 Ann. of Math.
, 2002
Cited by 26 (2 self)
Abstract:
We present an unconditional deterministic polynomial-time algorithm that determines whether an input number is prime or composite.
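The test is short enough to transcribe directly. Below is an unoptimized Python rendering of the five AKS steps as they appear in standard presentations (naive polynomial arithmetic, practical only for very small inputs; a serious implementation would use fast modular polynomial arithmetic):

```python
import math

def aks_is_prime(n):
    """Unoptimized transcription of the five steps of the AKS primality test."""
    if n < 2:
        return False
    # Step 1: reject perfect powers a^b with b >= 2.
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        for c in (a - 1, a, a + 1):
            if c > 1 and c ** b == n:
                return False
    # Step 2: find the least r with ord_r(n) > (log2 n)^2.
    maxk = int(math.log2(n) ** 2)
    r = 2
    while True:
        if math.gcd(n, r) == 1:
            k, x = 1, n % r
            while x != 1 and k <= maxk:
                x = x * n % r
                k += 1
            if k > maxk:
                break
        r += 1
    # Step 3: a nontrivial factor shared with 2..r means composite.
    for a in range(2, r + 1):
        g = math.gcd(a, n)
        if 1 < g < n:
            return False
    # Step 4: small n are prime at this point.
    if n <= r:
        return True
    # Step 5: check (x + a)^n = x^n + a  (mod x^r - 1, n) for enough values a.
    phi = sum(1 for k in range(1, r) if math.gcd(k, r) == 1)
    limit = int(math.sqrt(phi) * math.log2(n))
    for a in range(1, limit + 1):
        lhs = polypow([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[n % r] = 1
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True

def polymul(p, q, r, n):
    """Multiply two polynomials modulo (x^r - 1, n)."""
    res = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                res[(i + j) % r] = (res[(i + j) % r] + pi * qj) % n
    return res

def polypow(base, e, r, n):
    """Square-and-multiply exponentiation modulo (x^r - 1, n)."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymul(result, base, r, n)
        base = polymul(base, base, r, n)
        e >>= 1
    return result

print([k for k in range(2, 40) if aks_is_prime(k)])  # primes below 40
```

For a prime n the step-5 congruence holds identically by the freshman's dream (x + a)^n ≡ x^n + a^n mod n together with Fermat's little theorem; the content of the AKS theorem is that it fails for every composite surviving steps 1-4.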
On Lattice Reduction for Polynomial Matrices
 Journal of Symbolic Computation
, 2000
Cited by 24 (1 self)
Abstract:
A simple algorithm for transformation to weak Popov form (essentially lattice reduction for polynomial matrices) is described and analyzed. The algorithm is adapted and applied to various tasks involving polynomial matrices: rank profile and determinant computation; unimodular triangular factorization; transformation to Hermite and Popov canonical form; rational and Diophantine linear system solving; short vector computation.
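The basic loop is small enough to sketch. The following hypothetical minimal implementation (simple transformations in the style of Mulders and Storjohann, polynomials stored as low-to-high coefficient lists over Q) repeatedly cancels the leading term of one of two rows that share a pivot column, until all pivot columns are distinct:

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list (lowest degree first); -1 for zero."""
    for i in range(len(p) - 1, -1, -1):
        if p[i]:
            return i
    return -1

def pivot(row):
    """Rightmost column holding an entry of maximal degree, or None."""
    d = max(deg(p) for p in row)
    if d < 0:
        return None
    return max(j for j, p in enumerate(row) if deg(p) == d)

def weak_popov(M):
    """Apply simple transformations until all pivot columns are distinct."""
    M = [[[Fraction(c) for c in poly] for poly in row] for row in M]
    while True:
        seen, clash = {}, None
        for i, row in enumerate(M):
            p = pivot(row)
            if p is None:
                continue
            if p in seen:
                clash = (i, seen[p], p)
                break
            seen[p] = i
        if clash is None:
            return M
        i, j, p = clash
        if deg(M[i][p]) < deg(M[j][p]):
            i, j = j, i                      # always reduce the higher-degree row
        k = deg(M[i][p]) - deg(M[j][p])
        c = M[i][p][deg(M[i][p])] / M[j][p][deg(M[j][p])]
        for col in range(len(M[i])):         # row_i -= c * x^k * row_j
            q = M[j][col]
            need = k + len(q)
            if len(M[i][col]) < need:
                M[i][col] = M[i][col] + [Fraction(0)] * (need - len(M[i][col]))
            for t, qt in enumerate(q):
                M[i][col][k + t] -= c * qt

# x written low-to-high: [0, 1] is x, [1] is 1, [0, 0, 1] is x^2
M = weak_popov([[[0, 1], [0, 1]],
                [[1],    [0, 0, 1]]])
print(sorted(pivot(row) for row in M))  # [0, 1] -- pivots are now distinct
```

Each transformation is unimodular (it adds a polynomial multiple of one row to another), so the row lattice is preserved while the pivot degrees strictly decrease, which guarantees termination.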
Fast moment estimation in data streams in optimal space
 In Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC)
, 2011
Cited by 21 (6 self)
Abstract:
We give a space-optimal algorithm with update time O(log^2(1/ε) log log(1/ε)) for (1 ± ε)-approximating the p-th frequency moment, 0 < p < 2, of a length-n vector updated in a data stream. This provides a nearly exponential improvement in the update time complexity over the previous space-optimal algorithm of [Kane-Nelson-Woodruff, SODA 2010], which had update time Ω(1/ε^2).
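For orientation, the quantity being approximated takes only a few lines to state exactly. The sketch below is a hypothetical linear-space baseline in the turnstile update model, computing F_p = Σ_i |f_i|^p directly; avoiding this linear space (while keeping fast updates) is precisely the point of the streaming algorithm:

```python
from collections import defaultdict

def exact_moment(updates, p):
    """Exact p-th frequency moment F_p = sum_i |f_i|^p of the vector defined
    by a stream of (index, delta) updates. Linear space: it stores the whole
    frequency vector, which sketching algorithms avoid."""
    f = defaultdict(int)
    for i, delta in updates:
        f[i] += delta
    return sum(abs(v) ** p for v in f.values())

stream = [(0, 3), (2, -1), (0, 1), (5, 2), (2, -1)]  # f = {0: 4, 2: -2, 5: 2}
print(exact_moment(stream, 1))  # F_1 = 8
print(exact_moment(stream, 2))  # F_2 = 24
```

Note that deltas may be negative (turnstile model), so the absolute value in the definition matters.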
Computing modular polynomials in quasilinear time
 Mathematics of Computation
Cited by 18 (3 self)
Abstract:
We analyse and compare the complexity of several algorithms for computing modular polynomials. Under the assumption that rounding errors do not influence the correctness of the result, which appears to be satisfied in practice, we show that an algorithm relying on floating point evaluation of modular functions and on interpolation has a complexity that is, up to logarithmic factors, linear in the size of the computed polynomials. In particular, it obtains the classical modular polynomial Φℓ of prime level ℓ in time O(ℓ^2 log^3 ℓ · M(ℓ)) ⊆ O(ℓ^3 log^(4+ε) ℓ), where M(ℓ) is the time needed to multiply two ℓ-bit numbers. Besides treating modular polynomials for Γ₀(ℓ), which are an important ingredient in many algorithms dealing with isogenies of elliptic curves, the algorithm is easily adapted to more general situations. Composite levels are handled just as easily as prime levels, as are polynomials between a modular function and its transform of prime level, such as the Schläfli polynomials and their generalisations.
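The interpolation ingredient can be shown in isolation. Below is a generic exact Lagrange interpolation over the rationals (a minimal sketch, not the paper's floating point pipeline, which interpolates from approximate evaluations and rounds):

```python
from fractions import Fraction

def interpolate(points):
    """Recover, by Lagrange interpolation, the coefficients (constant term
    first) of the unique polynomial of degree < len(points) through them."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]     # running product prod_{j != i} (x - xj)
        denom = Fraction(1)       # prod_{j != i} (xi - xj)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = [Fraction(0)] + basis   # multiply by x, then ...
                for t in range(len(basis) - 1):
                    basis[t] -= xj * basis[t + 1]   # ... subtract xj * (old basis)
                denom *= xi - xj
        for t in range(n):
            coeffs[t] += Fraction(yi) * basis[t] / denom
    return coeffs

# Samples of p(x) = x^3 - 2x + 5 at four points recover its coefficients.
pts = [(x, x ** 3 - 2 * x + 5) for x in (0, 1, 2, 3)]
print(interpolate(pts))  # coefficients 5, -2, 0, 1 (constant term first)
```

In the modular-polynomial setting the evaluations are floating point values of modular functions, so the working precision must exceed the height of the polynomial being recovered.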
A Derandomized Sparse JohnsonLindenstrauss Transform
Cited by 16 (4 self)
Abstract:
Recent work of [Dasgupta-Kumar-Sarlós, STOC 2010] gave a sparse Johnson-Lindenstrauss transform and left as a main open question whether their construction could be efficiently derandomized. We answer their question affirmatively by giving an alternative proof of their result requiring only bounded-independence hash functions. Furthermore, the sparsity bound obtained in our proof is improved. The main ingredient in our proof is a spectral moment bound for quadratic forms that was recently used in [Diakonikolas-Kane-Nelson, FOCS 2010].
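The shape of a sparse JL matrix is easy to sketch: each of the d columns carries exactly s nonzero entries of value ±1/√s, so updating one input coordinate touches only s rows of the output. The parameters below (d, m, s) are arbitrary illustrative values, and the construction uses full randomness; the derandomization in the paper replaces this with bounded-independence hash functions.

```python
import random

def sparse_jl_matrix(d, m, s, seed=0):
    """Sparse JL embedding matrix, stored column-wise: each of the d columns
    has exactly s nonzero entries, each +-1/sqrt(s), in s distinct rows."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        rows = rng.sample(range(m), s)
        cols.append({r: rng.choice((-1.0, 1.0)) / s ** 0.5 for r in rows})
    return cols

def embed(cols, x, m):
    """Compute y = Ax; each nonzero coordinate of x touches only s rows of y."""
    y = [0.0] * m
    for xi, col in zip(x, cols):
        if xi:
            for r, v in col.items():
                y[r] += xi * v
    return y

cols = sparse_jl_matrix(d=10, m=8, s=4)
assert all(len(c) == 4 for c in cols)                        # sparsity exactly s
assert all(abs(v) == 0.5 for c in cols for v in c.values())  # entries +-1/sqrt(s)
y = embed(cols, [1.0] + [0.0] * 9, 8)
assert sum(v * v for v in y) == 1.0                          # unit basis vector -> unit norm
```

Columns have unit Euclidean norm by construction, so E[||Ax||^2] = ||x||^2; the content of the JL analysis is the concentration of ||Ax||^2 around that mean.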