Results 1–10 of 126
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. Technical Report 2003/235, Cryptology ePrint Archive, http://eprint.iacr.org, 2006. A previous version appeared at EUROCRYPT 2004.
Yevgeniy Dodis, Leonid Reyzin, and Adam Smith
, 2004
Abstract

Cited by 470 (37 self)
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
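A minimal sketch of the code-offset idea behind secure sketches for Hamming distance, assuming a toy 3-fold repetition code (the function names `encode`, `sketch`, and `recover` are illustrative, not the paper's, and the paper's constructions are far closer to optimal):

```python
import secrets

# Toy code-offset secure sketch for Hamming distance. The repetition
# code corrects one bit-flip per 3-bit block; the paper replaces it
# with near-optimal error-correcting codes.

def encode(bits):                       # repetition-code encoder
    return [b for b in bits for _ in range(3)]

def decode(bits):                       # majority-vote decoder
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def sketch(w):
    """Public helper data: w XOR a random codeword."""
    k = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    return [wi ^ ci for wi, ci in zip(w, encode(k))]

def recover(w_noisy, s):
    """Recover w from a close reading w_noisy and the public sketch s,
    as long as each 3-bit block of w_noisy has at most one error:
    w_noisy XOR s is a noisy codeword, which the code corrects."""
    k = decode([wi ^ si for wi, si in zip(w_noisy, s)])
    return [ci ^ si for ci, si in zip(encode(k), s)]

w = [1, 0, 1, 1, 0, 1]                  # "biometric" reading
s = sketch(w)
w_noisy = w[:]; w_noisy[4] ^= 1         # one bit flips on re-reading
assert recover(w_noisy, s) == w
```

To obtain the fuzzy extractor's near-uniform key R, one would additionally apply a randomness extractor (in practice, a hash) to the recovered w.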
THE COMPLETE GENERATING FUNCTION FOR GESSEL WALKS IS ALGEBRAIC
Abstract

Cited by 49 (10 self)
Gessel walks are lattice walks in the quarter plane N² which start at the origin (0, 0) ∈ N² and consist only of steps chosen from the set {←, ↙, ↗, →}. We prove that if g(n; i, j) denotes the number of Gessel walks of length n which end at the point (i, j) ∈ N², then the trivariate generating series G(t; x, y) = Σ_{n,i,j≥0} g(n; i, j) x^i y^j t^n is an algebraic function.
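The counts g(n; i, j) can be tabulated directly by dynamic programming over endpoints; this is only a sanity check on the series, not part of the algebraicity proof:

```python
# Brute-force count of Gessel walks in the quarter plane N^2.
STEPS = [(-1, 0), (-1, -1), (1, 1), (1, 0)]   # ←, ↙, ↗, →

def gessel(n):
    """Return {(i, j): g(n; i, j)} for walks of length n from (0, 0)."""
    counts = {(0, 0): 1}
    for _ in range(n):
        nxt = {}
        for (i, j), c in counts.items():
            for di, dj in STEPS:
                x, y = i + di, j + dj
                if x >= 0 and y >= 0:          # stay in the quarter plane
                    nxt[(x, y)] = nxt.get((x, y), 0) + c
        counts = nxt
    return counts

# Gessel's conjectured closed form for walks returning to the origin,
# g(2n; 0, 0) = 16^n (5/6)_n (1/2)_n / ((2)_n (5/3)_n),
# gives 1, 2, 11, 85 for n = 0, 1, 2, 3:
assert [gessel(2 * n).get((0, 0), 0) for n in range(4)] == [1, 2, 11, 85]
```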
On efficient sparse integer matrix Smith normal form computations
, 2001
Abstract

Cited by 40 (18 self)
We present a new algorithm to compute the integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of word-size primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (elimination and/or black-box techniques) since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
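For contrast with the paper's sparse modular approach, the Smith normal form can be computed on small dense matrices by classical row and column reduction; the coefficient growth this causes on large matrices is precisely what the paper avoids (a sketch, not the paper's algorithm):

```python
def smith_normal_form(A):
    """Diagonal d1 | d2 | ... of the Smith form of an integer matrix,
    by classical elementary row/column operations."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    diag = []
    for t in range(min(m, n)):
        while True:
            # find a nonzero pivot and move it to (t, t)
            piv = next(((i, j) for i in range(t, m)
                        for j in range(t, n) if A[i][j]), None)
            if piv is None:
                return diag                    # remaining block is zero
            i, j = piv
            A[t], A[i] = A[i], A[t]
            for row in A:
                row[t], row[j] = row[j], row[t]
            # clear column t and row t with Euclidean steps
            while True:
                done = True
                for i in range(t + 1, m):
                    if A[i][t]:
                        q = A[i][t] // A[t][t]
                        A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                        if A[i][t]:            # nonzero remainder: swap up
                            A[t], A[i] = A[i], A[t]
                            done = False
                for j in range(t + 1, n):
                    if A[t][j]:
                        q = A[t][j] // A[t][t]
                        for row in A:
                            row[j] -= q * row[t]
                        if A[t][j]:
                            for row in A:
                                row[t], row[j] = row[j], row[t]
                            done = False
                if done:
                    break
            # enforce d_t | every remaining entry, else fold the
            # offending row into row t and eliminate again
            bad = next(((i, j) for i in range(t + 1, m)
                        for j in range(t + 1, n)
                        if A[i][j] % A[t][t]), None)
            if bad is None:
                break
            A[t] = [a + b for a, b in zip(A[t], A[bad[0]])]
        diag.append(abs(A[t][t]))
    return diag

assert smith_normal_form([[2, 0], [0, 3]]) == [1, 6]
assert smith_normal_form([[2, 4], [6, 8]]) == [2, 4]
```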
The complexity of class polynomial computation via floating point approximations. ArXiv preprint
, 601
Abstract

Cited by 38 (5 self)
We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. Under the heuristic assumption, justified by experiments, that the correctness of the result is not perturbed by rounding errors, the algorithm runs in time O(√|D| log³|D| · M(√|D| log²|D|)) ⊆ O(|D| log^{6+ε} |D|) ⊆ O(h^{2+ε}) for any ε > 0, where D is the CM discriminant, h is the degree of the class polynomial and M(n) is the time needed to multiply two n-bit numbers. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary quadratic order and on a rigorously proven upper bound for the height of class polynomials.
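The quantity h in the bound is the class number, i.e. the size of the class group being enumerated; for small discriminants it can be found by naively listing reduced binary quadratic forms (an illustration of the quantity only, not the paper's enumeration algorithm):

```python
from math import isqrt

def class_number(D):
    """h(D) for a negative discriminant D, by counting reduced forms
    (a, b, c) with b^2 - 4ac = D, |b| <= a <= c, and b >= 0 when
    |b| = a or a = c.  Naive enumeration, fine for small |D|."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-D // 3) + 1):   # a <= sqrt(|D|/3) for reduced forms
        for b in range(-a + 1, a + 1):       # b = -a is never reduced
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and a == c:             # skip the non-reduced mirror
                continue
            h += 1
    return h

# h(-3) = h(-4) = 1, h(-23) = 3, and the famous h(-163) = 1:
assert [class_number(D) for D in (-3, -4, -23, -163)] == [1, 1, 3, 1]
```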
PRIMES is in P
 Ann. of Math
, 2002
Abstract

Cited by 34 (2 self)
We present an unconditional deterministic polynomial-time algorithm that determines whether an input number is prime or composite.
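The AKS algorithm rests on the classical characterization that n ≥ 2 is prime iff (x + 1)^n ≡ x^n + 1 (mod n). Checking that congruence verbatim, as below, takes exponential time; the paper's contribution is reducing it to a polynomial-time test (this sketch is the underlying identity, not the algorithm itself):

```python
from math import comb

def is_prime_binomial(n):
    """n >= 2 is prime iff every binomial coefficient C(n, k) with
    0 < k < n is divisible by n, i.e. (x+1)^n ≡ x^n + 1 (mod n).
    Exponential-time characterization behind the AKS test."""
    return n >= 2 and all(comb(n, k) % n == 0 for k in range(1, n))

assert [p for p in range(2, 40) if is_prime_binomial(p)] == \
       [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```

Even Carmichael numbers, which fool Fermat-style tests, fail this congruence (e.g. 561).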
On Lattice Reduction for Polynomial Matrices
 Journal of Symbolic Computation
, 2000
Abstract

Cited by 31 (2 self)
A simple algorithm for transformation to weak Popov form (essentially lattice reduction for polynomial matrices) is described and analyzed. The algorithm is adapted and applied to various tasks involving polynomial matrices: rank profile and determinant computation; unimodular triangular factorization; transformation to Hermite and Popov canonical form; rational and diophantine linear system solving; short vector computation.
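The core loop is short: while two nonzero rows share a pivot column (the rightmost column of maximal degree), subtract a scalar-times-power-of-x multiple of one from the other. A sketch over Q, with polynomials as low-to-high coefficient lists (the representation and helper names are ours, not the paper's):

```python
from fractions import Fraction

def deg(p):                              # degree; -1 for the zero polynomial
    return max((i for i, c in enumerate(p) if c), default=-1)

def pivot(row):
    """Rightmost column of maximal degree; None for a zero row."""
    d = max(deg(e) for e in row)
    return None if d < 0 else max(j for j, e in enumerate(row) if deg(e) == d)

def weak_popov(M):
    """Repeat 'simple transformations' until all nonzero rows have
    distinct pivot columns (the weak Popov property)."""
    M = [[[Fraction(c) for c in e] for e in row] for row in M]
    while True:
        seen, clash = {}, None
        for i, row in enumerate(M):
            j = pivot(row)
            if j is None:
                continue
            if j in seen:
                clash = (seen[j], i, j)
                break
            seen[j] = i
        if clash is None:
            return M
        a, b, j = clash
        if deg(M[a][j]) < deg(M[b][j]):  # reduce the higher-degree row
            a, b = b, a
        k = deg(M[a][j]) - deg(M[b][j])
        s = M[a][j][deg(M[a][j])] / M[b][j][deg(M[b][j])]
        new_row = []                     # row a -= s * x^k * row b
        for ea, eb in zip(M[a], M[b]):
            q = [Fraction(0)] * k + [s * c for c in eb]
            n = max(len(ea), len(q))
            ea = ea + [Fraction(0)] * (n - len(ea))
            q = q + [Fraction(0)] * (n - len(q))
            new_row.append([x - y for x, y in zip(ea, q)])
        M[a] = new_row

# [[x, 0], [x, 1]]: both rows have pivot column 0; one step fixes it.
M = [[[0, 1], [0]],
     [[0, 1], [1]]]
assert sorted(pivot(r) for r in weak_popov(M)) == [0, 1]
```

Each transformation strictly reduces the pivot entry's degree or moves the pivot, which is the standard termination argument.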
Computing Simplicial Homology Based on Efficient Smith Normal Form Algorithms
, 2002
Abstract

Cited by 31 (1 self)
We recall that the calculation of homology with integer coefficients of a simplicial complex reduces to the calculation of the Smith normal form of the boundary matrices, which in general are sparse. We provide a review of several algorithms for the calculation of the Smith normal form of sparse matrices and compare their running times for actual boundary matrices. Then we describe alternative approaches to the calculation of simplicial homology. The last section describes motivating examples and actual experiments with the GAP package that was implemented by the authors. As an example of other homology theories, these also include some calculations of Lie algebra homology.
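A minimal instance of the reduction: for the hollow triangle (a circle), the Betti numbers fall out of the rank of the single boundary matrix. Over Q, ranks suffice; the Smith form is needed over Z to detect torsion as well (this sketch uses exact rational rank, not the paper's sparse Smith form algorithms):

```python
from fractions import Fraction

def rank(M):
    """Rank over Q by Gaussian elimination in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Hollow triangle: 3 vertices, 3 oriented edges 01, 02, 12, no 2-cells.
# Boundary of edge (u, v) is v - u; columns are edges, rows vertices.
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]
b0 = 3 - rank(d1)                 # dim ker d0 - rank d1, with d0 = 0
b1 = (3 - rank(d1)) - 0           # dim ker d1 - rank d2, no 2-simplices
assert (b0, b1) == (1, 1)         # one component, one 1-dimensional hole
```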
A Unified Approach to Solving the Harmonic Elimination Equations in Multilevel Converters
 IEEE Transactions on Power Electronics
, 2004
Abstract

Cited by 30 (9 self)
A method is presented to compute the switching angles in a multilevel converter so as to produce the required fundamental voltage while at the same time not generating higher order harmonics. Previous work has shown that the transcendental equations characterizing the harmonic content can be converted to polynomial equations, which are then solved using the method of resultants from elimination theory. A difficulty with this approach is that when there are several DC sources, the degrees of the polynomials are quite large, making the computational burden of their resultant polynomials (as required by elimination theory) quite high. Here, it is shown that the theory of symmetric polynomials can be exploited to reduce the degree of the polynomial equations that must be solved, which in turn greatly reduces the computational burden. In contrast to results reported in the literature that use iterative numerical techniques to solve these equations, the approach here produces all possible solutions.
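The degree-reduction idea in miniature: a polynomial system symmetric in its unknowns can be rewritten in the elementary symmetric polynomials via Newton's identities, solved for those, and the unknowns recovered as polynomial roots (a two-variable toy, not the paper's harmonic-elimination system):

```python
# For two unknowns, e1 = x1 + x2 and e2 = x1 * x2 determine every
# symmetric polynomial; Newton's identities give the power sums
#   p2 = e1^2 - 2*e2,   p3 = e1^3 - 3*e1*e2,
# so a symmetric system in (x1, x2) becomes a lower-degree system in
# (e1, e2), and the x_i are the roots of t^2 - e1*t + e2.
x1, x2 = 0.3, 1.7
e1, e2 = x1 + x2, x1 * x2

p2 = e1 * e1 - 2 * e2
p3 = e1 ** 3 - 3 * e1 * e2
assert abs(p2 - (x1 ** 2 + x2 ** 2)) < 1e-12
assert abs(p3 - (x1 ** 3 + x2 ** 3)) < 1e-12
```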
Sparser JohnsonLindenstrauss Transforms
Abstract

Cited by 29 (8 self)
We give two different constructions for dimensionality reduction in ℓ2 via linear mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding matrices are nonzero to achieve distortion 1+ε with high probability, while still achieving the asymptotically optimal number of rows. These are the first constructions to provide subconstant sparsity for all values of parameters. Both constructions are also very simple: a vector can be embedded in two for loops. Such distributions can be used to speed up applications where ℓ2 dimensionality reduction is used.
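A simplified sparse sign matrix in this spirit, with exactly s nonzeros of magnitude 1/√s per column and the "two for loops" embedding (the paper's constructions choose the nonzero locations with limited independence and partly in blocks; this fully-random variant is only illustrative):

```python
import math
import random

def sparse_jl(m, n, s, rng=random.Random(0)):
    """m x n embedding matrix, stored column-wise as {row: value}
    dicts: each column has exactly s nonzeros ±1/sqrt(s) in
    uniformly chosen distinct rows."""
    cols = []
    for _ in range(n):
        rows = rng.sample(range(m), s)
        cols.append({r: rng.choice((-1, 1)) / math.sqrt(s) for r in rows})
    return cols

def embed(cols, x):
    """The 'two for loops': y = Phi x, touching only nonzero entries."""
    y = {}
    for j, col in enumerate(cols):
        for r, v in col.items():
            y[r] = y.get(r, 0.0) + v * x[j]
    return y

Phi = sparse_jl(m=32, n=100, s=4)
assert all(len(col) == 4 for col in Phi)          # column sparsity s
# A standard basis vector is embedded with its norm exactly preserved,
# since its image is one column: s entries of magnitude 1/sqrt(s).
y = embed(Phi, [1.0] + [0.0] * 99)
assert abs(sum(v * v for v in y.values()) - 1.0) < 1e-9
```

Embedding a vector with k nonzeros costs O(k·s) operations, which is the point of sub-constant column sparsity.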