Results 1–10 of 73
Fully homomorphic encryption using ideal lattices
 In Proc. STOC
, 2009
Abstract

Cited by 663 (17 self)
We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
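The combination of additive and multiplicative homomorphisms, and the way accumulated noise caps the evaluable depth, can be sketched with a toy symmetric scheme over the integers (in the spirit of later integer-based FHE constructions, not Gentry's ideal-lattice scheme itself; all parameter names and sizes here are illustrative and offer no real security):

```python
import secrets

def keygen(eta=64):
    # secret key: a random odd eta-bit integer p
    return secrets.randbits(eta) | 1 | (1 << (eta - 1))

def encrypt(p, m, rho=16, gamma=64):
    # ciphertext of a bit m: the message plus small even noise plus a multiple of p
    assert m in (0, 1)
    r = secrets.randbits(rho)    # noise term; it grows under homomorphic operations
    q = secrets.randbits(gamma)
    return m + 2 * r + p * q

def decrypt(p, c):
    # correct while the accumulated noise m + 2r stays below p
    return (c % p) % 2
```

Adding ciphertexts XORs the plaintext bits and multiplying them ANDs the bits, but each multiplication roughly squares the noise, so only circuits of limited depth decrypt correctly; that is exactly the gap bootstrapping closes.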
Worst-case to average-case reductions based on Gaussian measures
 SIAM J. on Computing
, 2004
Abstract

Cited by 131 (23 self)
We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is n log^O(1) n for all four problems. This greatly improves on all previous work on the subject starting from Ajtai’s seminal paper (STOC, 1996), up to the strongest previously known results by Micciancio (SIAM J. on Computing, 2004). Our results also bring us closer to the limit where the problems are no longer known to be in NP intersect coNP. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are twofold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector when conditioning on the final value behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
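The lattice parameter described above, the amount of Gaussian noise needed before lattice-plus-noise looks uniform, can already be visualized in one dimension. A minimal numeric sketch, assuming the lattice Z and folding samples into the fundamental cell [0, 1); the widths 2.0 and 0.05 are arbitrary illustrative choices on either side of the threshold:

```python
import random

random.seed(0)

def mod1_histogram(sigma, samples=100_000, bins=10):
    # draw (lattice point + Gaussian noise) and fold into the cell [0, 1);
    # for the lattice Z the lattice point vanishes mod 1, leaving noise mod 1
    counts = [0] * bins
    for _ in range(samples):
        x = random.gauss(0.0, sigma) % 1.0
        counts[int(x * bins)] += 1
    return counts

# noise width well above the spacing of Z: the folded distribution is near uniform
wide = mod1_histogram(2.0)
# noise width well below the spacing: mass stays piled around the lattice points
narrow = mod1_histogram(0.05)
```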
Bonsai Trees, or How to Delegate a Lattice Basis
, 2010
Abstract

Cited by 123 (7 self)
We introduce a new lattice-based cryptographic structure called a bonsai tree, and use it to resolve some important open problems in the area. Applications of bonsai trees include: • An efficient, stateless ‘hash-and-sign’ signature scheme in the standard model (i.e., no random oracles), and • The first hierarchical identity-based encryption (HIBE) scheme (also in the standard model) that does not rely on bilinear pairings. Interestingly, the abstract properties of bonsai trees seem to have no known realization in conventional number-theoretic cryptography.
(Leveled) Fully Homomorphic Encryption without Bootstrapping
Abstract

Cited by 73 (9 self)
We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with errors (LWE) or Ring LWE (RLWE) problems that have 2^λ security against known attacks. We construct: • A leveled FHE scheme that can evaluate depth-L arithmetic circuits (composed of fan-in 2 gates) using Õ(λ·L³) per-gate computation. That is, the computation is quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that can evaluate depth-L arithmetic circuits (composed of fan-in 2 gates) using Õ(λ²) per-gate computation, which is independent of L. Security is based on RLWE for quasi-polynomial factors. This construction uses bootstrapping as an …
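A hedged sketch of the plain symmetric LWE encryption such schemes build on (toy parameters chosen for readability, nowhere near secure): a bit is placed in the high-order part of ⟨a, s⟩ plus a small error, and ciphertexts can be added coordinate-wise, the basic homomorphism these constructions extend:

```python
import random

random.seed(0)
q, n = 2**13, 32     # toy modulus and dimension; far too small for real security

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, m):
    # ciphertext (a, b) with b = <a, s> + e + m * q/2: the bit sits above the noise
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)    # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return int(q // 4 < d < 3 * q // 4)   # round to the nearest multiple of q/2

def add(ct1, ct2):
    # coordinate-wise sum decrypts to the XOR of the bits while the noise stays small
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q
```

Multiplication is where the real schemes diverge: the relinearization and modulus-switching machinery that keeps products manageable is exactly what the per-gate costs above account for.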
Lattice-based Cryptography
, 2008
Abstract

Cited by 66 (5 self)
In this chapter we describe some of the recent progress in lattice-based cryptography. Lattice-based cryptographic constructions hold great promise for post-quantum cryptography, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity. In addition, lattice-based cryptography is believed to be secure against quantum computers. Our focus here …
Generalized compact knapsacks are collision resistant
 In ICALP (2)
, 2006
Abstract

Cited by 58 (15 self)
A step in the direction of creating efficient cryptographic functions based on worst-case hardness was …
SWIFFT: A Modest Proposal for FFT Hashing
Abstract

Cited by 51 (17 self)
We propose SWIFFT, a collection of compression functions that are highly parallelizable and admit very efficient implementations on modern microprocessors. The main technique underlying our functions is a novel use of the Fast Fourier Transform (FFT) to achieve “diffusion,” together with a linear combination to achieve compression and “confusion.” We provide a detailed security analysis of concrete instantiations, and give a high-performance software implementation that exploits the inherent parallelism of the FFT algorithm. The throughput of our implementation is competitive with that of SHA-256, with additional parallelism yet to be exploited. Our functions are set apart from prior proposals (having comparable efficiency) by a supporting asymptotic security proof: it can be formally proved that finding a collision in a randomly chosen function from the family (with noticeable probability) is at least as hard as finding short vectors in cyclic/ideal lattices in the worst case.
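The “ring multiplication for diffusion plus a linear combination for compression” structure can be sketched in miniature. This is a hypothetical toy, not SWIFFT itself: it uses schoolbook negacyclic multiplication where SWIFFT uses the FFT, and the modulus, degree, and block count below are tiny illustrative choices, not SWIFFT's actual parameters:

```python
import random

random.seed(1)
P, N, M = 257, 8, 4   # toy modulus, ring degree, and input block count

# fixed public multipliers a_i, elements of Z_P[x] / (x^N + 1)
A = [[random.randrange(P) for _ in range(N)] for _ in range(M)]

def negacyclic_mul(f, g):
    # schoolbook product in Z_P[x] / (x^N + 1); SWIFFT performs this via the FFT
    out = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < N:
                out[k] = (out[k] + fi * gj) % P
            else:
                out[k - N] = (out[k - N] - fi * gj) % P   # x^N = -1 wraps with a sign
    return out

def compress(blocks):
    # z = sum_i a_i * x_i: ring products for diffusion, then a linear combination
    z = [0] * N
    for a, x in zip(A, blocks):
        prod = negacyclic_mul(a, x)
        z = [(zi + pi) % P for zi, pi in zip(z, prod)]
    return z
```

Because every step is linear over Z_P, compress(x + y) equals compress(x) + compress(y) componentwise mod P; the asymptotic security proof mentioned above rests on exactly this linear, ideal-lattice structure.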
Compact McEliece Keys from Goppa Codes
 In Selected Areas in Cryptography, 16th Annual International Workshop, SAC 2009, Lecture Notes in Computer Science vol. 5867, Springer
, 2009
Abstract

Cited by 47 (2 self)
The classical McEliece cryptosystem is built upon the class of Goppa codes, which remains secure to this date in contrast to many other families of codes, but leads to very large public keys. Previous proposals to obtain short McEliece keys have primarily centered around replacing that class with other families of codes, most of which were shown to contain weaknesses, and at the cost of halving the error-correction capability. In this paper we describe a simple way to significantly reduce the key size in McEliece and related cryptosystems using a subclass of Goppa codes, while also improving the efficiency of cryptographic operations to Õ(n) time, and keeping the capability of correcting the full designed number of errors in the binary case.