Lossy Trapdoor Functions and Their Applications
2007
We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number-theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of standard lattice problems. Using lossy TDFs, we develop a new approach for constructing many important cryptographic primitives, including standard trapdoor functions, CCA-secure cryptosystems, collision-resistant hash functions, and more. All of our constructions are simple, efficient, and black-box. Taken all together, these results resolve some long-standing open problems in cryptography. They give the first known (injective) trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on worst-case lattice assumptions.
Bonsai Trees, or How to Delegate a Lattice Basis
2010
We introduce a new lattice-based cryptographic structure called a bonsai tree, and use it to resolve some important open problems in the area. Applications of bonsai trees include: • An efficient, stateless "hash-and-sign" signature scheme in the standard model (i.e., no random oracles), and • The first hierarchical identity-based encryption (HIBE) scheme (also in the standard model) that does not rely on bilinear pairings. Interestingly, the abstract properties of bonsai trees seem to have no known realization in conventional number-theoretic cryptography.
Efficient lattice (H)IBE in the standard model
In EUROCRYPT 2010, LNCS, 2010
Abstract. We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively-secure IBE and a Hierarchical IBE.
Efficient Fully Homomorphic Encryption from (Standard) LWE
In FOCS 2011, IEEE 52nd Annual Symposium on Foundations of Computer Science, IEEE, 2011
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of "short vector problems" on arbitrary lattices. Our construction improves on previous works in two aspects: 1. We show that "somewhat homomorphic" encryption can be based on LWE, using a new relinearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. We deviate from the "squashing paradigm" used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is k · polylog(k) + log |DB| bits per single-bit query (here, k is a security parameter).
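The abstract above names the two problems that relinearization solves but does not show them: multiplying two LWE ciphertexts yields a ciphertext that only decrypts under the tensor (squared) secret key, and relinearization switches it back to the original key so that further multiplications remain possible. The following toy implementation is an added example written for this listing; it is not the paper's code and uses the same idea (bit-decomposed key switching, with "encryptions" of 2^τ·t_i·t_j published as a relinearization key) on deliberately tiny, insecure parameters. All names, the parameters N, Q, ERR and RLK_ERR, and the use of Python's non-cryptographic `random` module are assumptions made for illustration only.

```python
import random

# Constructed toy parameters: far too small to be secure, chosen so that
# every noise bound below is easy to check by hand.
N = 4                    # LWE dimension n
Q = 4093                 # modulus q (prime); q/2 ~ 2046 is the decryption noise budget
LOGQ = Q.bit_length()    # 12 binary digits in a decomposition of a Z_q element
ERR = 2                  # fresh-ciphertext error |e| <= 2, so |2e + m| <= 5
RLK_ERR = 1              # relinearization-key error |e| <= 1

rng = random.Random(7)   # deterministic for reproducibility; NOT a CSPRNG


def keygen():
    """Secret key s in Z_q^n (uniform, as in the toy; real schemes use small s)."""
    return [rng.randrange(Q) for _ in range(N)]


def extended(s):
    """Extended key t = (1, s): a ciphertext c satisfies <c, t> = 2e + m (mod q)."""
    return [1] + s


def encrypt(s, m, err=ERR):
    """LWE encryption of m as c = (b, -a) with b = <a, s> + 2e + m mod q (LSB encoding)."""
    a = [rng.randrange(Q) for _ in range(N)]
    e = rng.randint(-err, err)
    b = (sum(ai * si for ai, si in zip(a, s)) + 2 * e + m) % Q
    return [b] + [(-ai) % Q for ai in a]


def centered(x):
    """Representative of x mod q in (-q/2, q/2]."""
    x %= Q
    return x - Q if x > Q // 2 else x


def decrypt(c, t):
    """Recover 2e + m as a small centered integer, then take its parity.
    Works for any key t of the same length as c, including the tensored key."""
    return centered(sum(ci * ti for ci, ti in zip(c, t))) % 2


def add(c1, c2):
    """Homomorphic XOR: noise adds, key unchanged."""
    return [(x + y) % Q for x, y in zip(c1, c2)]


def mul_no_relin(c1, c2):
    """Homomorphic AND via the tensor product: <c1 (x) c2, t (x) t> = <c1,t> * <c2,t>,
    so the result has length (n+1)^2 and decrypts only under the squared key."""
    return [(x * y) % Q for x in c1 for y in c2]


def tensor(t):
    return [(x * y) % Q for x in t for y in t]


def relin_keygen(s):
    """Relinearization key: for every (i, j, tau) an encryption under s of 2^tau * t_i * t_j.
    The message here is a full Z_q element, not a bit; decryption is still linear."""
    t = extended(s)
    return {
        (i, j, tau): encrypt(s, (pow(2, tau, Q) * t[i] * t[j]) % Q, err=RLK_ERR)
        for i in range(N + 1)
        for j in range(N + 1)
        for tau in range(LOGQ)
    }


def relinearize(c_tensor, rlk):
    """Key switching from t (x) t back to t. Each tensor coefficient h_ij is written in
    binary so that the key-noise contribution stays small: at most
    (n+1)^2 * LOGQ * 2 * RLK_ERR = 25 * 12 * 2 = 600 in this toy, well under q/2."""
    out = [0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1):
            h = c_tensor[i * (N + 1) + j]
            for tau in range(LOGQ):
                if (h >> tau) & 1:
                    key = rlk[(i, j, tau)]
                    out = [(o + k) % Q for o, k in zip(out, key)]
    return out


def mul(c1, c2, rlk):
    return relinearize(mul_no_relin(c1, c2), rlk)


def demo():
    """For each bit pair, return (xor, and-under-squared-key, and-after-relinearization,
    tensored length, relinearized length)."""
    s = keygen()
    t = extended(s)
    rlk = relin_keygen(s)
    results = {}
    for m1 in (0, 1):
        for m2 in (0, 1):
            c1, c2 = encrypt(s, m1), encrypt(s, m2)
            prod = mul_no_relin(c1, c2)
            relin = mul(c1, c2, rlk)
            results[(m1, m2)] = (decrypt(add(c1, c2), t),
                                 decrypt(prod, tensor(t)),
                                 decrypt(relin, t),
                                 len(prod), len(relin))
    return results
```

The noise bookkeeping is the whole point: a fresh ciphertext carries |2e + m| ≤ 5, a product carries at most 25, and relinearization adds at most 600 more, so one multiplication fits comfortably in the q/2 ≈ 2046 budget but a second one on the same ciphertext would not. That growth is why the paper pairs relinearization with dimension-modulus reduction rather than with squashing.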
GENERATING SHORTER BASES FOR HARD RANDOM LATTICES
2009
We revisit the problem of generating a “hard” random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis directly corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, using the Hermite normal form as an organizing principle, we simplify and generalize an approach due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by tightening the length of the output basis essentially to the optimum value.
Candidate Multilinear Maps from Ideal Lattices and Applications
2012
We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups, and show that some applications of such multilinear maps can be realized using our approximations. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function.
Lattice basis delegation in fixed dimension and shorter-ciphertext hierarchical IBE
In Advances in Cryptology — CRYPTO 2010, Springer LNCS 6223, 2010
Abstract. We present a technique for delegating a short lattice basis that has the advantage of keeping the lattice dimension unchanged upon delegation. Building on this result, we construct two new hierarchical identity-based encryption (HIBE) schemes, with and without random oracles. The resulting systems are very different from earlier lattice-based HIBEs and in some cases result in shorter ciphertexts and private keys. We prove security from classic lattice hardness assumptions.
Candidate indistinguishability obfuscation and functional encryption for all circuits
In FOCS, 2013
In this work, we study indistinguishability obfuscation and functional encryption for general circuits: Indistinguishability obfuscation requires that given any two equivalent circuits C0 and C1 of similar size, the obfuscations of C0 and C1 should be computationally indistinguishable. In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x) yields the value C(x) but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually. We give constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits. We accomplish this goal in three steps: • We describe a candidate construction for indistinguishability obfuscation for NC^1 circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles. • We show how to use indistinguishability obfuscation for NC^1 together with Fully Homomorphic Encryption (with decryption in NC^1) to achieve indistinguishability obfuscation for all circuits.
Limits on the hardness of lattice problems in ℓp norms
In IEEE Conference on Computational Complexity, 2007
In recent years, several papers have established limits on the computational difficulty of lattice problems, focusing primarily on the ℓ2 (Euclidean) norm. We demonstrate close analogues of these results in ℓp norms, for every 2 < p ≤ ∞. In particular, for lattices of dimension n: • Approximating the closest vector problem, the shortest vector problem, and other related problems to within O(√n) factors (or O(√(n log n)) factors, for p = ∞) is in coNP. • Approximating the closest vector and bounded distance decoding problems with preprocessing to within O(√n) factors can be accomplished in deterministic polynomial time. • Approximating several problems (such as the shortest independent vectors problem) to within Õ(n) factors in the worst case reduces to solving the average-case problems defined in prior works (Ajtai, STOC 1996; Micciancio and Regev, SIAM J. on Computing 2007; Regev, STOC 2005). Our results improve prior approximation factors for ℓp norms by up to √n factors. Taken all together, they complement recent reductions from the ℓ2 norm to ℓp norms (Regev and Rosen, STOC 2006), and provide some evidence that lattice problems in ℓp norms (for p > 2) may not be substantially harder than they are in the ℓ2 norm. One of our main technical contributions is a very general analysis of Gaussian distributions over lattices, which may be of independent interest. Our proofs employ analytical techniques of Banaszczyk that, to our knowledge, have yet to be exploited in computer science.
Realizing hash-and-sign signatures under standard assumptions
In Advances in Cryptology – EUROCRYPT '09, volume 5479 of LNCS, 2009
Currently, there are relatively few instances of "hash-and-sign" signatures in the standard model. Moreover, most current instances rely on strong and less studied assumptions such as the Strong RSA and q-Strong Diffie-Hellman assumptions. In this paper, we present a new approach for realizing hash-and-sign signatures in the standard model. In our approach, a signer associates each signature with an index i that represents how many signatures that signer has issued up to that point. Then, to make use of this association, we create simple and efficient techniques that restrict an adversary which makes q signature requests to forge on an index no greater than 2^⌈lg(q)⌉ < 2q. Finally, we develop methods for dealing with this restricted adversary. Our approach requires that the signer maintain a small amount of state: a counter of the number of signatures issued. We achieve two new realizations for hash-and-sign signatures respectively based on the RSA assumption and the Computational Diffie-Hellman assumption in bilinear groups.