Results 1 - 10 of 165
On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
In STOC, 2005
"... Our main result is a reduction from worstcase lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear co ..."
Abstract

Cited by 364 (6 self)
Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were based only on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n²), the size of the public key can be reduced to Õ(n).
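The encryption idea this abstract describes can be sketched in a few lines. The following toy illustration uses hypothetical parameters (n = 8, q = 97, m = 20, and a {-1, 0, 1} error term); the actual scheme uses a discrete Gaussian error and parameters that grow with the security level, so this is a sketch of the mechanism, not the paper's construction.

```python
# Toy sketch of LWE-based encryption; all parameters are illustrative.
import random

n, q, m = 8, 97, 20          # dimension, modulus, number of samples

def noise():
    # Tiny error term; the real scheme samples a discrete Gaussian.
    return random.randint(-1, 1)

# Secret key: a random vector s in Z_q^n.
s = [random.randrange(q) for _ in range(n)]

# Public key: m LWE samples (a_i, b_i = <a_i, s> + e_i mod q).
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(ai * si for ai, si in zip(row, s)) + noise()) % q for row in A]

def encrypt(bit):
    # Sum a random subset of the samples; add floor(q/2) to encode a 1.
    S = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # v - <u, s> is near 0 for a 0-bit and near q/2 for a 1-bit,
    # because the accumulated error (at most m here) stays below q/4.
    d = (v - sum(uj * sj for uj, sj in zip(u, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1
```

With these small parameters the accumulated error is at most m = 20 < q/4, so decryption always succeeds; asymptotically the error analysis is probabilistic.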
A Sieve Algorithm for the Shortest Lattice Vector Problem
2001
"... We present a randomized 2 O(n) time algorithm to compute a shortest nonzero vector in an ndimensional rational lattice. The best known time upper bound for this problem was 2 O(n log n) ..."
Abstract

Cited by 211 (3 self)
We present a randomized 2^O(n)-time algorithm to compute a shortest nonzero vector in an n-dimensional rational lattice. The best previously known time upper bound for this problem was 2^O(n log n).
Trapdoors for Hard Lattices and New Cryptographic Constructions
2007
"... We show how to construct a variety of “trapdoor ” cryptographic tools assuming the worstcase hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “ha ..."
Abstract

Cited by 191 (26 self)
We show how to construct a variety of “trapdoor” cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with preimage sampling, simple and efficient “hash-and-sign” digital signature schemes, universally composable oblivious transfer, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
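The discrete Gaussian sampling this abstract centers on can be illustrated in one dimension with simple rejection sampling. The paper's algorithm samples over an arbitrary lattice basis; this integer-only version, with hypothetical parameters (sigma, tail cut), only shows the target distribution.

```python
# Toy 1-D discrete Gaussian sampler by rejection; parameters are
# illustrative, not the paper's choices.
import math
import random

def sample_discrete_gaussian(center=0.0, sigma=3.0, tail=10):
    """Sample an integer z with Pr[z] proportional to
    exp(-(z - center)^2 / (2 sigma^2))."""
    lo, hi = int(center - tail * sigma), int(center + tail * sigma)
    while True:
        z = random.randint(lo, hi)                     # uniform proposal
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:                      # accept w.p. rho
            return z

samples = [sample_discrete_gaussian() for _ in range(2000)]
mean = sum(samples) / len(samples)   # concentrates near the center, 0
```

The full lattice sampler applies this idea coordinate-by-coordinate along a given basis, which is why the output quality depends on the basis length mentioned above.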
Authenticating Pervasive Devices with Human Protocols
2005
"... Abstract. Forgery and counterfeiting are emerging as serious security risks in lowcost pervasive computing devices. These devices lack the computational, storage, power, and communication resources necessary for most cryptographic authentication schemes. Surprisingly, lowcost pervasive devices lik ..."
Abstract

Cited by 167 (5 self)
Forgery and counterfeiting are emerging as serious security risks in low-cost pervasive computing devices. These devices lack the computational, storage, power, and communication resources necessary for most cryptographic authentication schemes. Surprisingly, low-cost pervasive devices like Radio Frequency Identification (RFID) tags share similar capabilities with another weak computing device: people. These similarities motivate the adoption of techniques from human-computer security to the pervasive computing setting. This paper analyzes a particular human-to-computer authentication protocol designed by Hopper and Blum (HB), and shows it to be practical for low-cost pervasive devices. We offer an improved, concrete proof of security for the HB protocol against passive adversaries. This paper also offers a new, augmented version of the HB protocol, named HB+, that is secure against active adversaries. The HB+ protocol is a novel, symmetric authentication protocol with a simple, low-cost implementation. We prove the security of the HB+ protocol against active adversaries based on the hardness of the Learning Parity with Noise (LPN) problem.
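The basic HB round described above is simple enough to sketch: the tag answers random challenges with noisy inner products over GF(2), and the reader accepts if the error count matches the noise rate. The parameters k, rounds, and eta below are illustrative, and HB+ adds a blinding vector not shown here.

```python
# Toy sketch of the HB authentication protocol; parameters are
# illustrative, not the paper's recommended values.
import random

k, rounds, eta = 32, 200, 0.125                # key length, rounds, noise
x = [random.randint(0, 1) for _ in range(k)]   # shared secret key

def parity(a, y):
    # Inner product over GF(2).
    return sum(ai & yi for ai, yi in zip(a, y)) % 2

def tag_response(a):
    # Noisy parity: <a, x> mod 2, flipped with probability eta (LPN).
    nu = 1 if random.random() < eta else 0
    return parity(a, x) ^ nu

def authenticate():
    errors = 0
    for _ in range(rounds):
        a = [random.randint(0, 1) for _ in range(k)]  # reader's challenge
        if tag_response(a) != parity(a, x):
            errors += 1
    # Accept when the error count is consistent with the noise rate eta.
    return errors <= rounds // 4

assert authenticate()   # a legitimate tag passes with high probability
```

An adversary without x who answers challenges at random fails roughly half the rounds, far above the acceptance threshold; recovering x from the noisy parities is exactly the LPN problem named in the abstract.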
Secure human identification protocols
In Asiacrypt, 2001
"... Abstract. One interesting and important challenge for the cryptologic community is that of providing secure authentication and identification for unassisted humans. There are a range of protocols for secure identification which require various forms of trusted hardware or software, aimed at protecti ..."
Abstract

Cited by 127 (3 self)
One interesting and important challenge for the cryptologic community is that of providing secure authentication and identification for unassisted humans. There is a range of protocols for secure identification which require various forms of trusted hardware or software, aimed at protecting privacy and financial assets. But how do we verify our identity, securely, when we don’t have or don’t trust our smart card, palmtop, or laptop? In this paper, we provide definitions of what we believe to be reasonable goals for secure human identification. We demonstrate that existing solutions do not meet these reasonable definitions. Finally, we provide solutions which demonstrate the feasibility of the security conditions attached to our definitions, but which are impractical for use by humans.
Lossy Trapdoor Functions and Their Applications
2007
"... We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number theoretic assumptions, including hardness of the decisional DiffieHellman (DDH) problem and the worstcase hardness of lattice problems. Using lossy TDFs, we develop a ..."
Abstract

Cited by 126 (21 self)
We propose a new general primitive called lossy trapdoor functions (lossy TDFs), and realize it under a variety of different number-theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of lattice problems. Using lossy TDFs, we develop a new approach for constructing several important cryptographic primitives, including (injective) trapdoor functions, collision-resistant hash functions, oblivious transfer, and chosen-ciphertext-secure cryptosystems. All of the constructions are simple, efficient, and black-box. These results resolve some long-standing open problems in cryptography. They give the first known injective trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on the worst-case complexity of lattice problems.
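The "lossiness" at the heart of this primitive can be illustrated with matrices over a small prime field: in injective mode the evaluation matrix has full rank, in lossy mode it has low rank, so the function's image collapses. This toy sketch (prime p and matrices chosen for illustration) omits the trapdoor and the computational indistinguishability of the two modes, which are the actual cryptographic content.

```python
# Toy illustration of injective vs. lossy evaluation f(x) = A·x mod p.
# The choice of p and the two matrices is illustrative only.
p = 11

def matvec(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) % p for row in A)

A_inj = [[1, 0], [0, 1]]       # full rank: f is injective
A_lossy = [[1, 2], [2, 4]]     # rank 1: many inputs collide

inputs = [(i, j) for i in range(p) for j in range(p)]
img_inj = {matvec(A_inj, x) for x in inputs}      # all p^2 outputs distinct
img_lossy = {matvec(A_lossy, x) for x in inputs}  # only p outputs survive
```

Because the lossy mode provably destroys most information about the input, security arguments can reason information-theoretically in that mode, then switch to the (indistinguishable) injective mode for functionality.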
Simultaneous hardcore bits and cryptography against memory attacks
In TCC, 2009
"... This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating sidechannel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptograp ..."
Abstract

Cited by 116 (11 self)
This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the “memory attack”, was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES. We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008) are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or more generally, can compute an arbitrary function of the secret key of bounded output length. This is done without increasing the size of the secret key, and without introducing any …
What Can We Learn Privately?
In 49th Annual IEEE Symposium on Foundations of Computer Science, 2008
"... Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large reallife data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or sp ..."
Abstract

Cited by 99 (9 self)
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We present several basic results that demonstrate general feasibility of private learning and relate several models previously studied separately in the contexts of privacy and standard learning.
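The differential-privacy definition this paper builds on is easiest to see in the classic randomized-response mechanism: each individual's bit is reported truthfully with probability e^ε/(1 + e^ε), which satisfies ε-differential privacy, and the aggregate is then debiased. The dataset and ε below are illustrative.

```python
# Toy randomized-response mechanism; data and epsilon are illustrative.
import math
import random

def randomized_response(bit, eps):
    # Report truthfully with probability p = e^eps / (1 + e^eps);
    # the ratio of truthful to flipped probability is exactly e^eps.
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p else 1 - bit

def private_mean(bits, eps):
    # E[report] = (1 - p) + bit * (2p - 1), so invert that affine map
    # to debias the noisy average of the reports.
    p = math.exp(eps) / (1 + math.exp(eps))
    noisy = sum(randomized_response(b, eps) for b in bits) / len(bits)
    return (noisy - (1 - p)) / (2 * p - 1)

data = [1] * 600 + [0] * 400          # true mean is 0.6
est = private_mean(data, eps=1.0)     # close to 0.6, up to sampling noise
```

No single record changes the output distribution by more than a factor of e^ε, which is the "does not depend too heavily on any one input" guarantee the abstract refers to.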
A subexponentialtime quantum algorithm for the dihedral hidden subgroup problem
2003
"... Abstract. We present a quantum algorithm for the dihedral hidden subgroup problem (DHSP) with time and query complexity 2O(√log N). In this problem an oracle computes a function f on the dihedral group DN which is invariant under a hidden reflection in DN. By contrast, the classical query complexity ..."
Abstract

Cited by 77 (0 self)
We present a quantum algorithm for the dihedral hidden subgroup problem (DHSP) with time and query complexity 2^O(√log N). In this problem an oracle computes a function f on the dihedral group D_N which is invariant under a hidden reflection in D_N. By contrast, the classical query complexity of DHSP is O(√N). The algorithm also applies to the hidden shift problem for an arbitrary finitely generated abelian group. The algorithm begins as usual with a quantum character transform, which in the case of D_N is essentially the abelian quantum Fourier transform. This yields the name of a group representation of D_N, which is not by itself useful, and a state in the representation, which is a valuable but indecipherable qubit. The algorithm proceeds by repeatedly pairing two unfavorable qubits to make a new qubit in a more favorable representation of D_N. Once the algorithm obtains certain target representations, direct measurements reveal the hidden subgroup.
Better key sizes (and attacks) for LWEbased encryption
In CT-RSA, 2011
"... We analyze the concrete security and key sizes of theoretically sound latticebased encryption schemes based on the “learning with errors ” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success ..."
Abstract

Cited by 71 (7 self)
We analyze the concrete security and key sizes of theoretically sound lattice-based encryption schemes based on the “learning with errors” (LWE) problem. Our main contributions are: (1) a new lattice attack on LWE that combines basis reduction with an enumeration algorithm admitting a time/success tradeoff, which performs better than the simple distinguishing attack considered in prior analyses; (2) concrete parameters and security estimates for an LWE-based cryptosystem that is more compact and efficient than the well-known schemes from the literature. Our new key sizes are up to 10 times smaller than prior examples, while providing even stronger concrete security levels.