Results 1 - 10 of 19
Correcting errors without leaking partial information
In 37th Annual ACM Symposium on Theory of Computing (STOC), 2005
Abstract
Cited by 56 (10 self)
This paper explores what kinds of information two parties must communicate in order to correct errors which occur in a shared secret string W. Any bits they communicate must leak a significant amount of information about W: that is, from the adversary’s point of view, the entropy of W will drop significantly. Nevertheless, we construct schemes with which Alice and Bob can prevent an adversary from learning any useful information about W. Specifically, if the entropy of W is sufficiently high, then there is no function f(W) which the adversary can learn from the error-correction information with significant probability. This leads to several new results: (a) the design of noise-tolerant “perfectly one-way” hash functions in the sense of Canetti et al. [7], which in turn leads to obfuscation of proximity queries for high-entropy secrets W; (b) private fuzzy extractors [11], which allow one to extract uniformly random bits from noisy and non-uniform data W, while also ensuring that no sensitive information about W is leaked; and (c) noise tolerance and stateless key reuse in the Bounded Storage Model, resolving the main open problem of Ding [10]. The heart of our constructions is the design of strong randomness extractors with the property that the source W can be recovered from the extracted randomness and any string W′ which is close to W.
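The recovery property in the last sentence can be illustrated with the classical code-offset construction. The abstract does not give a construction, so the repetition code and parameters below are assumptions for illustration only; this toy sketch shows recoverability, not the privacy guarantee that is the paper's actual contribution.

```python
import secrets

R = 5  # repetition factor; each block tolerates up to 2 flipped bits

def encode(bits):
    """Repetition-code encoder: each message bit becomes R copies."""
    return [b for b in bits for _ in range(R)]

def decode(codeword):
    """Majority-vote decoder, one message bit per R-bit block."""
    return [int(sum(codeword[i:i + R]) > R // 2)
            for i in range(0, len(codeword), R)]

def sketch(w):
    """Publish s = w XOR c for a fresh random codeword c (code-offset sketch)."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // R)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_prime, s):
    """Recover w from a noisy copy w' and the public sketch s:
    w' XOR s = c XOR error, so decoding and re-encoding removes the error."""
    c = encode(decode([wi ^ si for wi, si in zip(w_prime, s)]))
    return [si ^ ci for si, ci in zip(s, c)]
```

A plain code-offset sketch like this leaks information about W; the paper's point is that its extractors can additionally hide every function f(W).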
On notions of security for deterministic encryption, and efficient constructions without random oracles. Full version of this paper, 2008
Abstract
Cited by 44 (0 self)
Abstract. The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic encryption schemes without random oracles. To do so, we propose a slightly weaker notion of security, saying that no partial information about encrypted messages should be leaked as long as each message is a-priori hard to guess given the others (while PRIV did not have the latter restriction). Nevertheless, we argue that this version seems adequate for many practical applications. We show equivalence of this definition to single-message and indistinguishability-based ones, which are easier to work with. Then we give general constructions of both chosen-plaintext (CPA) and chosen-ciphertext-attack (CCA) secure deterministic encryption schemes, as well as efficient instantiations of them under standard number-theoretic assumptions. Our constructions build on the recently introduced framework of Peikert and Waters (STOC ’08) for constructing CCA-secure probabilistic encryption schemes, extending it to the deterministic-encryption setting as well.
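The RO-model baseline that this line of work improves on is "Encrypt-with-Hash": derive the coins of a randomized scheme deterministically from the public key and the message. A minimal sketch over toy ElGamal follows; the group parameters are illustrative and far too small for real use, and the names are ours, not the paper's.

```python
import hashlib

# Toy ElGamal group: Z_p^* with p a Mersenne prime -- illustrative sizes only.
p = 2**61 - 1
g = 3

def keygen(x):
    """Secret key x, public key h = g^x mod p."""
    return pow(g, x, p)

def det_encrypt(h, m):
    """Encrypt-with-Hash: ElGamal randomness r is derived from (pk, m),
    so equal messages under the same key yield equal ciphertexts."""
    r = int.from_bytes(hashlib.sha256(f"{h}|{m}".encode()).digest(), "big") % (p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def det_decrypt(x, ct):
    """Standard ElGamal decryption: m = c2 * (c1^x)^-1 mod p."""
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p  # inverse via c1^(p-1) = 1
```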
Robust Key Generation from Signal Envelopes in Wireless Networks, 2007
Abstract
Cited by 34 (0 self)
The broadcast nature of a wireless link provides a natural eavesdropping and intervention capability to an adversary. Thus, securing a wireless link is essential to the security of a wireless network, and key generation algorithms are necessary for securing wireless links. However, traditional key agreement algorithms are not suitable for wireless ad-hoc networks since they consume scarce resources such as bandwidth and battery power. This paper presents a novel approach that couples the physical-layer characteristics of wireless networks with key generation algorithms. It is based on the wireless communication phenomenon known as the principle of reciprocity, which states that in the absence of interference both transmitter and receiver experience the same signal envelope. The key observation here is that the signal envelope can provide the two transceivers with two correlated random sources containing sufficient entropy to extract a cryptographic key. In contrast, it is virtually impossible for a third party, which is not located at one of the transceivers’ positions, to obtain or ...
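A common way to turn an envelope trace into key bits is threshold quantization with a guard band around the mean: samples well above the mean become 1, well below become 0, and ambiguous samples are dropped (each side then announces its dropped indices so both keep only commonly held positions). The abstract does not specify a quantizer, so the scheme and guard parameter below are assumptions for illustration.

```python
import statistics

def quantize(samples, guard=0.5):
    """Quantize an envelope trace into bits.

    Samples inside the guard band (mean +/- guard * stddev) are dropped,
    which is what makes the two correlated traces likely to agree.
    Returns (bits, kept_indices); kept_indices would be exchanged in the
    clear so both transceivers retain only positions neither side dropped.
    """
    mu = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    bits, kept = [], []
    for i, s in enumerate(samples):
        if s > mu + guard * sd:
            bits.append(1)
            kept.append(i)
        elif s < mu - guard * sd:
            bits.append(0)
            kept.append(i)
    return bits, kept
```

The resulting bit strings still differ in a few positions in practice, which is why such schemes are typically followed by information reconciliation and privacy amplification.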
Deterministic Encryption: Definitional Equivalences and Constructions without Random Oracles, 2008
Abstract
Cited by 33 (7 self)
We strengthen the foundations of deterministic public-key encryption via definitional equivalences and standard-model constructions based on general assumptions. Specifically, we consider seven notions of privacy for deterministic encryption, including six forms of semantic security and an indistinguishability notion, and show them all equivalent. We then present a deterministic scheme for the secure encryption of uniformly and independently distributed messages based solely on the existence of trapdoor one-way permutations. We show a generalization of the construction that allows secure deterministic encryption of independent high-entropy messages. Finally, we show relations between deterministic and standard (randomized) encryption.
A provable-security treatment of the key-wrap problem. In EUROCRYPT 2006, LNCS 4004, 2006
Abstract
Cited by 25 (4 self)
Abstract. We give a provable-security treatment for the key-wrap problem, providing definitions, constructions, and proofs. We suggest that key-wrap’s goal is security in the sense of deterministic authenticated-encryption (DAE), a notion that we put forward. We also provide an alternative notion, a pseudorandom injection (PRI), which we prove to be equivalent. We provide a DAE construction, SIV, analyze its concrete security, develop a block-cipher-based instantiation of it, and suggest that the method makes a desirable alternative to the schemes of the X9.102 draft standard. The construction incorporates a method to turn a PRF that operates on a string into an equally efficient PRF that operates on a vector of strings, a problem of independent interest. Finally, we consider IV-based authenticated-encryption (AE) schemes that are maximally forgiving of repeated IVs, a goal we formalize as misuse-resistant AE. We show that a DAE scheme with a vector-valued header, such as SIV, directly realizes this goal.
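The SIV pattern is: synthesize the IV as a PRF of the header and the message, then encrypt under that IV; decryption recomputes the PRF and rejects on mismatch. The toy analogue below substitutes HMAC-SHA256 for the paper's CMAC-based S2V and an HMAC counter stream for AES-CTR; it illustrates the structure only and is not the SIV of the paper or of RFC 5297.

```python
import hashlib
import hmac

def _prf(key, *parts):
    """PRF over a vector of strings; length-prefixing makes the encoding injective."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for part in parts:
        mac.update(len(part).to_bytes(4, "big") + part)
    return mac.digest()

def _stream(key, iv, n):
    """Keystream of n bytes derived from (key, iv) in counter mode."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(key, iv + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def siv_encrypt(k1, k2, header, msg):
    """SIV pattern: IV = PRF(k1, header, msg); ct = stream(k2, IV) XOR msg."""
    iv = _prf(k1, header, msg)
    ct = bytes(a ^ b for a, b in zip(msg, _stream(k2, iv, len(msg))))
    return iv, ct

def siv_decrypt(k1, k2, header, iv, ct):
    """Decrypt, then reject unless the IV re-derives from (header, message)."""
    msg = bytes(a ^ b for a, b in zip(ct, _stream(k2, iv, len(ct))))
    if not hmac.compare_digest(iv, _prf(k1, header, msg)):
        raise ValueError("authentication failed")
    return msg
```

Because the IV is a deterministic function of (header, message), repeating it cannot hurt beyond revealing equality of inputs, which is exactly the misuse-resistance goal.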
Chosen-ciphertext secure key encapsulation based on Gap Hashed Diffie-Hellman. In Proceedings of PKC 2007, volume 4450 of LNCS, pages 282-297, 2007. http://eprint.iacr.org/2007/036
Abstract
Cited by 20 (5 self)
We propose a practical key encapsulation mechanism with a simple and intuitive design concept. Security against chosen-ciphertext attacks can be proved in the standard model under a new assumption, the Gap Hashed Diffie-Hellman (GHDH) assumption. The security reduction is tight and simple. Secure key encapsulation, combined with an appropriately secure symmetric encryption scheme, yields a hybrid public-key encryption scheme which is secure against chosen-ciphertext attacks. The implied encryption scheme is very efficient: compared to the previously most efficient scheme by Kurosawa and Desmedt [Crypto 2004], it has 128-bit shorter ciphertexts, between 25-50% shorter public/secret keys, and it is slightly more efficient in terms of encryption/decryption speed. Furthermore, our scheme enjoys (the option of) public verifiability of the ciphertexts, and it inherits all practical advantages of secure hybrid encryption. Our results extend to key encapsulation mechanisms based on the class of Gap Hashed Multi-Diffie-Hellman (GHMDH) assumptions, which is a natural generalization of GHDH.
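The KEM/DEM pattern behind hybrid encryption can be sketched with a plain hashed Diffie-Hellman encapsulation: the sender's ephemeral share is the encapsulation, and the hashed shared secret is the symmetric key handed to the data encapsulation mechanism. This is a generic hashed-DH sketch with toy parameters, not the paper's GHDH scheme, and it makes no chosen-ciphertext security claim.

```python
import hashlib
import secrets

# Toy Diffie-Hellman group -- illustrative sizes only, not secure.
p, g = 2**61 - 1, 3

def encapsulate(pk):
    """KEM half: ephemeral share c = g^r and key K = H(pk^r)."""
    r = secrets.randbelow(p - 2) + 1
    shared = pow(pk, r, p)
    key = hashlib.sha256(f"{shared}".encode()).digest()
    return pow(g, r, p), key

def decapsulate(sk, c):
    """Receiver recomputes K = H(c^sk) = H(g^(r*sk))."""
    shared = pow(c, sk, p)
    return hashlib.sha256(f"{shared}".encode()).digest()

def dem_encrypt(key, msg):
    """Toy DEM: XOR with a hash-derived pad (messages up to 32 bytes)."""
    pad = hashlib.sha256(key + b"stream").digest()
    return bytes(a ^ b for a, b in zip(msg, pad))
```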
Hedged public-key encryption: How to protect against bad randomness. IACR ePrint Archive, 2009. Full version of this paper
Abstract
Cited by 20 (10 self)
Abstract. Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the possible lack of good randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO-model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs, which we believe is of independent interest.
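The "simple RO-based way" to hedge is to pass the supplied coins through a hash together with the public key and message, then run the original randomized scheme on the derived coins. A minimal sketch of that coin-derivation step (function name and encoding are ours):

```python
import hashlib

def hedged_coins(pk: bytes, msg: bytes, rand: bytes, nbytes: int = 32) -> bytes:
    """RO-style hedging: final coins = H(pk, msg, supplied randomness).

    If rand is uniform, the coins are fresh per message (IND-CPA path);
    if rand is bad, the coins are still as unpredictable as (msg, rand)
    jointly (the weaker IND-CDA path). Length-prefixing keeps the
    encoding of the three inputs injective.
    """
    h = hashlib.sha256()
    for part in (pk, msg, rand):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.digest()[:nbytes]
```

The hedged scheme is then Enc(pk, msg; hedged_coins(pk, msg, rand)) for any existing randomized Enc, which is why the change is minimal in software.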
A unified approach to deterministic encryption: New constructions and a connection to computational entropy. In TCC 2012, volume 7194 of LNCS, 2012
Abstract
Cited by 11 (1 self)
We propose a general construction of deterministic encryption schemes that unifies prior work and gives novel schemes. Specifically, its instantiations provide:
• A construction from any trapdoor function that has sufficiently many hardcore bits.
• A construction that provides “bounded” multi-message security from lossy trapdoor functions.
The security proofs for these schemes are enabled by three tools that are of broader interest:
• A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M | E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.)
• A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log2(1/p).
• A generalization of the leftover hash lemma to correlated distributions.
We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.
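The two conditioning claims can be restated in standard HILL-entropy notation; the (ε, s) subscript convention and the residual circuit size s′ below are assumptions about the paper's formalism, not taken from the abstract.

```latex
% Conditioning on an event E with \Pr[E] = p: quality degrades by a
% factor of p, quantity by \log_2(1/p).
H^{\mathrm{HILL}}_{\epsilon/p,\; s'}\bigl(X \mid E\bigr)
  \;\ge\; H^{\mathrm{HILL}}_{\epsilon,\; s}(X) \;-\; \log_2 \tfrac{1}{p}

% Average-case analogue for leakage resilience: leaking a \lambda-bit
% value Z costs a 2^{\lambda} factor in quality and \lambda bits in quantity.
\widetilde{H}^{\mathrm{HILL}}_{\epsilon \cdot 2^{\lambda},\; s'}\bigl(X \mid Z\bigr)
  \;\ge\; H^{\mathrm{HILL}}_{\epsilon,\; s}(X) \;-\; \lambda
```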
Obfuscating Point Functions with Multibit Output. In EUROCRYPT 2008
Abstract
Cited by 9 (3 self)
Abstract. We study obfuscation of point functions with multibit output and other related functions. A point function with multibit output returns a string on a single input point and zero everywhere else. We provide a construction that obfuscates these functions. The construction is generic in the sense that it can use any perfectly one-way (POW) function or obfuscator for point functions. Analyzing this construction reveals gaps in the definition of obfuscation, specifically, that it does not guarantee security even under self-composition, a property needed in our analysis. Thus, we use obfuscation secure under composition. In particular, we show that composable obfuscation of multibit point functions exists if and only if composable obfuscation of point functions exists. Moreover, we show that this construction is secure based on statistically indistinguishable POW functions. However, if we relax the assumption to computational indistinguishability, then the construction satisfies a weaker notion of obfuscation. Finally, the same technique can be used to obfuscate set-membership predicates and functions, for polynomial-size sets.
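The functionality being obfuscated (return y on the single input x, nothing otherwise) can be sketched with a random-oracle stand-in for the POW function: store a salted hash of x as the membership check and mask y with a second value derived from x. This is an illustrative RO-style sketch, not the paper's generic composable construction.

```python
import hashlib
import secrets

def obfuscate(x: bytes, y: bytes):
    """Toy obfuscation of the multibit point function x -> y.

    The program reveals only a salted hash of x (the point check) and
    y XOR pad(x) (the masked output); both are useless without x.
    """
    salt = secrets.token_bytes(16)
    tag = hashlib.sha256(salt + b"tag" + x).digest()
    pad = hashlib.sha256(salt + b"pad" + x).digest()[:len(y)]
    return salt, tag, bytes(a ^ b for a, b in zip(y, pad))

def evaluate(obf, x: bytes):
    """Run the obfuscated program: y on the hidden point, None elsewhere."""
    salt, tag, masked = obf
    if hashlib.sha256(salt + b"tag" + x).digest() != tag:
        return None
    pad = hashlib.sha256(salt + b"pad" + x).digest()[:len(masked)]
    return bytes(a ^ b for a, b in zip(masked, pad))
```

The paper's analysis is precisely about what happens when several such programs for related points are composed, which a single hash-based instance like this does not address.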
Entropic security in quantum cryptography, 2007
Abstract
Cited by 8 (1 self)
Abstract. We present two new definitions of security for quantum ciphers, inspired by the definitions of entropic security and entropic indistinguishability given by Dodis and Smith. We prove the equivalence of these two new definitions. We also propose a generalization of a cipher described by Dodis and Smith and show that it can encrypt n qubits using fewer than n bits of key under reasonable conditions and yet be secure in an information-theoretic setting. This cipher also entirely closes the gap between the key requirements of quantum ciphers and classical ciphers.