Results 1–10 of 22
Message-Locked Encryption and Secure Deduplication
, 2012
Abstract

Cited by 26 (2 self)
We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard-model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
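The simplest instance of an MLE scheme is convergent encryption, where the key is just a hash of the message. The sketch below is an illustrative toy, not the paper's construction: the hash-counter keystream stands in for a real cipher, and the function names are ours. It shows the core property that identical plaintexts yield identical ciphertexts and tags, which is exactly what lets a server deduplicate.

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream standing in for a real cipher (toy only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def mle_encrypt(message: bytes):
    key = hashlib.sha256(message).digest()   # K = H(M): message-derived key
    ct = bytes(m ^ s for m, s in zip(message, _stream(key, len(message))))
    tag = hashlib.sha256(ct).hexdigest()     # T = H(C): deduplication tag
    return key, ct, tag

def mle_decrypt(key: bytes, ct: bytes) -> bytes:
    # XOR with the same keystream inverts encryption.
    return bytes(c ^ s for c, s in zip(ct, _stream(key, len(ct))))
```

Because encryption is deterministic in the message, two clients uploading the same file produce the same tag and the server stores one copy; the flip side, as the definitions above anticipate, is that such schemes can only be secure for unpredictable messages.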
Better security for deterministic public-key encryption: The auxiliary-input setting
 CRYPTO 2011, volume 6841 of LNCS
, 2011
Abstract

Cited by 18 (1 self)
Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary’s point of view. In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary’s point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees. We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs.
Function-private identity-based encryption: Hiding the function in functional encryption
 Advances in Cryptology – CRYPTO ’13. Available as Cryptology ePrint Archive, Report 2013/283
, 2013
Abstract

Cited by 16 (3 self)
We put forward a new notion, function privacy, in identity-based encryption and, more generally, in functional encryption. Intuitively, our notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. This is motivated by the need for providing predicate privacy in public-key searchable encryption. Formalizing such a notion, however, is not straightforward as given a decryption key it is always possible to learn some information on its corresponding identity by testing whether it correctly decrypts ciphertexts that are encrypted for specific identities. In light of such an inherent difficulty, any meaningful notion of function privacy must be based on the minimal assumption that, from the adversary’s point of view, identities that correspond to its given decryption keys are sampled from somewhat unpredictable distributions. We show that this assumption is in fact sufficient for obtaining a strong and realistic notion of function privacy. Loosely speaking, our framework requires that a decryption key corresponding to an identity sampled from any sufficiently unpredictable distribution is indistinguishable from a decryption key corresponding to an independently and uniformly sampled identity. Within our framework we develop an approach for designing function-private identity-based encryption schemes, leading to constructions that are based on standard assumptions in bilinear groups (DBDH, DLIN) and lattices (LWE). In addition to function privacy, our schemes are also anonymous, and thus yield the first public-key searchable encryption schemes that are provably
Poly-Many Hardcore Bits for Any One-Way Function
, 2014
Abstract

Cited by 11 (2 self)
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. In the case the one-way function is injective or has polynomially-bounded preimage size, we assume the existence of indistinguishability obfuscation (iO). In the general case, we assume the existence of differing-input obfuscation (diO), but of a form weaker than full auxiliary-input diO. Our construction for injective one-way functions extends to extract hardcore bits on multiple, correlated inputs, yielding new DPKE schemes.
Learning with Rounding, Revisited: New Reduction, Properties and Applications
Abstract

Cited by 10 (0 self)
The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT ’12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are superpolynomial. In this work we resolve the main open problem of [BPR12] and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than superpolynomial) approximation factors. As a tool in the reduction, we show that there is a “lossy mode” for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple new constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. [GKPV10] from ICS ’10, which implicitly showed the existence of a “lossy mode” for LWE. By refining this technique, we also improve on the parameters of that work to only requiring a polynomial (instead of superpolynomial) modulus and modulus-to-error ratio.
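The LWE-vs-LWR contrast the abstract describes fits in a few lines. The parameters below are ours and far too small to be secure; the point is only that an LWR sample replaces LWE's random error e with deterministic rounding from Z_q down to Z_p (floor rounding shown here; the problem is often stated with nearest-integer rounding).

```python
import random

q, p, n = 257, 16, 8   # large modulus q, rounding modulus p, dimension n (toy)

def round_qp(x: int) -> int:
    # Deterministic rounding Z_q -> Z_p: floor((p/q) * x) mod p.
    return (x * p // q) % p

def lwr_sample(s: list[int]) -> tuple[list[int], int]:
    # LWR sample (a, round(<a, s> mod q)): no random error term at all.
    a = [random.randrange(q) for _ in range(n)]
    b = round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    return a, b

def lwe_sample(s: list[int], err_bound: int = 2) -> tuple[list[int], int]:
    # For contrast: LWE sample (a, <a, s> + e mod q) with a random small error e.
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-err_bound, err_bound)
    return a, (sum(ai * si for ai, si in zip(a, s)) + e) % q
```

In the LWR sample the second component is a deterministic function of (a, s), which is what makes LWR a natural tool for the deterministic encryption schemes and reusable extractors mentioned above.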
Instantiating Random Oracles via UCEs
, 2013
Abstract

Cited by 9 (3 self)
This paper provides a (standard-model) notion of security for (keyed) hash functions, called UCE, that we show enables instantiation of random oracles (ROs) in a fairly broad and systematic way. Goals and schemes we consider include deterministic PKE; message-locked encryption; hardcore functions; point-function obfuscation; OAEP; encryption secure for key-dependent messages; encryption secure under related-key attack; proofs of storage; and adaptively-secure garbled circuits with short tokens. We can take existing, natural and efficient ROM schemes and show that the instantiated scheme resulting from replacing the RO with a UCE function is secure in the standard model. In several cases this results in the first standard-model schemes for these goals. The definition of UCE-security itself is quite simple, asking that outputs of the function look random given some “leakage,” even if the adversary knows the key, as long as the leakage does not permit the adversary to compute the inputs.
Overcoming weak expectations
, 2012
Abstract

Cited by 9 (2 self)
Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of (min-) entropy. From a formal point of view, such results require upper bounding the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper bounds such a ‘weak expectation’ by two terms, the first of which is independent of f, while the second only depends on the ‘variance’ of f under the uniform distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality, we obtain some ‘unexpected’ results, in several cases noticeably simplifying/improving prior techniques for the same problem. Examples include non-malleable extractors, leakage-resilient symmetric encryption, an alternative to the dense model theorem, seed-dependent condensers and improved entropy loss for the leftover hash lemma.
Deterministic Public-Key Encryption for Adaptively Chosen Plaintext Distributions
, 2013
Abstract

Cited by 7 (1 self)
Bellare, Boldyreva, and O’Neill (CRYPTO ’07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially-chosen plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme. We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by 2^p, where p can be any predetermined polynomial in the security parameter. For example, with p = 0 we capture plaintext distributions that are independent of the public key, and with p = O(s log s)
A Counterexample to the Chain Rule for Conditional HILL Entropy
, 2013
Abstract

Cited by 3 (0 self)
Most entropy notions H(·) like Shannon or min-entropy satisfy a chain rule stating that for random variables X, Z and A we have H(X|Z,A) ≥ H(X|Z) − |A|. That is, by conditioning on A the entropy of X can decrease by at most the bit-length |A| of A. Such chain rules are known to hold for some computational entropy notions like Yao’s and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem for which we give a strong negative answer: We construct joint distributions (X, Z, A), where A is a distribution over a single bit, such that the HILL entropy H^HILL(X|Z) is large but H^HILL(X|Z,A) is basically zero. Our counterexample just makes the minimal assumption that NP ⊄ P/poly. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects like lossy functions can be used to sample a distribution constituting a counterexample to the chain rule making only a single invocation to the underlying object.
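For intuition, the chain rule is easy to check numerically in the information-theoretic setting, where it does hold (the paper's counterexample concerns the computational HILL notion, which this toy does not model). The distribution and helper names below are illustrative, not from the paper: conditioning on a single-bit A costs at most one bit of min-entropy.

```python
import math
from collections import defaultdict

def min_entropy(dist: dict) -> float:
    # H_inf(X) = -log2 max_x Pr[X = x]
    return -math.log2(max(dist.values()))

def avg_cond_min_entropy(joint: dict) -> float:
    # Average conditional min-entropy:
    # H~_inf(X|A) = -log2 sum_a max_x Pr[X = x, A = a]
    best = defaultdict(float)
    for (x, a), pr in joint.items():
        best[a] = max(best[a], pr)
    return -math.log2(sum(best.values()))

# Joint distribution over (X, A) with A a single bit, so |A| = 1.
joint = {("x0", 0): 0.25, ("x1", 0): 0.25, ("x2", 1): 0.25, ("x3", 1): 0.25}
marginal = defaultdict(float)
for (x, _), pr in joint.items():
    marginal[x] += pr

h_x = min_entropy(marginal)                # uniform over 4 values: 2 bits
h_x_given_a = avg_cond_min_entropy(joint)  # A halves the candidates: 1 bit
assert h_x_given_a >= h_x - 1              # chain rule: lose at most |A| = 1 bit
```

The paper's result is exactly that no analogue of this inequality survives for conditional HILL entropy in general: there, revealing a single bit A can collapse the entropy entirely.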