Results 1–10 of 78
Non-Malleable Cryptography
SIAM Journal on Computing, 2000
Abstract

Cited by 454 (22 self)
The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined. Informally, in the context of encryption the additional requirement is that given the ciphertext it is impossible to generate a different ciphertext so that the respective plaintexts are related. The same concept makes sense in the contexts of string commitment and zero-knowledge proofs of possession of knowledge. Non-malleable schemes for each of these three problems are presented. The schemes do not assume a trusted center; a user need not know anything about the number or identity of other system users. Our cryptosystem is the first proven to be secure against a strong type of chosen ciphertext attack proposed by Rackoff and Simon, in which the attacker knows the ciphertext she wishes to break and can query the decryption oracle on any ciphertext other than the target.
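The malleability being ruled out here is easy to see in a toy XOR cipher (an illustrative sketch, not the paper's construction): an attacker who never sees the key can still turn a ciphertext into a different ciphertext whose plaintext is related to the original.

```python
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # One-time-pad-style XOR cipher: semantically secure when the key is
    # fresh and random, yet completely malleable.
    return bytes(k ^ d for k, d in zip(key, data))

key = os.urandom(8)
ct = xor_cipher(key, b"PAY $100")

# An attacker who merely guesses the message format can rewrite it:
# XOR-ing the ciphertext with (old XOR new) swaps the plaintexts,
# without the attacker ever touching the key.
delta = bytes(a ^ b for a, b in zip(b"PAY $100", b"PAY $900"))
forged = bytes(c ^ d for c, d in zip(ct, delta))

assert xor_cipher(key, forged) == b"PAY $900"
```

Non-malleable encryption makes exactly this kind of related-ciphertext forgery infeasible.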
Security and Privacy Aspects of Low-Cost Radio Frequency Identification Systems
2003
Abstract

Cited by 212 (5 self)
Like many technologies, low-cost Radio Frequency Identification (RFID) systems will become pervasive in our daily lives when affixed to everyday consumer items as "smart labels". While yielding great productivity gains, RFID systems may create new threats to the security and privacy of individuals or organizations. This paper presents a brief description of RFID systems and their operation. We describe privacy and security risks and how they apply to the unique setting of low-cost RFID devices. We propose several security mechanisms and suggest areas for future research.
On the (im)possibility of obfuscating programs
Lecture Notes in Computer Science, 2001
Abstract

Cited by 199 (10 self)
Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P′ that computes the same function as a program P ∈ P, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ P, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC^0). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
Correcting errors without leaking partial information
In 37th Annual ACM Symposium on Theory of Computing (STOC), 2005
Abstract

Cited by 57 (9 self)
This paper explores what kinds of information two parties must communicate in order to correct errors which occur in a shared secret string W. Any bits they communicate must leak a significant amount of information about W — that is, from the adversary’s point of view, the entropy of W will drop significantly. Nevertheless, we construct schemes with which Alice and Bob can prevent an adversary from learning any useful information about W. Specifically, if the entropy of W is sufficiently high, then there is no function f(W) which the adversary can learn from the error-correction information with significant probability. This leads to several new results: (a) the design of noise-tolerant “perfectly one-way” hash functions in the sense of Canetti et al. [7], which in turn leads to obfuscation of proximity queries for high entropy secrets W; (b) private fuzzy extractors [11], which allow one to extract uniformly random bits from noisy and non-uniform data W, while also ensuring that no sensitive information about W is leaked; and (c) noise tolerance and stateless key reuse in the Bounded Storage Model, resolving the main open problem of Ding [10]. The heart of our constructions is the design of strong randomness extractors with the property that the source W can be recovered from the extracted randomness and any string W′ which is close to W.
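The recovery property in the last sentence — getting W back from the published information and any nearby W′ — can be illustrated with the classic code-offset construction, here over a toy 3-repetition code (a minimal sketch; the paper's constructions use far better codes and add the privacy guarantees):

```python
import secrets

# Code-offset construction: publish sketch = w XOR c for a random
# codeword c; later, anyone holding w' close to w computes
# w' XOR sketch, decodes to the nearest codeword, and recovers w.

def random_codeword(blocks: int) -> list[int]:
    # Each block repeats one random bit three times (3x repetition code).
    return [b for _ in range(blocks) for b in [secrets.randbelow(2)] * 3]

def decode(bits: list[int]) -> list[int]:
    # Majority vote within each 3-bit block corrects one error per block.
    out = []
    for i in range(0, len(bits), 3):
        bit = 1 if sum(bits[i:i + 3]) >= 2 else 0
        out.extend([bit] * 3)
    return out

w = [1, 0, 1, 1, 0, 0, 1, 0, 1]         # secret string (3 blocks)
c = random_codeword(3)
sketch = [a ^ b for a, b in zip(w, c)]  # public error-correction info

w_noisy = w[:]
w_noisy[4] ^= 1                         # one bit-flip of noise

shifted = [a ^ b for a, b in zip(w_noisy, sketch)]  # = c with one error
recovered = [a ^ b for a, b in zip(decode(shifted), sketch)]
assert recovered == w
```

The sketch necessarily leaks entropy about w (as the paper notes); the paper's contribution is ensuring no *useful* function of w leaks.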
Deterministic Encryption: Definitional Equivalences and Constructions without Random Oracles
2008
Abstract

Cited by 34 (8 self)
We strengthen the foundations of deterministic public-key encryption via definitional equivalences and standard-model constructs based on general assumptions. Specifically we consider seven notions of privacy for deterministic encryption, including six forms of semantic security and an indistinguishability notion, and show them all equivalent. We then present a deterministic scheme for the secure encryption of uniformly and independently distributed messages based solely on the existence of trapdoor one-way permutations. We show a generalization of the construction that allows secure deterministic encryption of independent high-entropy messages. Finally we show relations between deterministic and standard (randomized) encryption.
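Why determinism forces the high-entropy restriction can be seen in a toy sketch (HMAC under a secret key as a stand-in for a deterministic map; it is not decryptable and is not the paper's trapdoor-permutation construction): any deterministic scheme unavoidably leaks plaintext equality.

```python
import hashlib
import hmac
import os

# Toy stand-in for deterministic "encryption": a keyed deterministic map.
key = os.urandom(16)

def det_enc(msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Identical plaintexts always produce identical ciphertexts, so an
# eavesdropper comparing ciphertexts learns whether messages repeat.
# This is harmless only when messages are drawn from a high-entropy
# distribution, hence the restriction in the definitions above.
c1 = det_enc(b"attack at dawn")
c2 = det_enc(b"attack at dawn")
c3 = det_enc(b"retreat at dusk")
assert c1 == c2 and c1 != c3
```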
Positive results and techniques for obfuscation
In EUROCRYPT ’04, 2004
Abstract

Cited by 24 (0 self)
Informally, an obfuscator O is an efficient, probabilistic “compiler” that transforms a program P into a new program O(P) with the same functionality as P, but such that O(P) protects any secrets that may be built into and used by P. Program obfuscation, if possible, would have numerous important cryptographic applications, including: (1) “Intellectual property” protection of secret algorithms and keys in software, (2) Solving the long-standing open problem of homomorphic public-key encryption, (3) Controlled delegation of authority and access, (4) Transforming Private-Key Encryption into Public-Key Encryption, and (5) Access Control Systems. Unfortunately, however, program obfuscators that work on arbitrary programs cannot exist [1]. No positive results for program obfuscation were known prior to this work. In this paper, we provide the first positive results in program obfuscation. We focus on the goal of access control, and give several provable obfuscations for complex access control functionalities, in the random oracle model. Our results are obtained through nontrivial compositions of obfuscations; we note that general composition of obfuscations is impossible, and so developing techniques for composing obfuscations is an important goal. Our work can also be seen as making initial progress toward the goal of obfuscating finite automata or regular expressions, an important general class of machines which are not ruled out by the impossibility results of [1]. We also note that our work provides the first formal proof techniques for obfuscation, which we expect to be useful in future work in this area.
Entropic Security and the Encryption of High Entropy Messages
Abstract

Cited by 23 (6 self)
We study entropic security, an information-theoretic notion of security introduced by Russell and Wang [24] in the context of encryption and by Canetti et al. [5, 6] in the context of hash functions. Informally, a probabilistic map Y = E(X) (e.g., an encryption scheme or a hash function) is entropically secure if knowledge of Y does not help predicting any predicate of X, whenever X has high min-entropy from the adversary’s point of view. On one hand, we strengthen the formulation of [5, 6, 24] and show that entropic security in fact implies that Y does not help predicting any function of X (as opposed to a predicate), bringing this notion closer to the conventional notion of semantic security [10]. On the other hand, we also show that entropic security is equivalent to indistinguishability on pairs of input distributions of sufficiently high entropy, which is in turn related to randomness extraction from non-uniform distributions [21]. We then use the equivalence above, and the connection to randomness extraction, to prove several new results on entropically-secure encryption. First, we give two general frameworks for constructing entropically secure encryption schemes: one based on expander graphs and the other on XOR-universal hash functions. These schemes generalize the schemes of Russell and Wang, yielding simpler constructions and proofs, as well as improved parameters. To encrypt an n-bit message of min-entropy t while allowing at most ε advantage to the adversary, our best schemes use a shared secret key of length k = n − t + 2 log(1/ε). Second, we obtain lower ...
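The XOR-universal-hash route mentioned above can be sketched at toy scale, with single-byte messages and multiplication in GF(2^8) playing the role of the hash family (a purely structural sketch; the parameters here carry no real security and this is not the paper's exact scheme):

```python
import secrets

def gf_mul(a: int, b: int) -> int:
    # Carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
    return p

def enc(key: int, msg: int) -> tuple[int, int]:
    # Mask the message with h_i(key) = i * key for a fresh index i sent
    # in the clear, mirroring the (index, message XOR hash-of-key) shape
    # of XOR-universal-hash schemes.
    i = 1 + secrets.randbelow(255)  # nonzero public index
    return i, msg ^ gf_mul(i, key)

def dec(key: int, ct: tuple[int, int]) -> int:
    i, masked = ct
    return masked ^ gf_mul(i, key)

key = 0x5C
assert dec(key, enc(key, 0xA7)) == 0xA7  # round trip
```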
On the impossibility of obfuscation with auxiliary input
In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’05), 2005
Abstract

Cited by 20 (1 self)
Barak et al. formalized the notion of obfuscation, and showed that there exist (contrived) classes of functions that cannot be obfuscated. In contrast, Canetti and Wee showed how to obfuscate point functions, under various complexity assumptions. Thus, it would seem possible that most programs of interest can be obfuscated even though in principle general purpose obfuscators do not exist. We show that this is unlikely to be the case. In particular, we consider the notion of obfuscation w.r.t. auxiliary input, which corresponds to the setting where the adversary, which is given the obfuscated circuit, may have some additional a priori information. This is essentially the case of interest in any usage of obfuscation we can imagine. We prove that there exist many natural classes of functions that cannot be obfuscated w.r.t. auxiliary input, both when the auxiliary input is dependent on the function being obfuscated and even when the auxiliary input is independent of the function being obfuscated. We also give a positive result. In particular, we show that any obfuscator for the class of point functions is also an obfuscator w.r.t. independent auxiliary input.
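The point-function obfuscation referred to above can be sketched in the random-oracle spirit with a salted hash (an informal sketch; the cited constructions rest on precise complexity assumptions):

```python
import hashlib
import hmac
import os

def obfuscate_point(secret: bytes):
    # Publish only (salt, H(salt || secret)): the returned program
    # computes the same point function, but under idealized-hash
    # assumptions the secret itself stays hidden.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + secret).digest()

    def check(x: bytes) -> bool:
        return hmac.compare_digest(hashlib.sha256(salt + x).digest(), digest)

    return check

is_secret = obfuscate_point(b"hunter2")
assert is_secret(b"hunter2") and not is_secret(b"guess")
```

Auxiliary input matters here: an adversary who already holds, say, a dictionary of likely secrets can test them against the published pair, which is exactly the kind of a priori information the paper's definitions model.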