Results 1–10 of 241
Non-Malleable Cryptography
SIAM Journal on Computing, 2000
Cited by 454 (22 self)
Abstract:
The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined. Informally, in the context of encryption the additional requirement is that, given a ciphertext, it is impossible to generate a different ciphertext so that the respective plaintexts are related. The same concept makes sense in the contexts of string commitment and zero-knowledge proofs of possession of knowledge. Non-malleable schemes for each of these three problems are presented. The schemes do not assume a trusted center; a user need not know anything about the number or identity of other system users. Our cryptosystem is the first proven to be secure against a strong type of chosen-ciphertext attack proposed by Rackoff and Simon, in which the attacker knows the ciphertext she wishes to break and can query the decryption oracle on any ciphertext other than the target.
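The chosen-ciphertext game described in this abstract can be sketched in code. The toy XOR "cipher" below is deliberately malleable and insecure; the point is only the shape of the experiment (the single forbidden oracle query on the challenge ciphertext) and how malleability lets an attacker win it. All names here are illustrative, not from the paper.

```python
# Sketch of the adaptive chosen-ciphertext (CCA2) experiment. The toy XOR
# "encryption" below is NOT secure; it exists only to show how a malleable
# scheme loses the game: a related ciphertext decrypts to a related plaintext.
import os
import secrets

KEY = os.urandom(16)  # toy fixed key; real schemes use proper key generation

def enc(m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(m, KEY))

def dec(c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, KEY))

def cca2_game(adversary) -> bool:
    """Run one CCA2 experiment; return True iff the adversary guesses b."""
    m0, m1 = adversary.choose_messages()      # two equal-length plaintexts
    b = secrets.randbits(1)
    target = enc([m0, m1][b])                 # the challenge ciphertext

    def oracle(c: bytes) -> bytes:
        if c == target:                       # the one forbidden query
            raise ValueError("may not decrypt the challenge itself")
        return dec(c)

    return adversary.guess(target, oracle) == b

class MalleabilityAdversary:
    """Wins by mauling the challenge: flip one ciphertext bit, decrypt the
    related ciphertext (which the oracle allows), and undo the bit flip."""
    def choose_messages(self):
        self.m0, self.m1 = b"A" * 16, b"B" * 16
        return self.m0, self.m1

    def guess(self, target, oracle):
        mauled = bytes([target[0] ^ 1]) + target[1:]   # a *different* ciphertext
        p = oracle(mauled)                             # legal oracle query
        recovered = bytes([p[0] ^ 1]) + p[1:]          # undo the flip
        return 0 if recovered == self.m0 else 1
```

Because flipping a ciphertext bit flips the same plaintext bit, this adversary recovers the challenge plaintext exactly and wins every run, which is precisely the behavior non-malleability rules out.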
Universal One-Way Hash Functions and their Cryptographic Applications
1989
Cited by 316 (13 self)
Abstract:
We define a Universal One-Way Hash Function family, a new primitive which enables the compression of elements in the function domain. The main property of this primitive is that, given an element x in the domain, it is computationally hard to find a different domain element which collides with x. We prove constructively that universal one-way hash functions exist if any 1-1 one-way functions exist. Among the various applications of the primitive is a one-way-based Secure Digital Signature Scheme which is existentially secure against adaptive attacks. Previously, all provably secure signature schemes were based on the stronger mathematical assumption that trapdoor one-way functions exist. Key words: cryptography, randomized algorithms. AMS subject classifications: 68M10, 68Q20, 68Q22, 68R05, 68R10. Part of this work was done while the authors were at the IBM Almaden Research Center. The first author was supported in part by NSF grant CCR-8813632. A preliminary version of this work app...
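The defining property above ("given x, it is hard to find a colliding x'") can be phrased as a game in which the adversary commits to its target before the hash key is drawn, which is what makes the notion weaker than full collision resistance. The keyed family below (truncated SHA-256 with the key prepended) is a stand-in for illustration, not a construction from the paper.

```python
# Sketch of the UOWHF ("target collision resistance") security game: the
# adversary picks its target x BEFORE seeing the random function index k,
# then must produce a different x' with H_k(x') == H_k(x).
import hashlib
import os

def H(k: bytes, x: bytes, out_len: int = 8) -> bytes:
    """A keyed compressing hash family (illustrative stand-in)."""
    return hashlib.sha256(k + x).digest()[:out_len]

def uowhf_game(adversary) -> bool:
    """Return True iff the adversary finds a colliding x' != x."""
    x = adversary.commit()       # step 1: target fixed before the key exists
    k = os.urandom(16)           # step 2: random function index chosen
    x2 = adversary.collide(k)    # step 3: attempt a collision under H_k
    return x2 != x and H(k, x2) == H(k, x)

class Trivial:
    """A do-nothing adversary: returning the target itself never wins."""
    def commit(self):
        self.x = b"hello"
        return self.x

    def collide(self, k):
        return self.x            # not a *different* domain element
```

Note the order of steps: an adversary against ordinary collision resistance gets to pick both colliding inputs after seeing k, which is why UOWHFs can be built from weaker assumptions (any 1-1 one-way function, per the abstract).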
How to Go Beyond the Black-Box Simulation Barrier
In 42nd FOCS, 2001
Cited by 221 (14 self)
Abstract:
The simulation paradigm is central to cryptography. A simulator is an algorithm that tries to simulate the interaction of the adversary with an honest party, without knowing the private input of this honest party. Almost all known simulators use the adversary’s algorithm as a black box. We present the first constructions of non-black-box simulators. Using these new non-black-box techniques we obtain several results that were previously proven to be impossible to obtain using black-box simulators. Specifically, assuming the existence of collision-resistant hash functions, we construct a new zero-knowledge argument system for NP that satisfies the following properties: 1. This system has a constant number of rounds with negligible soundness error. 2. It remains zero-knowledge even when composed concurrently n times, where n is the security parameter. Simultaneously obtaining 1 and 2 has recently been proven impossible to achieve using black-box simulators. 3. It is an Arthur-Merlin (public-coin) protocol. Simultaneously obtaining 1 and 3 was known to be impossible to achieve with a black-box simulator. 4. It has a simulator that runs in strict polynomial time, rather than in expected polynomial time. All previously known constant-round, negligible-error zero-knowledge arguments utilized expected-polynomial-time simulators.
How to Construct Constant-Round Zero-Knowledge Proof Systems for NP
Journal of Cryptology, 1995
Cited by 160 (8 self)
Abstract:
Constant-round zero-knowledge proof systems for every language in NP are presented, assuming the existence of a collection of claw-free functions. In particular, it follows that such proof systems exist assuming the intractability of either the Discrete Logarithm Problem or the Factoring Problem for Blum Integers.
Pseudonym Systems
1999
Cited by 121 (12 self)
Abstract:
Pseudonym systems allow users to interact with multiple organizations anonymously, using pseudonyms. The pseudonyms cannot be linked, but are formed in such a way that a user can prove to one organization a statement about his relationship with another. Such a statement is called a credential. Previous work in this area did not protect the system against dishonest users who collectively use their pseudonyms and credentials, i.e., share an identity. Previous practical schemes also relied very heavily on the involvement of a trusted center. In the present paper we give a formal definition of pseudonym systems in which users are motivated not to share their identity and the trusted center's involvement is minimal. We give theoretical constructions for such systems based on any one-way function. We also suggest an efficient and easy-to-implement practical scheme. This is joint work with Ronald L. Rivest and Amit Sahai.
On the Concurrent Composition of Zero-Knowledge Proofs
In EuroCrypt '99, Springer LNCS 1592, 1999
Cited by 115 (3 self)
Abstract:
We examine the concurrent composition of zero-knowledge proofs. By concurrent composition, we indicate a single prover that is involved in multiple, simultaneous zero-knowledge proofs with one or multiple verifiers. Under this type of composition it is believed that standard zero-knowledge protocols are no longer zero-knowledge. We show that, modulo certain complexity assumptions, any statement in NP has k^ε-round proofs and arguments in which one can efficiently simulate any k^{O(1)} concurrent executions of the protocol.
Black-Box Concurrent Zero-Knowledge Requires (Almost) Logarithmically Many Rounds
SIAM Journal on Computing, 2002
Cited by 89 (6 self)
Abstract:
We show that any concurrent zero-knowledge protocol for a non-trivial language (i.e., for a language outside BPP), whose security is proven via black-box simulation, must use at least Ω̃(log n) rounds of interaction. This result achieves a substantial improvement over previous lower bounds, and is the first bound to rule out the possibility of constant-round concurrent zero-knowledge when proven via black-box simulation. Furthermore, the bound is polynomially related to the number of rounds in the best known concurrent zero-knowledge protocol for languages in NP (which is established via black-box simulation).
An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries
In EUROCRYPT 2007, Springer-Verlag, LNCS 4515, 2007
Cited by 75 (10 self)
Abstract:
We show an efficient secure two-party protocol, based on Yao’s construction, which provides security against malicious adversaries. Yao’s original protocol is only secure in the presence of semi-honest adversaries, and can be transformed into a protocol that achieves security against malicious adversaries by applying the compiler of Goldreich, Micali and Wigderson (the “GMW compiler”). However, this approach does not seem to be very practical as it requires using generic zero-knowledge proofs. Our construction is based on applying cut-and-choose techniques to the original circuit and inputs. Security is proved according to the ideal/real simulation paradigm, and the proof is in the standard model (with no random-oracle or common-reference-string assumptions). The resulting protocol is computationally efficient: the only usage of asymmetric cryptography is for running O(1) oblivious transfers for each input bit (or for each bit of a statistical security parameter, whichever is larger). Our protocol combines techniques from folklore (like cut-and-choose) along with new techniques for efficiently proving consistency of inputs. We remark that a naive implementation of the cut-and-choose technique with Yao’s protocol does not yield a ...
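The cut-and-choose idea at the heart of this abstract can be illustrated with a toy simulation: one party sends s copies, the other opens a random half and verifies them, and proceeds only if every opened copy checks out. The garbling itself is elided; copies are modelled as booleans (True = honestly constructed), so this is a sketch of the checking phase only, with illustrative names.

```python
# Toy model of the cut-and-choose opening phase: a cheater who corrupts some
# copies escapes detection only if every corrupted copy lands in the
# unopened half, which becomes exponentially unlikely as corruption grows.
import random

def cut_and_choose(copies: list, rng: random.Random) -> bool:
    """Return True iff the checker accepts (all opened copies are honest)."""
    s = len(copies)
    opened = rng.sample(range(s), s // 2)   # open a uniformly random half
    return all(copies[i] for i in opened)

def catch_probability(s: int, bad: int, trials: int = 10_000) -> float:
    """Estimate the probability that a cheater with `bad` corrupt copies
    out of `s` is caught during the opening phase."""
    rng = random.Random(0)                  # fixed seed for reproducibility
    caught = 0
    for _ in range(trials):
        copies = [False] * bad + [True] * (s - bad)
        rng.shuffle(copies)                 # the checker can't tell them apart
        if not cut_and_choose(copies, rng):
            caught += 1
    return caught / trials
```

With s = 40 and a single corrupted copy the cheater escapes the opening phase about half the time, which is why the actual protocol takes a majority over the evaluated copies: swaying that majority requires corrupting many copies, and that is caught with overwhelming probability.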
Attested Append-Only Memory: Making Adversaries Stick to Their Word
In Proc. of SOSP, 2007
Cited by 67 (7 self)
Abstract:
Researchers have made great strides in improving the fault tolerance of both centralized and replicated systems against arbitrary (Byzantine) faults. However, there are hard limits to how much can be done with entirely untrusted components; for example, replicated state machines cannot tolerate more than a third of their replica population being Byzantine. In this paper, we investigate how minimal trusted abstractions can push through these hard limits in practical ways. We propose Attested Append-Only Memory (A2M), a trusted system facility that is small, easy to implement, and easy to verify formally. A2M provides the programming abstraction of a trusted log, which leads to protocol designs immune to equivocation (the ability of a faulty host to lie in different ways to different clients or servers), a common source of Byzantine headaches. Using A2M, we improve upon the state of the art in Byzantine-fault-tolerant replicated state machines, producing A2M-enabled protocols (variants of Castro and Liskov’s PBFT) that remain correct (linearizable) and keep making progress (live) even when half the replicas are faulty, in contrast to the previous upper bound. We also present an A2M-enabled single-server shared storage protocol that guarantees linearizability despite server faults. We implement A2M and our protocols, evaluate them experimentally through micro- and macro-benchmarks, and argue that the improved fault tolerance is cost-effective for a broad range of uses, opening up new avenues for practical, more reliable services.
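The trusted-log abstraction this abstract describes can be sketched minimally: entries may only be appended, each append is bound to its predecessor by a hash chain, and the log attests to any slot it has already filled. Because a filled slot can never be rewritten, a faulty host cannot tell two different stories about the same sequence number. This is an illustrative sketch only; real A2M additionally signs attestations with a device key, which is elided here.

```python
# Minimal sketch of an A2M-style append-only log: no overwrite API exists,
# and each entry's digest chains over its predecessor, so the attestation
# for slot i pins down the entire prefix of the log.
import hashlib

class AppendOnlyLog:
    def __init__(self):
        self._entries = []  # list of (value, chained digest) pairs

    def append(self, value: bytes):
        """Append a value; return its attestation (slot number, digest)."""
        prev = self._entries[-1][1] if self._entries else b"\x00" * 32
        digest = hashlib.sha256(prev + value).digest()
        self._entries.append((value, digest))
        return len(self._entries) - 1, digest

    def lookup(self, slot: int):
        """Attest to exactly what was appended at `slot`; slots are immutable."""
        return self._entries[slot]
```

Two clients who each ask the log about slot 0 necessarily receive the same value and digest, which is the equivocation-freedom the paper's protocols build on.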