Results 1–10 of 71
How to Go Beyond the Black-Box Simulation Barrier
In 42nd FOCS, 2001
Cited by 240 (13 self)
The simulation paradigm is central to cryptography. A simulator is an algorithm that tries to simulate the interaction of the adversary with an honest party, without knowing the private input of this honest party. Almost all known simulators use the adversary’s algorithm as a black box. We present the first constructions of non-black-box simulators. Using these new non-black-box techniques we obtain several results that were previously proven to be impossible to obtain using black-box simulators. Specifically, assuming the existence of collision-resistant hash functions, we construct a new zero-knowledge argument system for NP that satisfies the following properties: 1. This system has a constant number of rounds with negligible soundness error. 2. It remains zero knowledge even when composed concurrently n times, where n is the security parameter. Simultaneously obtaining 1 and 2 has been recently proven to be impossible to achieve using black-box simulators. 3. It is an Arthur-Merlin (public coins) protocol. Simultaneously obtaining 1 and 3 was known to be impossible to achieve with a black-box simulator. 4. It has a simulator that runs in strict polynomial time, rather than in expected polynomial time. All previously known constant-round, negligible-error zero-knowledge arguments utilized expected polynomial-time simulators.
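The simulation paradigm in this abstract can be illustrated with a classic toy example (a hypothetical sketch with illustrative parameters, not the paper's non-black-box construction): the honest-verifier zero-knowledge simulator for Schnorr's identification protocol, which produces correctly distributed transcripts by sampling the challenge and response first and solving for the commitment.

```python
# Illustrative toy example of the simulation paradigm (a hedged sketch,
# NOT the paper's non-black-box technique): an honest-verifier
# zero-knowledge simulator for Schnorr's identification protocol.
import random

# Toy parameters: g generates the order-q subgroup of Z_p^*.
p, q, g = 23, 11, 4

def real_transcript(x):
    """Prover knows the secret x for public key y = g^x mod p."""
    y = pow(g, x, p)
    k = random.randrange(q)
    a = pow(g, k, p)           # prover's commitment
    c = random.randrange(q)    # verifier's challenge
    z = (k + c * x) % q        # prover's response
    return y, a, c, z

def simulated_transcript(y):
    """Simulator sees only the public key y: sample the challenge c and
    response z first, then solve for the commitment a = g^z * y^(-c)."""
    c = random.randrange(q)
    z = random.randrange(q)
    a = (pow(g, z, p) * pow(y, q - c, p)) % p  # y^(q-c) = y^(-c) since ord(y) | q
    return y, a, c, z

def verify(y, a, c, z):
    """Verifier's check: g^z == a * y^c (mod p)."""
    return pow(g, z, p) == (a * pow(y, c, p)) % p
```

Both functions produce transcripts that pass verification, yet the simulator never touches the secret x; this simulator is black-box in the trivial sense, which is exactly the regime the paper's techniques go beyond.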
Lower bounds on the Efficiency of Generic Cryptographic Constructions
In 41st IEEE Symposium on Foundations of Computer Science (FOCS), 2000
Cited by 82 (6 self)
A central focus of modern cryptography is the construction of efficient, “high-level” cryptographic tools (e.g., encryption schemes) from weaker, “low-level” cryptographic primitives (e.g., one-way functions). Of interest are both the existence of such constructions, and their efficiency. Here, we show essentially tight lower bounds on the best possible efficiency of any black-box construction of some fundamental cryptographic tools from the most basic and widely used cryptographic primitives. Our results hold in an extension of the model introduced by Impagliazzo and Rudich, and improve and extend earlier results of Kim, Simon, and Tetali. We focus on constructions of pseudorandom generators, universal one-way hash functions, and digital signatures based on one-way permutations, as well as constructions of public- and private-key encryption schemes based on trapdoor permutations. In each case, we show that any black-box construction beating our efficiency bound would yield the unconditional existence of a one-way function and thus, in particular, prove P != NP.
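As a concrete instance of the kind of black-box construction whose efficiency such bounds constrain, here is a minimal sketch of the classic one-bit-stretch pseudorandom generator from a one-way permutation via the Goldreich-Levin hardcore bit, G(x, r) = f(x) || r || ⟨x, r⟩. The permutation below is only a bijective stand-in, of course not actually one-way.

```python
# Minimal sketch of the classic black-box one-bit-stretch PRG
# G(x, r) = f(x) || r || <x, r>  (Goldreich-Levin hardcore bit).
# `toy_permutation` is a hypothetical stand-in: a bijection on
# N-bit strings, but NOT one-way.

N = 8  # toy input length in bits

def toy_permutation(x):
    # Stand-in for a one-way permutation f on {0,1}^N (bijective since 5 is odd).
    return (5 * x + 3) % (1 << N)

def inner_product_bit(x, r):
    # Hardcore predicate <x, r>: parity of the bitwise AND of x and r.
    return bin(x & r).count("1") % 2

def prg(x, r):
    """Map the 2N-bit seed (x, r) to the (2N+1)-bit output (f(x), r, <x, r>)."""
    return toy_permutation(x), r, inner_product_bit(x, r)
```

Each invocation of f buys one extra pseudorandom bit; the lower bounds in the paper speak to how many such invocations any black-box construction must make to achieve a given stretch.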
Notions of Reducibility between Cryptographic Primitives
2004
Cited by 77 (8 self)
Starting with the seminal paper of Impagliazzo and Rudich [18], there has been a large body of work showing that various cryptographic primitives cannot be reduced to each other via "black-box" reductions.
General Composition and Universal Composability in Secure Multiparty Computation
2007
Cited by 53 (9 self)
Concurrent general composition relates to a setting where a secure protocol is run in a network concurrently with other, arbitrary protocols. Clearly, security in such a setting is what is desired, or even needed, in modern computer networks where many different protocols are executed concurrently. Canetti (FOCS 2001) introduced the notion of universal composability, and showed that security under this definition is sufficient for achieving concurrent general composition. However, it is not known whether or not the opposite direction also holds. Our main result is a proof that security under concurrent general composition, when interpreted in the natural way under the simulation paradigm, is equivalent to a variant of universal composability, where the only difference relates to the order of quantifiers in the definition. (In newer versions of universal composability, these variants are equivalent.) An important corollary of this theorem is that existing impossibility results for universal composability (for all its variants) are inherent for definitions that imply security under concurrent general composition, as formulated here. In particular, there are large classes of two-party functionalities for which it is impossible to obtain protocols (in the plain model) that remain secure under concurrent general composition. We stress that the impossibility results obtained are not "black-box", and apply even to non-black-box simulation. Our main result also demonstrates that the definition of universal composability is somewhat "minimal", in that the composition guarantee provided by universal composability implies the definition itself. This indicates that the security definition of universal composability is not overly restrictive.
Protocols for Bounded-Concurrent Secure Two-Party Computation in the Plain Model
2006
Cited by 48 (7 self)
Until recently, most research on the topic of secure computation focused on the stand-alone model, where a single protocol execution takes place. In this paper, we construct protocols for the setting of bounded-concurrent self-composition, where a (single) secure protocol is run many times concurrently, and there is a predetermined bound on the number of concurrent executions. In short, we show that any two-party functionality can be securely computed under bounded-concurrent self-composition, in the plain model.
On robust combiners for oblivious transfer and other primitives
In Proc. Eurocrypt ’05, 2005
Cited by 39 (1 self)
At the mouth of two witnesses... shall the matter be established. (Deuteronomy, Chapter 19)
Strengthening Zero-Knowledge Protocols using Signatures
In Proceedings of Eurocrypt ’03, LNCS series, 2003
Cited by 39 (8 self)
Recently there has been an interest in zero-knowledge protocols with stronger properties, such as concurrency, unbounded simulation soundness, non-malleability, and universal composability. In this paper,
Zero-Knowledge twenty years after its invention
Electronic Colloquium on Computational Complexity (http://www.eccc.uni-trier.de/eccc/), Report No, 2002
Cited by 38 (0 self)
Zero-knowledge proofs are proofs that are both convincing and yet yield nothing beyond the validity of the assertion being proven. Since their introduction about twenty years ago, zero-knowledge proofs have attracted a lot of attention and have, in turn, contributed to the development of other areas of cryptography and complexity theory.
Round Efficiency of Multi-Party Computation with a Dishonest Majority
In Eurocrypt ’03, LNCS, 2003
Cited by 35 (7 self)
We consider the round complexity of multi-party computation in the presence of a static adversary who controls a majority of the parties. Here, n players wish to securely compute some functionality and up to n − 1 of these players may be arbitrarily malicious. Previous protocols for this setting (when a broadcast channel is available) require O(n) rounds. We present two protocols with improved round complexity: the first assumes only the existence of trapdoor permutations and dense cryptosystems, and achieves round complexity O(log n) based on a proof scheduling technique of Chor and Rabin [13]; the second requires a stronger hardness assumption (along with the non-black-box techniques of Barak [2]) and achieves O(1) round complexity.
On basing one-way functions on NP-hardness
In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, 2006
Cited by 35 (1 self)
We consider the question of whether it is possible to base the existence of one-way functions on NP-hardness. That is, we study the possibility of reductions from a worst-case NP-hard decision problem to the task of inverting a polynomial-time computable function. We prove two negative results: 1. For any polynomial-time computable function f: the existence of a randomized non-adaptive reduction of worst-case NP problems to the task of average-case inverting f implies that coNP ⊆ AM. It is widely believed that coNP is not contained in AM. Thus, this result may be regarded as showing that such reductions cannot exist (unless coNP ⊆ AM). This result improves previous negative results that placed coNP in non-uniform AM. 2. For any polynomial-time computable function f for which it is possible to efficiently compute preimage sizes (i.e., |f^{-1}(y)| for a given y): the existence of a randomized reduction of worst-case NP problems to the task of inverting f implies that coNP ⊆ AM. Moreover, this is also true for functions for which it is possible to verify (via an AM protocol) the approximate size of preimages (i.e., |f^{-1}(y)| for a given y). These results hold for any reduction, including adaptive ones. The previously known negative results regarding worst-case to average-case reductions were confined to non-adaptive reductions. In the course of proving the above results, two new AM protocols emerge for proving upper bounds on the sizes of NP sets. Whereas the known lower-bound protocol on set sizes by [Goldwasser-Sipser] works for any NP set, the known upper-bound protocol on set sizes by [Aiello-Håstad] works in a setting where the verifier knows a random secret element (unknown to the prover) in the NP set. The new protocols we develop here each work under different requirements than those of [Aiello-Håstad], enlarging the settings in which it is possible to prove upper bounds on NP set sizes.
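For contrast with the upper-bound protocols the abstract describes, the Goldwasser-Sipser lower-bound idea it references can be sketched roughly as follows (an illustrative simplification with made-up toy parameters, not the paper's new protocols): to support the claim |S| ≥ 2^k, the verifier sends a random pairwise-independent hash into a 2^k-size range and a random target; a set of size at least 2^k usually contains a preimage of the target, while a much smaller set usually does not.

```python
# Rough sketch of the Goldwasser-Sipser-style LOWER-bound idea for NP set
# sizes (illustrative toy only; here the "prover" simply searches S).
import random

def random_gf2_hash(nbits, k):
    """Pairwise-independent hash h(x) = Ax + b over GF(2), from nbits-bit
    inputs to k-bit outputs: each output bit is a random parity of input bits."""
    rows = [random.getrandbits(nbits) for _ in range(k)]
    b = random.getrandbits(k)
    def h(x):
        y = 0
        for i, r in enumerate(rows):
            y |= (bin(r & x).count("1") & 1) << i  # parity of selected bits
        return y ^ b
    return h

def acceptance_rate(S, k, nbits, trials=300):
    """Fraction of runs in which the prover can exhibit some s in S
    that hashes to the verifier's random k-bit target."""
    hits = 0
    for _ in range(trials):
        h = random_gf2_hash(nbits, k)
        target = random.getrandbits(k)
        if any(h(s) == target for s in S):
            hits += 1
    return hits / trials
```

With k = 4 over an 8-bit universe, a set containing all 256 elements makes the verifier accept in nearly every run, while a singleton set succeeds only about 1/16 of the time; repeating the game drives the soundness error down, which is what makes the gap usable as a size certificate.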