Results 1–9 of 9
Hardness vs. randomness
 Journal of Computer and System Sciences
, 1994
Abstract

Cited by 291 (30 self)
We present a simple new construction of a pseudorandom bit generator, based on the constant-depth generators of [N]. It stretches a short string of truly random bits into a long string that looks random to any algorithm from a complexity class C (e.g., P, NC, PSPACE, ...) using an arbitrary function that is hard for C. This construction reveals an equivalence between the problem of proving lower bounds and the problem of generating good pseudorandom sequences. Our construction has many consequences. The most direct one is that efficient deterministic simulation of randomized algorithms is possible under much weaker assumptions than previously known. The efficiency of the simulations depends on the strength of the assumptions, and may achieve P = BPP. We believe that our results are very strong evidence that the gap between randomized and deterministic complexity is not large. Using the known lower bounds for constant-depth circuits, our construction yields an unconditionally proven pseudorandom generator for constant-depth circuits. As an application of this generator we characterize the power of NP with a random oracle.
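The stretching step described in this abstract can be sketched concretely. The toy sketch below (not the paper's actual construction; all names are illustrative) applies a placeholder "hard" function to overlapping subsets of the seed arranged in a combinatorial design, here the rows and columns of a square grid, which pairwise intersect in at most one position:

```python
def grid_design(m):
    """Toy combinatorial design: index m*m seed bits as an m-by-m grid;
    the rows and columns give 2m subsets of size m whose pairwise
    intersections contain at most one element."""
    rows = [list(range(i * m, (i + 1) * m)) for i in range(m)]
    cols = [list(range(j, m * m, m)) for j in range(m)]
    return rows + cols

def nw_generator(hard_f, seed, subsets):
    """Stretch the seed: one output bit per subset, obtained by applying
    the (supposedly) hard function to the seed bits in that subset."""
    return [hard_f([seed[j] for j in S]) for S in subsets]

# parity stands in for a genuinely hard function; it is NOT hard, and is
# used here only to make the sketch runnable
parity = lambda bits: sum(bits) % 2

seed = [1, 0, 1, 0, 0, 1, 1, 1, 0]                    # 9 truly random bits
output = nw_generator(parity, seed, grid_design(3))   # 6 output bits
```

Because the subsets barely overlap, each output bit depends on the others only through a few shared seed positions; that is what lets the hardness of the function translate into pseudorandomness.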
How to Recycle Random Bits
, 1989
Abstract

Cited by 183 (11 self)
We show that modified versions of the linear congruential generator and the shift register generator are provably good for amplifying the correctness of a probabilistic algorithm. More precisely, if r random bits are needed for a BPP algorithm to be correct with probability at least 2/3, then O(r + k^2) bits are needed to improve this probability to 1 − 2^(−k). We also present a different pseudorandom generator that is optimal, up to a constant factor, in this regard: it uses only O(r + k) bits to improve the probability to 1 − 2^(−k). This generator is based on random walks on expanders. Our results do not depend on any unproven assumptions. Next we show that our modified versions of the shift register and linear congruential generators can be used to sample from distributions using, in the limit, the information-theoretic lower bound on random bits. 1. Introduction. Randomness plays a vital role in almost all areas of computer science, both in theory and in...
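The expander-walk idea in this abstract can be illustrated with a small sketch. The graph below is a toy circulant graph standing in for a genuine expander (a real construction would use, e.g., a Ramanujan graph), and the algorithm and names are illustrative, not the paper's:

```python
import random

def walk_amplify(bpp_algo, x, start_seed, steps, graph):
    """Run the algorithm on seeds taken along a random walk and take a
    majority vote. Each step costs only log2(degree) fresh random bits,
    instead of a whole new r-bit seed per repetition."""
    v = start_seed                            # one truly random seed
    votes = []
    for _ in range(steps):
        votes.append(bpp_algo(x, v))
        v = random.choice(graph[v])           # 2 random bits per step here
    return max(set(votes), key=votes.count)   # majority answer

# Toy 4-regular circulant graph on 8 "seeds"; a stand-in for an expander.
toy_graph = {v: [(v + d) % 8 for d in (1, 2, 6, 7)] for v in range(8)}

def toy_bpp(x, seed):
    """Pretend BPP algorithm: decides parity of x correctly on every
    seed except the single 'bad' seed 3."""
    correct = (x % 2 == 0)
    return correct if seed != 3 else not correct
```

Starting from a random seed and walking a few steps, the majority vote fails only if the walk spends most of its time at bad seeds, which the expansion of a true expander makes exponentially unlikely in the number of steps.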
Software Protection and Simulation on Oblivious RAMs
, 1993
Abstract

Cited by 169 (14 self)
Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper we provide theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, it is independent of the actual input.) What is the slowdown in the running time of any machine, if it is required to be oblivious? In 1979 Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, online, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We s...
Foundations of Cryptography (Fragments of a Book)
, 1995
Abstract

Cited by 141 (21 self)
this paper date to early 1983. Yet the paper, rejected three times from major conferences, first appeared in public only in 1985, concurrently with the paper of Babai [B85].) A restricted form of interactive proofs, known as Arthur-Merlin games, was introduced by Babai [B85]. (The restricted form turned out to be equivalent in power; see Section [mssng(effp.sec)].) The interactive proof for Graph Non-Isomorphism is due to Goldreich, Micali and Wigderson. The concept of zero-knowledge was introduced by Goldwasser, Micali and Rackoff, in the same paper quoted above [R85]. Their paper also contained a perfect zero-knowledge proof for Quadratic Non-Residuosity. The perfect zero-knowledge proof system for Graph Isomorphism is due to Goldreich, Micali and Wigderson [W86]. The latter paper is also the source of the zero-knowledge proof systems for all languages in NP, using any (non-uniformly) one-way function. (Brassard and Crépeau have later constructed alternative zero-knowledge proof systems for NP, using a stronger intractability assumption, specifically the intractability of the Quadratic Residuosity Problem.) The cryptographic applications of zero-knowledge proofs were the very motivation for their presentation in [R85]. Zero-knowledge proofs were applied to solve cryptographic problems in [FRW85] and [CF85]. However, many more applications became possible once it was shown how to construct zero-knowledge proof systems for every language in NP. In particular, general methodologies for the construction of cryptographic protocols have appeared in [GMW86, GW87]
Hard-Core Distributions for Somewhat Hard Problems
 In 36th Annual Symposium on Foundations of Computer Science
, 1995
Abstract

Cited by 118 (15 self)
Consider a decision problem that cannot be 1 − δ approximated by circuits of a given size, in the sense that any such circuit fails to give the correct answer on at least a δ fraction of instances. We show that for any such problem there is a specific "hard-core" set of inputs which is at least a δ fraction of all inputs and on which no circuit of a slightly smaller size can get even a small advantage over a random guess. More generally, our argument holds for any nonuniform model of computation closed under majorities. We apply this result to get a new proof of the Yao XOR lemma [Y], and to get a related XOR lemma for inputs that are only k-wise independent. 1. Introduction. If you have a difficult computational problem, is it always the case that several independent instances of the problem are proportionately harder than a single instance? In particular, if any algorithm taking less than R resources has failure probability at least δ for a particular problem on a certai...
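The quantitative intuition behind XOR amplification can be seen in the idealized independent-bit case (the "piling-up" computation). The sketch below shows only that ideal case; the XOR lemma itself is the much subtler statement that a similar bound holds against all slightly smaller circuits:

```python
from functools import reduce

def xor_advantage(advantages):
    """For independent bits guessed independently with advantages a_i
    (advantage = 2*Pr[correct] - 1), XORing the guesses predicts the
    XOR of the bits with advantage equal to the product of the a_i.
    This is the ideal independent case, not the circuit lemma itself."""
    return reduce(lambda x, y: x * y, advantages, 1.0)

# three instances, each predictable with probability 0.6 (advantage 0.2):
adv = xor_advantage([0.2, 0.2, 0.2])   # 0.008, i.e. success probability 0.504
```

The product decays exponentially in the number of instances, which is why XORing independent copies is a natural route to hardness amplification.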
Adaptively Secure Multiparty Computation
, 1996
Abstract

Cited by 77 (8 self)
A fundamental problem in designing secure multiparty protocols is how to deal with adaptive adversaries (i.e., adversaries that may choose the corrupted parties during the course of the computation), in a setting where the channels are insecure and secure communication is achieved by cryptographic primitives based on the computational limitations of the adversary.
Studies in Secure Multiparty Computation and Applications
, 1996
Abstract

Cited by 76 (8 self)
Consider a set of parties who do not trust each other, nor the channels by which they communicate. Still, the parties wish to correctly compute some common function of their local inputs, while keeping their local data as private as possible. This, in a nutshell, is the problem of secure multiparty computation. This problem is fundamental in cryptography and in the study of distributed computations. It takes many different forms, depending on the underlying network, on the function to be computed, and on the amount of distrust the parties have in each other and in the network. We study several aspects of secure multiparty computation. We first present new definitions of this problem in various settings. Our definitions draw from previous ideas and formalizations, and incorporate aspects that were previously overlooked. Next we study the problem of dealing with adaptive adversaries. (Adaptive adversaries are adversaries that corrupt parties during the course of the computation, based on...
Private Information Storage (Extended Abstract)
, 1996
Abstract

Cited by 17 (2 self)
We consider the setting of hiding information through the use of multiple databases that do not interact with one another. In this setting, there are k ≥ 2 "databases" which can be accessed by some "users". Users do not keep any state information, but wish to access O(n) bits of "data". Previously, efficient solutions were given in this setting for retrieval of data, where a user achieves this by interacting with all the databases. We consider the case of both writing and reading. While the case of reading was well studied before, the case of writing was previously completely open. In this paper, we show how to implement both read and write operations, with the following strong security guarantee: all the information about the read/write operation is information-theoretically hidden from all the databases (i.e., both the value of the bit and the address of the bit). As in the previous papers, we measure, as a function of k and n, the amount of communication ...
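For the reading side, the classic two-database information-theoretic scheme gives the flavor of how an index can be hidden from each database individually. This is a sketch assuming two non-communicating copies of the same n-bit database, with O(n) communication (the papers in this line of work are concerned with doing much better); the function names are illustrative:

```python
import secrets

def xor_bits(db, idxs):
    """XOR of the database bits at the given indices."""
    r = 0
    for j in idxs:
        r ^= db[j]
    return r

def pir_read(db1, db2, i, n):
    """Read bit i without revealing i to either database: each database
    sees a uniformly random subset of indices, so individually learns
    nothing; the two answers XOR to db[i]."""
    S = {j for j in range(n) if secrets.randbits(1)}  # uniform random subset
    a1 = xor_bits(db1, S)          # database 1 answers on S
    a2 = xor_bits(db2, S ^ {i})    # database 2 answers on S with i flipped
    return a1 ^ a2                 # the two query sets differ exactly in i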
MicroPayments via Efficient CoinFlipping (Extended Abstract)
"... We present an authenticated coinflipping protocol and its proof of security. We demonstrate the applicability of our scheme for online randomized micropayment protocols. We also review some essential aspects of other micropayment proposals (including SET, PayWord and MicroMint, PayTree, NetCheque ..."
Abstract
 Add to MetaCart
We present an authenticated coinflipping protocol and its proof of security. We demonstrate the applicability of our scheme for online randomized micropayment protocols. We also review some essential aspects of other micropayment proposals (including SET, PayWord and MicroMint, PayTree, NetCheque, NetCash, Agora, NetCard, CAFE, Pederson's proposal, microiKP, Milicent, proposal of JareckiOdlyzko, proposal of Yacobi, SVP, DigiCash, Rivest's "Lottery tickets as MicroCash" and Wheeler's proposal) and compare it with our scheme.