Results 1 - 10 of 65
Pseudorandom generators without the XOR Lemma (Extended Abstract)
1998
"... Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2 O(n) and having circuit complexity 2 n) (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of har ..."
Abstract
-
Cited by 138 (23 self)
- Add to MetaCart
Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson [NW94] generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs. Our first result is that when (a modified version of) the Nisan-Wigderson generator construction is applied with a "mildly" hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification. We then show that in the Impagliazzo-Wigderson construction only the first hardness-amplification phase (encoding with a multivariate polynomial) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding...
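For reference, the Impagliazzo-Wigderson implication that this abstract takes as its starting point, written in one line (the notation here is mine, not the paper's):

    \exists\, L \in \mathrm{DTIME}\bigl(2^{O(n)}\bigr) \text{ requiring circuits of size } 2^{\Omega(n)} \text{ (for all but finitely many } n) \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}.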
Extractors and Pseudorandom Generators
Journal of the ACM, 1999
"... We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain. ..."
Abstract
-
Cited by 104 (6 self)
- Add to MetaCart
(Show Context)
We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain.
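For orientation, the standard (seeded) extractor definition that the abstract alludes to, with notation assumed here rather than taken from the paper: Ext is a (k, ε)-extractor if, given any source with min-entropy at least k and a short truly random seed, its output is close to uniform,

    \mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m, \qquad H_\infty(X) \ge k \;\Longrightarrow\; \Delta\bigl(\mathrm{Ext}(X, U_d),\, U_m\bigr) \le \varepsilon,

where U_d and U_m are uniform distributions and Δ denotes statistical distance.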
Proofs of retrievability via hardness amplification
In TCC, 2009
"... Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the cl ..."
Abstract
-
Cited by 84 (4 self)
- Add to MetaCart
Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we
• Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary.
• Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08].
• Build the first bounded-use scheme with information-theoretic security.
The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
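To make the audit setting concrete, here is a minimal spot-check sketch in Python. It is not the Juels-Kaliski scheme nor any construction from this paper, just a toy bounded-use illustration of "client keeps a short key, server proves it still holds challenged blocks"; all names are hypothetical.

    # Toy spot-check audit (illustration only, not a real PoR scheme).
    import hmac, hashlib, os, random

    BLOCK = 1024

    def client_setup(file_bytes):
        """Client: split the file into blocks and MAC each one; only the key stays local."""
        key = os.urandom(32)
        blocks = [file_bytes[i:i + BLOCK] for i in range(0, len(file_bytes), BLOCK)]
        tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
                for i, b in enumerate(blocks)]
        return key, blocks, tags   # blocks and tags are handed to the server

    def audit(key, server_blocks, server_tags, t=10):
        """Client: challenge t random indices; accept only if every (block, tag) verifies."""
        n = len(server_blocks)
        for i in random.sample(range(n), min(t, n)):
            expected = hmac.new(key, str(i).encode() + server_blocks[i], hashlib.sha256).digest()
            if not hmac.compare_digest(expected, server_tags[i]):
                return False
        return True

    # An honest server passes; a server that dropped blocks is caught with noticeable probability.
    key, blocks, tags = client_setup(os.urandom(10 * BLOCK))
    assert audit(key, blocks, tags)

A real PoR additionally requires an extraction guarantee (from any server that passes audits with noticeable probability, the whole file can be recovered), which is where the paper's connection to hardness amplification enters.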
Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
SIAM Journal on Computing, 2004
"... A strong direct product theorem says that if we want to compute k independent instances of a function, using less than k times the resources needed for one instance, then our overall success probability will be exponentially small in k. We establish such theorems for the classical as well as quantum ..."
Abstract
-
Cited by 65 (13 self)
- Add to MetaCart
A strong direct product theorem says that if we want to compute k independent instances of a function, using less than k times the resources needed for one instance, then our overall success probability will be exponentially small in k. We establish such theorems for the classical as well as quantum query complexity of the OR function. This implies slightly weaker direct product results for all total functions. We prove a similar result for quantum communication protocols computing k instances of the Disjointness function. Our direct product theorems...
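Schematically, a strong direct product theorem of the kind described has the shape below; the resource bound γkT, the single-instance success threshold, and the constant in the exponent are illustrative placeholders, not the paper's exact parameters:

    \mathrm{Succ}_{T}(f) \le 2/3 \;\Longrightarrow\; \mathrm{Succ}_{\gamma k T}\bigl(f^{(k)}\bigr) \le 2^{-\Omega(k)}, \qquad f^{(k)}(x_1,\dots,x_k) = \bigl(f(x_1),\dots,f(x_k)\bigr),

where Succ_R(g) denotes the best success probability of any algorithm for g using at most R resources (queries, or communication).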
Towards Proving Strong Direct Product Theorems
Computational Complexity, 2001
"... A fundamental question of complexity theory is the direct product question. Namely weather the assumption that a function f is hard on average for some computational class (meaning that every algorithm from the class has small advantage over random guessing when computing f) entails that computin ..."
Abstract
-
Cited by 51 (1 self)
- Add to MetaCart
(Show Context)
A fundamental question of complexity theory is the direct product question: namely, whether the assumption that a function f is hard on average for some computational class (meaning that every algorithm from the class has small advantage over random guessing when computing f) entails that computing f on k independently chosen inputs is exponentially harder on average. A famous example is Yao's XOR-lemma [Yao82], which gives such a result for boolean circuits. This question has also been studied in other computational models, such as decision trees [NRS94], and communication complexity [PRW97]. In Yao's XOR-lemma one assumes f is hard on average for circuits of size s and concludes that f^{⊕k}(x_1, ..., x_k) = f(x_1) ⊕ ... ⊕ f(x_k) is essentially exponentially harder on average for circuits of size s'. All known proofs of this lemma [Lev85, Imp95, IW97, GNW95] have the feature that s' < s. In words, the circuit which attempts to compute f^{⊕k} is smaller than the circuit whic...
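For concreteness, one standard quantitative form of Yao's XOR Lemma (the exact size loss s' differs across the cited proofs, so treat the parameters as indicative): if every circuit of size s errs on f on at least a δ fraction of inputs, then for every circuit C of size s' < s,

    \Pr_{x_1,\dots,x_k}\bigl[C(x_1,\dots,x_k) = f(x_1)\oplus\cdots\oplus f(x_k)\bigr] \;\le\; \tfrac{1}{2} + \tfrac{1}{2}(1-2\delta)^k + \varepsilon,

where the slack ε governs how much smaller s' must be.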
Pseudorandomness and average-case complexity via uniform reductions
In Proceedings of the 17th Annual IEEE Conference on Computational Complexity, 2002
"... Impagliazzo and Wigderson (36th FOCS, 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP � = BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudor ..."
Abstract
-
Cited by 51 (7 self)
- Add to MetaCart
Impagliazzo and Wigderson (FOCS 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:
◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊈ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t'(n) = t(n)^{Ω(1)}.
◦ We exhibit a PSPACE-complete self-correctable and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via "black-box" uniform reductions.
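Spelled out in symbols (notation assumed; the abstract leaves the infinitely-often versus almost-everywhere qualifiers implicit), the first bullet's worst-case to average-case connection reads:

    \mathrm{EXP} \not\subseteq \mathrm{BPTIME}(t(n)) \;\Longrightarrow\; \exists L \in \mathrm{EXP}:\ \Pr_x[A(x) = L(x)] \le \tfrac{1}{2} + \tfrac{1}{t'(n)} \text{ for every } \mathrm{BPTIME}(t'(n)) \text{ algorithm } A, \quad t'(n) = t(n)^{\Omega(1)}.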
Hardness Amplification within NP
2002
"... In this paper we investigate the following question: If NP is slightly hard on average, is it very hard on average? We show the answer is yes; if there is a function in NP which is infinitely often balanced and (1-1/poly(n))-hard for circuits of polynomial size, then there is a function in NP which ..."
Abstract
-
Cited by 48 (1 self)
- Add to MetaCart
In this paper we investigate the following question: If NP is slightly hard on average, is it very hard on average? We show the answer is yes; if there is a function in NP which is infinitely often balanced and (1-1/poly(n))-hard for circuits of polynomial size, then there is a function in NP which is infinitely often (1/2 + n^{-1/2+ε})-hard for circuits of polynomial size. Our proof technique is to generalize the Yao XOR Lemma, allowing us to characterize nearly tightly the hardness of a composite function g(f(x_1), ..., f(x_n)) in terms of: (i) the original hardness of f, and (ii) the expected bias of the function g when subjected to random restrictions. The computational result we prove essentially matches an information-theoretic bound.
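As a sanity check on the "expected bias" characterization, take g to be parity on k bits, under conventions assumed here for illustration (each coordinate is left free by the random restriction independently with probability roughly 2δ, where δ is the hardness of f; a constant function has bias 1 and a non-constant parity has bias 0). The restricted parity is constant only when every coordinate is fixed, so

    \mathrm{ExpBias}(\oplus_k) = (1-2\delta)^k,

which is consistent with the familiar XOR-Lemma hardness bound 1/2 + (1-2δ)^k/2; the exact normalization in the paper may differ.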
List-Decoding Using The XOR Lemma
"... We show that Yao's XOR Lemma, and its essentially equivalent rephrasing as a Direct Product Lemma, can be re-interpreted as a way of obtaining error-correcting codes with good list-decoding algorithms from error-correcting codes having weak unique-decoding algorithms. To get codes with good rat ..."
Abstract
-
Cited by 39 (4 self)
- Add to MetaCart
(Show Context)
We show that Yao's XOR Lemma, and its essentially equivalent rephrasing as a Direct Product Lemma, can be re-interpreted as a way of obtaining error-correcting codes with good list-decoding algorithms from error-correcting codes having weak unique-decoding algorithms. To get codes with good rate and efficient list-decoding algorithms one needs a proof of the Direct Product Lemma that, respectively, is strongly derandomized, and uses very small advice. We show how to reduce advice in Impagliazzo's proof of the Direct Product Lemma for pairwise independent inputs, which leads to error-correcting codes with O(n²) encoding length, Õ(n²) encoding time, and probabilistic Õ(n) list-decoding time. (Note that the decoding time is sub-linear in the length of the encoding.) Back to complexity theory, our advice-efficient proof of Impagliazzo's "hard-core set" results yields a (weak) uniform version of O'Donnell's results on amplification of hardness in NP. We show that if there is a problem in NP that cannot be solved by BPP algorithms on more than a 1 − 1/(log n)^c fraction of inputs, then there is a problem in NP that cannot be solved by BPP algorithms on more than a 3/4 + 1/(log n)^c fraction of inputs, where c > 0 is an absolute constant.
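Concretely, the code implicit in the Direct Product Lemma maps the truth table of f to the truth table of its k-wise direct product (notation assumed here):

    \mathrm{Enc}(f)(x_1,\dots,x_k) \;=\; \bigl(f(x_1),\dots,f(x_k)\bigr), \qquad f\colon \{0,1\}^n \to \{0,1\},

so an algorithm that computes f^k correctly on an ε fraction of tuples is exactly a received word with ε agreement, and list-decoding returns a small list of circuits, one of which agrees with f on most inputs.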
Using Nondeterminism to Amplify Hardness
2004
"... We revisit the problem of hardness amplification in N P, as recently studied by O’Donnell (STOC ‘02). We prove that if N P has a balanced function f such that any circuit of size s(n) fails to compute f on a 1 / poly(n) fraction of inputs, then N P has a function f ′ such that any circuit of size s ..."
Abstract
-
Cited by 35 (6 self)
- Add to MetaCart
We revisit the problem of hardness amplification in NP, as recently studied by O'Donnell (STOC '02). We prove that if NP has a balanced function f such that any circuit of size s(n) fails to compute f on a 1/poly(n) fraction of inputs, then NP has a function f' such that any circuit of size s'(n) = s(√n)^{Ω(1)} fails to compute f' on a 1/2 − 1/s'(n) fraction of inputs. In particular,
1. If s(n) = n^{ω(1)}, we amplify to hardness 1/2 − 1/n^{ω(1)}.
2. If s(n) = 2^{n^{Ω(1)}}, we amplify to hardness 1/2 − 1/2^{n^{Ω(1)}}.
3. If s(n) = 2^{Ω(n)}, we amplify to hardness 1/2 − 1/2^{Ω(√n)}.
These improve the results of O'Donnell, which only amplified to 1/2 − 1/√n. O'Donnell also proved that no construction of a certain general form could amplify beyond 1/2 − 1/n. We bypass this barrier by using both derandomization and nondeterminism in the construction of f'. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that f is balanced are necessary for "black-box" hardness amplification procedures (such as ours).
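To see how item 3 above follows from the general statement, plug s(n) = 2^{Ω(n)} into s'(n) = s(√n)^{Ω(1)}:

    s'(n) = s(\sqrt{n})^{\Omega(1)} = \bigl(2^{\Omega(\sqrt{n})}\bigr)^{\Omega(1)} = 2^{\Omega(\sqrt{n})},

so the amplified hardness is 1/2 − 1/2^{Ω(√n)}; items 1 and 2 follow by the same substitution.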
Approximate list-decoding of direct product . . .
"... Given a message msg ∈ {0, 1} N, its k-wise direct product encoding is the sequence of k-tuples (msg(i1),..., msg(ik)) over all possible k-tuples of indices (i1,..., ik) ∈ {1,..., N} k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, gi ..."
Abstract
-
Cited by 33 (8 self)
- Add to MetaCart
(Show Context)
Given a message msg ∈ {0,1}^N, its k-wise direct product encoding is the sequence of k-tuples (msg(i_1), ..., msg(i_k)) over all possible k-tuples of indices (i_1, ..., i_k) ∈ {1, ..., N}^k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, given oracle access to a word which agrees with a k-wise direct product encoding of some message msg ∈ {0,1}^N in at least an ε ≥ poly(1/k) fraction of positions, our algorithm outputs a list of poly(1/ε) strings that contains at least one string msg' which is equal to msg in all but at most a k^{−Ω(1)} fraction of positions. The decoding is local in that our algorithm outputs a list of Boolean circuits so that the j-th bit of the i-th output string can be computed by running the i-th circuit on input j. The running time of the algorithm is polynomial in log N and 1/ε. In general, when ε > e^{−k^α} for a sufficiently small constant α > 0, we get a randomized approximate list-decoding algorithm that runs in time quasipolynomial in 1/ε, i.e., (1/ε)^{poly log(1/ε)}. As an application of our decoding algorithm, we get uniform hardness amplification for P^{NP||}, the class of languages reducible to NP through one round of parallel oracle queries: if there is a language in P^{NP||} that cannot be decided by any BPP algorithm on more than a 1 − 1/n^{Ω(1)} fraction of inputs, then there is another language in P^{NP||} that cannot be decided by any BPP algorithm on more than a 1/2 + 1/n^{ω(1)} fraction of inputs.
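A tiny Python sketch of the k-wise direct product encoding defined above, just to fix the indexing (function and variable names are made up for illustration; indices are 0-based here rather than 1-based as in the abstract):

    from itertools import product

    def direct_product_encode(msg, k):
        """k-wise direct product encoding of msg in {0,1}^N:
        one position per k-tuple of indices, holding the k selected bits."""
        N = len(msg)
        return {idxs: tuple(msg[i] for i in idxs)
                for idxs in product(range(N), repeat=k)}

    # A 3-bit message with k = 2: the encoding has N^k = 9 positions.
    code = direct_product_encode((1, 0, 1), k=2)
    assert code[(0, 2)] == (1, 1)

The approximate list-decoder of the abstract works in the reverse direction: from a word agreeing with such an encoding on only an ε fraction of tuples, it locally reconstructs a short list of candidate messages.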