Results 1 - 10 of 33
Proofs of retrievability via hardness amplification
- In TCC
, 2009
"... Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the cl ..."
Abstract
-
Cited by 84 (4 self)
- Add to MetaCart
Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we:
• Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary.
• Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08].
• Build the first bounded-use scheme with information-theoretic security.
The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
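For orientation, here is a toy sketch of the kind of audit the abstract describes, in the spirit of the bounded-use setting: the client keeps MAC tags for a few randomly chosen file blocks and later challenges the server on those positions. All names and parameters are illustrative; this is not the paper's (or [JK07]'s) optimized construction, which involves considerably more machinery.

```python
# Toy bounded-use PoR-style audit: the client stores a MAC tag for a few
# randomly chosen file blocks; each audit challenges one sampled position.
# Illustrative only -- not the optimized schemes from the paper.
import hashlib
import hmac
import os
import random

BLOCK = 64  # bytes per file block (arbitrary toy parameter)

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def client_setup(data: bytes, num_audits: int, key: bytes):
    """Sample random block indices and keep one MAC tag per sampled block."""
    blocks = split_blocks(data)
    audits = []
    for _ in range(num_audits):
        idx = random.randrange(len(blocks))
        tag = hmac.new(key, idx.to_bytes(8, "big") + blocks[idx], hashlib.sha256).digest()
        audits.append((idx, tag))
    return audits  # small client state; the file itself lives on the server

def server_respond(stored: bytes, idx: int) -> bytes:
    return split_blocks(stored)[idx]

def client_verify(key: bytes, idx: int, tag: bytes, block: bytes) -> bool:
    expected = hmac.new(key, idx.to_bytes(8, "big") + block, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = os.urandom(32)
    data = os.urandom(BLOCK * 100)                  # the outsourced file F
    audits = client_setup(data, num_audits=5, key=key)
    idx, tag = audits[0]
    print(client_verify(key, idx, tag, server_respond(data, idx)))  # True if block intact
```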
On Yao’s XOR lemma
- Electronic Colloquium on Computational Complexity
, 1995
"... Abstract. A fundamental lemma of Yao states that computational weakunpredictability of Boolean predicates is amplified when the results of several independent instances are XOR together. We survey two known proofs of Yao’s Lemma and present a third alternative proof. The third proof proceeds by firs ..."
Abstract
-
Cited by 65 (6 self)
- Add to MetaCart
(Show Context)
A fundamental lemma of Yao states that the computational weak unpredictability of Boolean predicates is amplified when the results of several independent instances are XORed together. We survey two known proofs of Yao’s Lemma and present a third, alternative proof. The third proof proceeds by first proving that a function constructed by concatenating the values of the original function on several independent instances is much more unpredictable, with respect to specified complexity bounds, than the original function. This statement turns out to be easier to prove than the XOR-Lemma. Using a result of Goldreich and Levin (1989) and some elementary observations, we derive the XOR-Lemma.
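As a rough numerical companion to the statement (not to its proof): in the purely information-theoretic analogue, if each of k independent bits is guessed correctly with probability 1/2 + δ, the advantage in guessing their XOR decays like 2^{k−1} δ^k. The simulation below is illustrative only; Yao's Lemma is the much harder computational version of this phenomenon.

```python
# Heuristic companion to Yao's XOR Lemma: for truly independent per-bit
# guesses, the advantage in predicting the XOR of k weakly predictable bits
# decays exponentially in k (piling-up calculation).
import random

def xor_advantage(delta: float, k: int, trials: int = 100_000) -> float:
    """Empirical advantage of XORing per-bit guesses that are each
    correct with probability 1/2 + delta, over k independent bits."""
    correct = 0
    for _ in range(trials):
        bits = [random.getrandbits(1) for _ in range(k)]
        guesses = [b if random.random() < 0.5 + delta else 1 - b for b in bits]
        target, guess_xor = 0, 0
        for b, g in zip(bits, guesses):
            target ^= b
            guess_xor ^= g
        correct += (guess_xor == target)
    return correct / trials - 0.5

if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        # Piling-up calculation predicts advantage 2**(k-1) * delta**k.
        print(k, round(xor_advantage(0.2, k), 4), 2 ** (k - 1) * 0.2 ** k)
```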
Hardness amplification proofs require majority
- In Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC)
, 2008
"... Hardness amplification is the fundamental task of converting a δ-hard function f: {0, 1} n → {0, 1} into a (1/2 − ɛ)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ɛ, δ are small (and δ = 2 −k captures the case where f i ..."
Abstract
-
Cited by 20 (5 self)
- Add to MetaCart
Hardness amplification is the fundamental task of converting a δ-hard function f : {0,1}^n → {0,1} into a (1/2 − ε)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ε, δ are small (and δ = 2^{−k} captures the case where f is worst-case hard). Achieving ε = 1/n^{ω(1)} is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + ε fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 − δ fraction of the inputs. We focus on the case where every D ∈ D makes non-adaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results:
1. The circuits in D “can be used” to compute the majority function on 1/ε bits. In particular, these circuits have large depth when ε ≤ 1/polylog n.
2. The circuits in D must make Ω(log(1/δ)/ε^2) oracle queries.
Both our bounds on the depth and on the number of queries are tight up to constant factors.
Uniform direct product theorems: Simplified, unified and derandomized
, 2007
"... The classical Direct-Product Theorem for circuits says that if a Boolean function f: {0, 1} n → {0, 1} is somewhat hard to compute on average by small circuits, then the corresponding k-wise direct product function f k (x1,..., xk) = (f(x1),..., f(xk)) (where each xi ∈ {0, 1} n) is significantly ha ..."
Abstract
-
Cited by 19 (4 self)
- Add to MetaCart
(Show Context)
The classical Direct-Product Theorem for circuits says that if a Boolean function f : {0,1}^n → {0,1} is somewhat hard to compute on average by small circuits, then the corresponding k-wise direct product function f^k(x_1, ..., x_k) = (f(x_1), ..., f(x_k)) (where each x_i ∈ {0,1}^n) is significantly harder to compute on average by slightly smaller circuits. We prove a fully uniform version of the Direct-Product Theorem with information-theoretically optimal parameters, up to constant factors. Namely, we show that for given k and ε, there is an efficient randomized algorithm A with the following property. Given a circuit C that computes f^k on at least an ε fraction of inputs, the algorithm A outputs with probability at least 3/4 a list of O(1/ε) circuits such that at least one of the circuits on the list computes f on more than a 1 − δ fraction of inputs, for δ = O((log 1/ε)/k); moreover, each output circuit is an AC^0 circuit (of size poly(n, k, log 1/δ, 1/ε)), with oracle access to the circuit C. Using the Goldreich-Levin decoding algorithm [GL89], we also get a fully uniform version of Yao’s XOR Lemma [Yao82] with optimal parameters, up to constant factors. Our results simplify and improve those in [IJK06]. Our main result may be viewed as an efficient approximate, local, list-decoding algorithm for direct product codes.
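To fix notation, here is a minimal sketch of the k-wise direct product function itself, i.e., the object that the paper's algorithm list-decodes; the decoding algorithm A is not reproduced here, and the stand-in function used below is purely illustrative.

```python
# Minimal illustration of the k-wise direct product
#   f^k(x_1, ..., x_k) = (f(x_1), ..., f(x_k))
# from the abstract.  The construction is just coordinatewise evaluation;
# the paper's contribution is an efficient list-decoder for this "code".
from typing import Callable, Sequence, Tuple

def direct_product(f: Callable[[int], int], k: int) -> Callable[[Sequence[int]], Tuple[int, ...]]:
    """Return f^k, mapping a k-tuple of inputs to the tuple of f-values."""
    def f_k(xs: Sequence[int]) -> Tuple[int, ...]:
        assert len(xs) == k
        return tuple(f(x) for x in xs)
    return f_k

if __name__ == "__main__":
    parity = lambda x: bin(x).count("1") % 2   # stand-in for a hard Boolean f
    f3 = direct_product(parity, 3)
    print(f3([0b101, 0b111, 0b001]))           # (0, 1, 1)
```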
Chernoff-type Direct Product Theorems
- In Proceedings of the Twenty-Seventh Annual International Cryptology Conference (CRYPTO ’07)
, 2007
"... Abstract. Consider a challenge-response protocol where the probability of a correct response is at least α for a legitimate user, and at most β < α for an attacker. One example is a CAPTCHA challenge, where a human should have a significantly higher chance of answering a single challenge (e.g., u ..."
Abstract
-
Cited by 15 (5 self)
- Add to MetaCart
(Show Context)
Consider a challenge-response protocol where the probability of a correct response is at least α for a legitimate user, and at most β < α for an attacker. One example is a CAPTCHA challenge, where a human should have a significantly higher chance of answering a single challenge (e.g., uncovering a distorted letter) than an attacker; another example is an argument system without perfect completeness. A natural approach to boost the gap between legitimate users and attackers is to issue many challenges, and accept if the response is correct for more than a threshold fraction, for a threshold chosen between α and β. We give the first proof that parallel repetition with thresholds improves the security of such protocols. We do this with a very general result about an attacker’s ability to solve a large fraction of many independent instances of a hard problem, showing a Chernoff-like convergence of the fraction solved incorrectly to the probability of failure for a single instance.
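A back-of-the-envelope illustration of the gap amplification being discussed: with n independent challenges and an acceptance threshold strictly between β and α, both error probabilities are binomial tails. The numbers below are made up; the theorem's content is that a correlated attacker cannot do substantially better than these independent-repetition bounds.

```python
# Threshold parallel repetition, idealized: issue n independent challenges
# and accept if more than a t fraction are answered correctly, beta < t < alpha.
# With independent answers both error probabilities are Chernoff-like tails.
from math import comb

def accept_prob(p: float, n: int, threshold: float) -> float:
    """Probability of more than threshold*n successes out of n independent trials."""
    t = int(threshold * n)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

if __name__ == "__main__":
    alpha, beta, thr = 0.9, 0.6, 0.75   # single-shot rates: legitimate user vs. attacker
    for n in (10, 50, 200):
        print(n,
              f"legit user rejected: {1 - accept_prob(alpha, n, thr):.2e}",
              f"attacker accepted:   {accept_prob(beta, n, thr):.2e}")
```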
Verifying and decoding in constant depth
- In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing
, 2007
"... We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver’s computation to the (potentially malicious) sender. This idea was recently intro ..."
Abstract
-
Cited by 15 (4 self)
- Add to MetaCart
(Show Context)
We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver’s computation to the (potentially malicious) sender. This idea was recently introduced by Goldwasser et al. [14] in the area of program checking. A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k + O(1) rounds, where the verifier is in NC^0. That is, each round of the verifier’s computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel time. Another, less immediate sender-receiver setting arises in considering error correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
Constructive proofs of concentration bounds
- In Proceedings of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and 14th International Workshop on Randomization and Computation (APPROX-RANDOM ’10)
, 2010
"... We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly con-centrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, ..."
Abstract
-
Cited by 14 (0 self)
- Add to MetaCart
(Show Context)
We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive in the following sense: if the sum of the given random variables is not concentrated around the expectation, then we can efficiently find (with high probability) a subset of the random variables that are statistically dependent. As simple corollaries, we also get the concentration bounds for [0, 1]-valued random variables and Azuma’s inequality for martingales [Azu67]. We interpret the Chernoff-Hoeffding bound as a statement about Direct Product Theorems. Informally, a Direct Product Theorem says that the complexity of solving all k instances of a hard problem increases exponentially with k; a Threshold Direct Product Theorem says that it is exponentially hard in k to solve even a significant fraction of the given k instances of a hard problem. We show the equivalence between optimal Direct Product Theorems and optimal Threshold Direct Product Theorems. As an application of this connection, we get the Chernoff bound for expander walks [Gil98] from the (simpler to prove) hitting property [AKS87], as well as an optimal (in a certain range of parameters) Threshold Direct Product Theorem for weakly verifiable puzzles from the optimal Direct Product Theorem [CHS05]. We also get a simple constructive proof of Unger’s result [Ung09] saying that XOR Lemmas imply Threshold Direct Product Theorems.
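For reference, the bound in question in its standard additive (Hoeffding) form: for independent X_i ∈ {0,1} each with mean μ, Pr[|(1/n)ΣX_i − μ| ≥ ε] ≤ 2·exp(−2ε²n). Below is a quick empirical sanity check of the statement itself (not of the paper's combinatorial proof); the parameters are arbitrary.

```python
# Empirical sanity check of the Chernoff-Hoeffding bound: the probability
# that the empirical average of n independent {0,1} variables with mean mu
# deviates from mu by at least eps is at most 2 * exp(-2 * eps**2 * n).
import math
import random

def deviation_prob(n: int, mu: float, eps: float, trials: int = 20_000) -> float:
    bad = 0
    for _ in range(trials):
        s = sum(random.random() < mu for _ in range(n))
        bad += abs(s / n - mu) >= eps
    return bad / trials

if __name__ == "__main__":
    n, mu, eps = 200, 0.5, 0.1
    print("empirical deviation probability:", deviation_prob(n, mu, eps))
    print("Hoeffding upper bound:          ", 2 * math.exp(-2 * eps**2 * n))
```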
A parallel repetition theorem for any interactive argument
- ECCC, TR09-027 (Revision 1), 2009
"... Abstract — The question of whether or not parallel repetition reduces the soundness error is a fundamental question in the theory of protocols. While parallel repetition reduces (at an exponential rate) the error in interactive proofs and (at a weak exponential rate) in special cases of interactive ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
The question of whether or not parallel repetition reduces the soundness error is a fundamental question in the theory of protocols. While parallel repetition reduces (at an exponential rate) the error in interactive proofs and (at a weak exponential rate) in special cases of interactive arguments (e.g., 3-message protocols — Bellare, Impagliazzo and Naor [FOCS ’97], and public-coin protocols — Håstad, Pass, Pietrzak and Wikström [Manuscript ’08]), Bellare et al. gave an example of interactive arguments for which parallel repetition does not reduce the soundness error at all. We show that by slightly modifying any interactive argument, in a way that preserves its completeness and only slightly deteriorates its soundness, we get a protocol for which parallel repetition does reduce the error at a weak exponential rate. In this modified version, ...
Delegating computation reliably: Paradigms and Constructions
, 2009
"... In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for we ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present: • A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's ...
Uniform hardness amplification in NP via monotone codes
- Electronic Colloquium on Computational Complexity
, 2006
"... We consider the problem of amplifying uniform average-case hardness of languages in NP, where hardness is with respect to BPP algorithms. We introduce the notion of monotone errorcorrecting codes, and show that hardness amplification for NP is essentially equivalent to constructing efficiently local ..."
Abstract
-
Cited by 6 (1 self)
- Add to MetaCart
We consider the problem of amplifying uniform average-case hardness of languages in NP, where hardness is with respect to BPP algorithms. We introduce the notion of monotone error-correcting codes, and show that hardness amplification for NP is essentially equivalent to constructing efficiently locally encodable and locally list-decodable monotone codes. The previous hardness amplification results for NP [Tre03, Tre05] focused on giving a direct construction of some locally encodable/decodable monotone codes, running into the problem of large amounts of nonuniformity used by the decoding algorithm. In contrast, we propose an indirect approach to constructing locally encodable/decodable monotone codes, combining the uniform Direct Product Lemma of [IJK06] and arbitrary, not necessarily locally encodable, monotone codes. The latter codes have fewer restrictions, and so may be easier to construct. We study what parameters are achievable by monotone codes in general, giving negative and positive results. We present two constructions of monotone codes. Our first code is a uniquely decodable code based on the Majority function, and has an efficient decoding algorithm. Our second code is combinatorially list-decodable, but we do not have an efficient decoding algorithm. In conjunction with an appropriate Direct Product Lemma, our first code yields uniform hardness amplification for NP from inverse polynomial to constant average-case hardness. Our second code, even with a brute-force decoding algorithm, yields further hardness amplification to 1/2 − log^{−Ω(1)} n. Together, these give an alternative proof of Trevisan’s result [Tre03, Tre05]. Getting any non-brute-force decoding algorithm for our second code would imply improved parameters for the problem of hardness amplification in NP.
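To make the central definition concrete, here is a toy encoder, assuming "monotone code" means that every codeword bit is a monotone Boolean function of the message bits (here, Majority over a fixed subset), so raising a message bit from 0 to 1 can never lower a codeword bit. This only illustrates the monotonicity constraint; the paper's actual Majority-based and list-decodable constructions, and their parameters, are different and are not reproduced here.

```python
# Toy "monotone code": each codeword bit is Majority of the message bits on
# a fixed subset of coordinates, hence a monotone function of the message.
# Illustrates only the monotonicity property, not the paper's constructions.
import random

def encode(msg, subsets):
    """Each output bit = Majority of the message restricted to one subset."""
    return [int(sum(msg[i] for i in s) * 2 > len(s)) for s in subsets]

if __name__ == "__main__":
    k, m = 15, 40
    subsets = [random.sample(range(k), 7) for _ in range(m)]   # 7-wise majorities
    msg = [random.getrandbits(1) for _ in range(k)]
    code1 = encode(msg, subsets)
    j = msg.index(0) if 0 in msg else 0
    msg2 = msg[:]
    msg2[j] = 1                                                # raise one message bit
    code2 = encode(msg2, subsets)
    print(all(b1 <= b2 for b1, b2 in zip(code1, code2)))       # monotone: True
```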