Results 1 - 10 of 12
Parallel repetition of computationally sound protocols revisited
- In 4th TCC, Springer, Lecture Notes in Computer Science, 2007
Abstract - Cited by 20 (2 self)
Parallel repetition is well known to reduce the error probability at an exponential rate for single- and multi-prover interactive proofs. Bellare, Impagliazzo and Naor (1997) show that this is also true for protocols where the soundness only holds against computationally bounded provers (e.g. interactive arguments) if the protocol has at most three rounds. On the other hand, for four rounds they give a protocol where this is no longer the case: the error probability does not decrease below some constant even if the protocol is repeated a polynomial number of times. Unfortunately, this protocol is not very convincing, as the communication complexity of each instance of the protocol grows linearly with the number of repetitions, and for such protocols the error does not even decrease for some types of interactive proofs. Noticing this, Bellare et al. construct a (quite artificial) oracle relative to which a four-round protocol exists whose communication complexity does not depend on the number of parallel repetitions. This shows that there is no “black-box” error reduction theorem for four-round protocols. In this paper we give the first computationally sound protocol where k-fold parallel repetition does not decrease the error probability below some constant for any polynomial k (and where the communication complexity does not depend on k). The protocol has eight rounds and uses the universal arguments of Barak and Goldreich (2001). We also give another four-round protocol relative to an oracle; unlike the artificial oracle of Bellare et al., here we just need a generic group. This group can then potentially be instantiated with some real group satisfying some well-defined hardness assumptions (we do not know of any candidate for such a group at the moment).
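The ideal exponential decay these results concern can be sanity-checked numerically. A minimal sketch (illustrative names and parameters, not the paper's protocol or its counterexample) assuming each of k independent repetitions is fooled with probability δ, so the parallel error behaves as δ^k:

```python
import random

def fools_verifier(delta: float) -> bool:
    """One protocol instance: a cheating prover succeeds with probability delta."""
    return random.random() < delta

def k_fold_success(delta: float, k: int) -> bool:
    """Parallel repetition with independent instances: the prover must fool all k verifiers."""
    return all(fools_verifier(delta) for _ in range(k))

def estimate(delta: float, k: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the k-fold soundness error."""
    return sum(k_fold_success(delta, k) for _ in range(trials)) / trials

# With truly independent repetitions the error decays as delta**k;
# the papers above ask when this still holds for computationally sound protocols.
for k in (1, 2, 4):
    print(k, estimate(0.5, k), 0.5 ** k)
```

The point of the counterexamples surveyed here is precisely that for computationally sound protocols the provers' strategies need not be independent across the repeated instances, so this ideal decay can fail.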
Constructive proofs of concentration bounds
- In Proceedings of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and 14th International Workshop on Randomization and Computation (APPROX-RANDOM ’10), 2010
Abstract - Cited by 14 (0 self)
We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive in the following sense: if the sum of the given random variables is not concentrated around the expectation, then we can efficiently find (with high probability) a subset of the random variables that are statistically dependent. As simple corollaries, we also get the concentration bounds for [0, 1]-valued random variables and Azuma’s inequality for martingales [Azu67]. We interpret the Chernoff-Hoeffding bound as a statement about Direct Product Theorems. Informally, a Direct Product Theorem says that the complexity of solving all k instances of a hard problem increases exponentially with k; a Threshold Direct Product Theorem says that it is exponentially hard in k to solve even a significant fraction of the given k instances of a hard problem. We show the equivalence between optimal Direct Product Theorems and optimal Threshold Direct Product Theorems. As an application of this connection, we get the Chernoff bound for expander walks [Gil98] from the (simpler to prove) hitting property [AKS87], as well as an optimal (in a certain range of parameters) Threshold Direct Product Theorem for weakly verifiable puzzles from the optimal Direct Product Theorem [CHS05]. We also get a simple constructive proof of Unger’s result [Ung09] saying that XOR Lemmas imply Threshold Direct Product Theorems.
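The bound the abstract describes can be checked empirically. A hedged sketch (illustrative function names, using the standard Hoeffding form of the bound rather than the paper's combinatorial proof) comparing the observed deviation probability of a sum of independent 0/1 variables against 2·exp(−2nε²):

```python
import math
import random

def hoeffding_bound(n: int, eps: float) -> float:
    """Hoeffding form: Pr[|S - E[S]| >= eps*n] <= 2*exp(-2*n*eps**2)."""
    return 2 * math.exp(-2 * n * eps * eps)

def empirical_tail(n: int, p: float, eps: float, trials: int = 20_000) -> float:
    """Fraction of trials in which the sum of n Bernoulli(p) bits deviates
    from its mean n*p by at least eps*n."""
    count = 0
    for _ in range(trials):
        s = sum(random.random() < p for _ in range(n))
        if abs(s - n * p) >= eps * n:
            count += 1
    return count / trials

# The empirical tail probability should sit below the Hoeffding bound.
n, p, eps = 200, 0.5, 0.1
print(empirical_tail(n, p, eps), "<=", hoeffding_bound(n, eps))
```

The same experiment with deliberately correlated bits (e.g. copying one coin n times) shows the concentration failing, which matches the abstract's constructive angle: lack of concentration witnesses statistical dependence.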
An efficient parallel repetition theorem
Abstract - Cited by 4 (2 self)
We present a general parallel-repetition theorem with an efficient reduction. As a corollary of this theorem we establish that parallel repetition reduces the soundness error at an exponential rate in any public-coin argument, and more generally, any argument where the verifier’s messages, but not necessarily its decision to accept or reject, can be efficiently simulated with noticeable probability.
Tight Parallel Repetition Theorems for Public-coin Arguments
- Electronic Colloquium on Computational Complexity, Report No. 109, 2009
Abstract - Cited by 2 (0 self)
Following Håstad et al. [HPPW08], we study parallel repetition theorems for public-coin interactive arguments and their generalizations. We obtain the following results: 1. We show that the reduction of Håstad et al. [HPPW08] actually gives a tight direct product theorem for public-coin interactive arguments. That is, n-fold parallel repetition reduces the soundness error from δ to δ^n. The crux of our improvement is a new analysis that avoids using Raz’s Sampling Lemma, which is the key to the previous results. 2. We give a new reduction to strengthen the direct product theorem of Håstad et al. for arguments with extendable and simulatable verifiers. We show that n-fold parallel repetition reduces the soundness error from δ to δ^{n/2}, which is almost tight. In particular, we remove the dependency on the number of rounds in the bound, and as a consequence, extend the “concurrent” repetition theorem of Wikström [Wik09] to this model. 3. We give a simple and generic reduction which shows that tight direct product theorems imply almost-tight Chernoff-type theorems. The reduction extends our results to Chernoff-type theorems, and gives an alternative proof of the Chernoff-type theorem of Impagliazzo et al. [IJK07] for weakly-verifiable puzzles. 4. As an additional contribution, we observe that the reduction of Pass and Venkitasubramaniam [PV07] for constant-round public-coin arguments gives tight parallel repetition theorems for threshold verifiers, who accept when more than a certain number of repetitions accept.
Counterexamples to Hardness Amplification Beyond Negligible, 2012
Abstract - Cited by 1 (0 self)
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^{−n} probability, or at least to some fixed/known negligible such as n^{−log n} in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC ’95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.
An efficient concurrent repetition theorem, 2009
Abstract - Cited by 1 (1 self)
Håstad et al. (2008) prove, using Raz’s lemma (STOC ’95), the first efficient parallel repetition theorem for protocols with a non-constant number of rounds, for a natural generalization of public-coin protocols. They show that a parallel prover that convinces a fraction 1 − γ of the embedded verifiers of a k-wise repeated m-message verifier can be turned into a prover with error probability 1 − γ − O(m·√(log(1/ε)/k)). This improves previous results of Impagliazzo et al. (Crypto 2007) and Pass and Venkitasubramaniam (STOC 2007) that study the constant-round case. We prove a generalization of Raz’s Lemma to random processes that allows us to improve the analysis of the reduction of Håstad et al. in the public-coin case to 1 − γ − O(√(log(1/ε)/k)), i.e., we remove the dependence on the number of rounds completely, and thus the restriction to settings where k > m². An important implication of the strengthened parallel repetition theorem is the first efficient concurrent repetition theorem for protocols with a non-constant number of rounds. In concurrent repetition, the verifiers execute completely independently and only report their final decision, i.e., the prover chooses arbitrarily in which order it interacts with the individual verifiers. This should be contrasted with parallel repetition, where the verifiers are synchronized in each round.
Succinct arguments from . . ., 2012
Abstract
Succinct arguments of knowledge are computationally-sound proofs of knowledge for NP where the verifier’s running time is independent of the time complexity t of the nondeterministic NP machine M that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs Ω(t) space in order to run in quasilinear time (i.e., time t · poly(k)), regardless of the space complexity s of the machine M. We say that a succinct argument is complexity preserving if the prover runs in time t · poly(k) and space s · poly(k) and the verifier runs in time |x| · poly(k) when proving and verifying that a t-time s-space random-access machine nondeterministically accepts an input x. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time t · polylog(t) and space s · polylog(t) and the verifier runs in time |x| · polylog(t). (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with ...
The Knowledge Tightness of Parallel Zero-Knowledge
Abstract
We investigate the concrete security of black-box zero-knowledge protocols when composed in parallel. As our main result, we give essentially tight upper and lower bounds (up to logarithmic factors in the security parameter) on the following measure of security (closely related to knowledge tightness): the number of queries made by black-box simulators when zero-knowledge protocols are composed in parallel. As a function of the number of parallel sessions, k, and the round complexity of the protocol, m, the bound is roughly k^{1/m}. We also construct a modular procedure to amplify simulator-query lower bounds (as above) to generic lower bounds in the black-box concurrent zero-knowledge setting. As a demonstration of our techniques, we give a self-contained proof of the Ω(log n / log log n) lower bound for the round complexity of black-box concurrent zero-knowledge protocols, first shown by Canetti, Kilian, Petrank and Rosen (STOC 2002). Additionally, we give a new lower bound regarding constant-round black-box concurrent zero-knowledge protocols: the running time of the black-box simulator must be at least n^{Ω(log n)}.