Results 1–10 of 12
Parallel repetition of computationally sound protocols revisited
 In 4th TCC, Springer, Lecture Notes in Computer Science, 2007
"... Parallel repetition is well known to reduce the error probability at an exponential rate for single and multiprover interactive proofs. Bellare, Impagliazzo and Naor (1997) show that this is also true for protocols where the soundness only holds against computationally bounded provers (e.g. inte ..."
Abstract

Cited by 20 (2 self)
Parallel repetition is well known to reduce the error probability at an exponential rate for single- and multi-prover interactive proofs. Bellare, Impagliazzo and Naor (1997) show that this is also true for protocols where the soundness only holds against computationally bounded provers (e.g. interactive arguments) if the protocol has at most three rounds. On the other hand, for four rounds they give a protocol where this is no longer the case: the error probability does not decrease below some constant even if the protocol is repeated a polynomial number of times. Unfortunately, this protocol is not very convincing, as the communication complexity of each instance of the protocol grows linearly with the number of repetitions, and for such protocols the error does not even decrease for some types of interactive proofs. Noticing this, Bellare et al. construct a (quite artificial) oracle relative to which a four-round protocol exists whose communication complexity does not depend on the number of parallel repetitions. This shows that there is no “black-box” error reduction theorem for four-round protocols. In this paper we give the first computationally sound protocol where k-fold parallel repetition does not decrease the error probability below some constant for any polynomial k (and where the communication complexity does not depend on k). The protocol has eight rounds and uses the universal arguments of Barak and Goldreich (2001). We also give another four-round protocol relative to an oracle; unlike the artificial oracle of Bellare et al., we just need a generic group. This group can then potentially be instantiated with some real group satisfying some well-defined hardness assumptions (we do not know of any candidate for such a group at the moment).
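The baseline this abstract contrasts against is the statistically sound setting, where repeating independent instances drives the soundness error down as δ^k. A minimal Monte Carlo sketch of that baseline (all function names are hypothetical; the independence it models is precisely what can fail for computationally sound protocols like the eight-round one above):

```python
import random

def cheating_prover_wins(delta: float) -> bool:
    """Toy model: one protocol instance in which the best cheating
    strategy convinces the verifier with probability delta."""
    return random.random() < delta

def parallel_repetition_wins(delta: float, k: int) -> bool:
    """k-fold parallel repetition with an AND-verifier: the prover must
    win all k instances. Under the independence assumption the error is
    delta**k; the paper shows this can fail to hold for computationally
    sound protocols."""
    return all(cheating_prover_wins(delta) for _ in range(k))

def estimate_error(delta: float, k: int, trials: int = 200_000) -> float:
    """Monte Carlo estimate of the repeated verifier's soundness error."""
    random.seed(0)
    wins = sum(parallel_repetition_wins(delta, k) for _ in range(trials))
    return wins / trials

# With delta = 0.5 and k = 4, the independent-instance model
# gives an error close to 0.5**4 = 0.0625.
```

The point of the paper is that no such δ^k decay can be taken for granted once soundness is only computational: the estimate above models honest-to-goodness independent instances, which a cheating prover against an argument system need not respect.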
Constructive proofs of concentration bounds
 In Proceedings of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and 14th International Workshop on Randomization and Computation (APPROX-RANDOM '10), 2010
"... We give a simple combinatorial proof of the ChernoffHoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, ..."
Abstract

Cited by 14 (0 self)
We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive in the following sense: if the sum of the given random variables is not concentrated around the expectation, then we can efficiently find (with high probability) a subset of the random variables that are statistically dependent. As simple corollaries, we also get the concentration bounds for [0, 1]-valued random variables and Azuma's inequality for martingales [Azu67]. We interpret the Chernoff-Hoeffding bound as a statement about Direct Product Theorems. Informally, a Direct Product Theorem says that the complexity of solving all k instances of a hard problem increases exponentially with k; a Threshold Direct Product Theorem says that it is exponentially hard in k to solve even a significant fraction of the given k instances of a hard problem. We show the equivalence between optimal Direct Product Theorems and optimal Threshold Direct Product Theorems. As an application of this connection, we get the Chernoff bound for expander walks [Gil98] from the (simpler to prove) hitting property [AKS87], as well as an optimal (in a certain range of parameters) Threshold Direct Product Theorem for weakly verifiable puzzles from the optimal Direct Product Theorem [CHS05]. We also get a simple constructive proof of Unger's result [Ung09] saying that XOR Lemmas imply Threshold Direct Product Theorems.
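The statement being proved can be checked numerically. A small sketch (hypothetical helper names) comparing the empirical deviation probability of the mean of n independent {0, 1}-valued variables against the standard two-sided Hoeffding bound 2·exp(−2nε²):

```python
import math
import random

def empirical_tail(n: int, p: float, eps: float, trials: int = 20_000) -> float:
    """Fraction of trials in which the mean of n independent Bernoulli(p)
    variables deviates from p by at least eps."""
    random.seed(1)
    hits = 0
    for _ in range(trials):
        mean = sum(random.random() < p for _ in range(n)) / n
        if abs(mean - p) >= eps:
            hits += 1
    return hits / trials

def hoeffding_bound(n: int, eps: float) -> float:
    """Two-sided Hoeffding bound: Pr[|mean - p| >= eps] <= 2*exp(-2*n*eps^2)."""
    return 2 * math.exp(-2 * n * eps * eps)

# The empirical tail should sit below the bound, and the bound
# decays exponentially as n grows.
```

This is only the classical quantitative form of the bound; the paper's contribution is a counting-based, constructive proof of it, not a different inequality.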
An efficient parallel repetition theorem
"... We present a general parallelrepetition theorem with an efficient reduction. As a corollary of this theorem we establish that parallel repetition reduces the soundness error at an exponential rate in any publiccoin argument, and more generally, any argument where the verifier’s messages, but not ..."
Abstract

Cited by 4 (2 self)
We present a general parallel-repetition theorem with an efficient reduction. As a corollary of this theorem we establish that parallel repetition reduces the soundness error at an exponential rate in any public-coin argument, and more generally, any argument where the verifier's messages, but not necessarily its decision to accept or reject, can be efficiently simulated with noticeable probability.
Tight Parallel Repetition Theorems for Public-coin Arguments
 Electronic Colloquium on Computational Complexity, Report No. 109, 2009
"... Following Hastad et al. [HPPW08], we study parallel repetition theorems for publiccoin interactive arguments and their generalizations. We obtain the following results: 1. We show that the reduction of Hastad et al. [HPPW08] actually gives a tight direct product theorem for publiccoin interactive ..."
Abstract

Cited by 2 (0 self)
Following Håstad et al. [HPPW08], we study parallel repetition theorems for public-coin interactive arguments and their generalizations. We obtain the following results: 1. We show that the reduction of Håstad et al. [HPPW08] actually gives a tight direct product theorem for public-coin interactive arguments. That is, n-fold parallel repetition reduces the soundness error from δ to δ^n. The crux of our improvement is a new analysis that avoids using Raz's Sampling Lemma, which is the key to the previous results. 2. We give a new reduction to strengthen the direct product theorem of Håstad et al. for arguments with extendable and simulatable verifiers. We show that n-fold parallel repetition reduces the soundness error from δ to δ^(n/2), which is almost tight. In particular, we remove the dependency on the number of rounds in the bound, and as a consequence, extend the “concurrent” repetition theorem of Wikström [Wik09] to this model. 3. We give a simple and generic reduction which shows that tight direct product theorems imply almost-tight Chernoff-type theorems. The reduction extends our results to Chernoff-type theorems, and gives an alternative proof to the Chernoff-type theorem of Impagliazzo et al. [IJK07] for weakly-verifiable puzzles. 4. As an additional contribution, we observe that the reduction of Pass and Venkitasubramaniam [PV07] for constant-round public-coin arguments gives tight parallel repetition theorems for threshold verifiers, who accept when more than a certain number of repetitions accept.
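Result 4 concerns threshold verifiers, which accept when more than a certain number of the n repetitions accept. A toy independent-instance sketch (hypothetical names; for actual arguments the theorems above require a reduction, not an independence assumption) showing how the threshold trades off against the direct-product case, which corresponds to threshold = n − 1:

```python
import random

def threshold_verifier_error(delta: float, n: int, threshold: int,
                             trials: int = 100_000) -> float:
    """Toy independent-instance model: each of n repetitions is fooled
    independently with probability delta; the threshold verifier accepts
    when MORE than `threshold` repetitions accept. Returns the estimated
    soundness error of the repeated verifier."""
    random.seed(2)
    fooled = 0
    for _ in range(trials):
        accepts = sum(random.random() < delta for _ in range(n))
        if accepts > threshold:
            fooled += 1
    return fooled / trials

# The direct-product (AND) verifier is the special case threshold = n - 1:
# it accepts only when all n repetitions accept, so its error in this
# model is about delta**n. Lowering the threshold makes the verifier
# easier to fool, which is the Chernoff-type trade-off.
```

In the independent model the error of the threshold verifier is simply a binomial tail; the content of the theorems above is that computationally bounded cheating provers can be forced into (almost) the same tail behavior.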
Counterexamples to Hardness Amplification Beyond Negligible
, 2012
"... If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated on ..."
Abstract

Cited by 1 (0 self)
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^(−n) probability, or at least to some fixed/known negligible such as n^(−log n) in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC '95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.
An efficient concurrent repetition theorem
, 2009
"... H˚astad et al. (2008) prove, using Raz’s lemma (STOC ’95) the first efficient parallel repetition theorem for protocols with a nonconstant number of rounds, for a natural generalization of publiccoin protocols. They show that a parallel prover that convinces a fraction 1 − γ of the embedded verifi ..."
Abstract

Cited by 1 (1 self)
Håstad et al. (2008) prove, using Raz's lemma (STOC '95), the first efficient parallel repetition theorem for protocols with a non-constant number of rounds, for a natural generalization of public-coin protocols. They show that a parallel prover that convinces a fraction 1 − γ of the embedded verifiers of a k-wise repeated m-message verifier can be turned into a prover with error probability 1 − γ − O(m · √(−log(ε)/k)). This improves previous results of Impagliazzo et al. (Crypto 2007) and Pass and Venkitasubramaniam (STOC 2007) that study the constant-round case. We prove a generalization of Raz's Lemma to random processes that allows us to improve the analysis of the reduction of Håstad et al. in the public-coin case to 1 − γ − O(√(−log(ε)/k)), i.e., we remove the dependence on the number of rounds completely, and thus the restriction to settings where k > m². An important implication of the strengthened parallel repetition theorem is the first efficient concurrent repetition theorem for protocols with a non-constant number of rounds. In concurrent repetition, the verifiers execute completely independently and only report their final decision, i.e., the prover chooses arbitrarily in which order it interacts with the individual verifiers. This should be contrasted with parallel repetition, where the verifiers are synchronized in each round.
Succinct arguments from . . .
, 2012
"... Succinct arguments of knowledge are computationallysound proofs of knowledge for NP where the verifier’s running time is independent of the time complexity t of the nondeterministic NP machine M that decides the given language. Existing succinct argument constructions are, typically, based on techn ..."
Abstract
Succinct arguments of knowledge are computationally sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity t of the nondeterministic NP machine M that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs Ω(t) space in order to run in quasilinear time (i.e., time t · poly(k)), regardless of the space complexity s of the machine M. We say that a succinct argument is complexity preserving if the prover runs in time t · poly(k) and space s · poly(k) and the verifier runs in time |x| · poly(k) when proving and verifying that a t-time s-space random-access machine nondeterministically accepts an input x. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time t · polylog(t) and space s · polylog(t) and the verifier runs in time |x| · polylog(t). (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with
The Knowledge Tightness of Parallel ZeroKnowledge
"... Abstract. We investigate the concrete security of blackbox zeroknowledge protocols when composed in parallel. As our main result, we give essentially tight upper and lower bounds (up to logarithmic factors in the security parameter) on the following measure of security (closely related to knowledge ..."
Abstract
Abstract. We investigate the concrete security of black-box zero-knowledge protocols when composed in parallel. As our main result, we give essentially tight upper and lower bounds (up to logarithmic factors in the security parameter) on the following measure of security (closely related to knowledge tightness): the number of queries made by black-box simulators when zero-knowledge protocols are composed in parallel. As a function of the number of parallel sessions, k, and the round complexity of the protocol, m, the bound is roughly k^(1/m). We also construct a modular procedure to amplify simulator-query lower bounds (as above), to generic lower bounds in the black-box concurrent zero-knowledge setting. As a demonstration of our techniques, we give a self-contained proof of the Ω(log n / log log n) lower bound for the round complexity of black-box concurrent zero-knowledge protocols, first shown by Canetti, Kilian, Petrank and Rosen (STOC 2002). Additionally, we give a new lower bound regarding constant-round black-box concurrent zero-knowledge protocols: the running time of the black-box simulator must be at least n^(Ω(log n)).