A parallel repetition theorem for any interactive argument (2009)

by Iftach Haitner
Venue:In FOCS
Results 1 - 10 of 12

Parallel repetition of computationally sound protocols revisited

by Krzysztof Pietrzak, Douglas Wikström - IN 4TH TCC, SPRINGER, LECTURE NOTES IN COMPUTER SCIENCE , 2007
Abstract - Cited by 20 (2 self)
Parallel repetition is well known to reduce the error probability at an exponential rate for single- and multi-prover interactive proofs. Bellare, Impagliazzo and Naor (1997) show that this is also true for protocols where the soundness only holds against computationally bounded provers (e.g. interactive arguments) if the protocol has at most three rounds. On the other hand, for four rounds they give a protocol where this is no longer the case: the error probability does not decrease below some constant even if the protocol is repeated a polynomial number of times. Unfortunately, this protocol is not very convincing, as the communication complexity of each instance of the protocol grows linearly with the number of repetitions, and for such protocols the error does not even decrease for some types of interactive proofs. Noticing this, Bellare et al. construct a (quite artificial) oracle relative to which a four-round protocol exists whose communication complexity does not depend on the number of parallel repetitions. This shows that there is no “black-box” error reduction theorem for four-round protocols. In this paper we give the first computationally sound protocol where k-fold parallel repetition does not decrease the error probability below some constant for any polynomial k (and where the communication complexity does not depend on k). The protocol has eight rounds and uses the universal arguments of Barak and Goldreich (2001). We also give another four-round protocol relative to an oracle; unlike the artificial oracle of Bellare et al., we just need a generic group. This group can then potentially be instantiated with some real group satisfying some well-defined hardness assumptions (we do not know of any candidate for such a group at the moment).

Citation Context

...of the repetitions, where δ is the soundness of a single execution. Canetti et al. [4] give a quantitatively much better reduction for two-round protocols. Pass and Venkitasubramaniam prove a parallel-repetition theorem for constant-round public-coin protocols. Håstad et al. [15] give a parallel repetition theorem for a class of computationally sound protocols which as important special cases includes (not necessarily constant-round) public-coin protocols and three-round protocols. Chung and Liu [5] improve upon [15] and prove a tight bound: k-fold repetition reduces the error from δ to δ^k. Haitner [14] considers a different way of doing parallel repetition. He first changes the verifier in the protocol at hand: in each round, the verifier can (with some noticeable probability) terminate and accept. Although this increases the error probability of the protocol, Haitner shows that parallel repetition of this modified protocol always reduces the soundness error at an exponential rate.

Verifiers with a Secret. Usually, the verifier in an interactive protocol is not supposed to hold any secret information, and so its strategy is efficiently computable. Bellare et al. [2] observe that when considering...
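The independent-repetition intuition behind the δ-to-δ^k rate quoted above can be sketched with a tiny Monte Carlo experiment (Python; the function name and parameters are hypothetical, for illustration only). It assumes each embedded verifier is fooled independently with probability δ, which is exactly the assumption that is non-trivial to justify for computationally sound protocols:

```python
import random

def parallel_repetition_error(delta, k, trials, seed=0):
    """Estimate the cheating probability of k-fold parallel repetition
    under the *idealized* assumption that each of the k embedded
    verifiers is fooled independently with probability delta.
    Under independence the error is exactly delta**k; for interactive
    arguments this independence is what the cited works must argue."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # The prover "wins" only if it fools all k verifiers at once.
        if all(rng.random() < delta for _ in range(k)):
            wins += 1
    return wins / trials
```

For delta = 0.5 and k = 3 the estimate converges to 0.5³ = 0.125, matching the δ-to-δ^k behavior quoted above.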

Constructive proofs of concentration bounds

by Russell Impagliazzo, Valentine Kabanets - In Proceedings of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and 14th International Workshop on Randomization and Computation (APPROX-RANDOM ’10 , 2010
Abstract - Cited by 14 (0 self)
We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive in the following sense: if the sum of the given random variables is not concentrated around the expectation, then we can efficiently find (with high probability) a subset of the random variables that are statistically dependent. As simple corollaries, we also get the concentration bounds for [0, 1]-valued random variables and Azuma’s inequality for martingales [Azu67]. We interpret the Chernoff-Hoeffding bound as a statement about Direct Product Theorems. Informally, a Direct Product Theorem says that the complexity of solving all k instances of a hard problem increases exponentially with k; a Threshold Direct Product Theorem says that it is exponentially hard in k to solve even a significant fraction of the given k instances of a hard problem. We show the equivalence between optimal Direct Product Theorems and optimal Threshold Direct Product Theorems. As an application of this connection, we get the Chernoff bound for expander walks [Gil98] from the (simpler to prove) hitting property [AKS87], as well as an optimal (in a certain range of parameters) Threshold Direct Product Theorem for weakly verifiable puzzles from the optimal Direct Product Theorem [CHS05]. We also get a simple constructive proof of Unger’s result [Ung09] saying that XOR Lemmas imply Threshold Direct Product Theorems.
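The concentration statement can be checked empirically with a short script (Python; names are illustrative, not from the paper). It compares the observed upper-tail frequency of a sum of independent Bernoulli variables against the standard Chernoff-Hoeffding bound exp(−2nε²):

```python
import math
import random

def hoeffding_tail(n, p, eps, trials, seed=1):
    """Empirically estimate Pr[(1/n) * sum(X_i) >= p + eps] for i.i.d.
    Bernoulli(p) variables X_1..X_n, and return it together with the
    Chernoff-Hoeffding upper bound exp(-2 * n * eps**2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))  # bools sum as ints
        if s / n >= p + eps:
            hits += 1
    return hits / trials, math.exp(-2 * n * eps ** 2)
```

For n = 100, p = 0.5, ε = 0.1 the bound is exp(−2) ≈ 0.135, while the observed tail frequency is considerably smaller, as the bound is only an upper bound on the true tail probability.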

Citation Context

...get some version of a TDPT for 2-prover games, using the best available DPT for such games [Raz98, Hol07, Rao08]; however, a better TDPT for 2-prover games is known [Rao08]. Also, as shown by Haitner [Hai09], for a wide class of cryptographic protocols (interactive arguments), even if the original protocol doesn’t satisfy any DPT, there is a slight modification of the protocol satisfying some weak DPT. T...

An efficient parallel repetition theorem

by Johan Håstad, Rafael Pass, Douglas Wikström, et al.
Abstract - Cited by 4 (2 self)
We present a general parallel-repetition theorem with an efficient reduction. As a corollary of this theorem we establish that parallel repetition reduces the soundness error at an exponential rate in any public-coin argument, and more generally, any argument where the verifier’s messages, but not necessarily its decision to accept or reject, can be efficiently simulated with noticeable probability.

Tight Parallel Repetition Theorems for Public-coin Arguments

by Kai-min Chung, Feng-hao Liu - ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 109 , 2009
Abstract - Cited by 2 (0 self)
Following Håstad et al. [HPPW08], we study parallel repetition theorems for public-coin interactive arguments and their generalizations. We obtain the following results: 1. We show that the reduction of Håstad et al. [HPPW08] actually gives a tight direct product theorem for public-coin interactive arguments. That is, n-fold parallel repetition reduces the soundness error from δ to δ^n. The crux of our improvement is a new analysis that avoids using Raz’s Sampling Lemma, which is the key to the previous results. 2. We give a new reduction to strengthen the direct product theorem of Håstad et al. for arguments with extendable and simulatable verifiers. We show that n-fold parallel repetition reduces the soundness error from δ to δ^(n/2), which is almost tight. In particular, we remove the dependency on the number of rounds in the bound, and as a consequence, extend the “concurrent” repetition theorem of Wikström [Wik09] to this model. 3. We give a simple and generic reduction which shows that tight direct product theorems imply almost-tight Chernoff-type theorems. The reduction extends our results to Chernoff-type theorems, and gives an alternative proof to the Chernoff-type theorem of Impagliazzo et al. [IJK07] for weakly-verifiable puzzles. 4. As an additional contribution, we observe that the reduction of Pass and Venkitasubramaniam [PV07] for constant-round public-coin arguments gives tight parallel repetition theorems for threshold verifiers, who accept when more than a certain number of repetitions accept.
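Contribution 4 above concerns threshold verifiers. A minimal simulation (Python; the function name and parameters are hypothetical) shows how the error of such a verifier behaves under the same idealized independence assumption, where it reduces to a binomial tail:

```python
import random

def threshold_verifier_error(delta, n, threshold, trials, seed=2):
    """Estimate the soundness error of a threshold verifier that
    accepts when more than `threshold` of its n parallel repetitions
    accept, assuming (idealized) each repetition is fooled
    independently with probability delta.  Under independence this
    is simply a binomial tail probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        accepted = sum(rng.random() < delta for _ in range(n))
        if accepted > threshold:
            wins += 1
    return wins / trials
```

For delta = 0.5, n = 10, threshold = 7, the idealized error is Pr[Bin(10, 0.5) ≥ 8] = 56/1024 ≈ 0.055, and the simulation converges to that value.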

Citation Context

...tisfying a “computational” simulatability property and demonstrate that parallel repetition reduces the soundness error at a nearly optimal rate also for such protocols. The elegant work of Haitner [Hai09] considers a certain class of protocols with “random-terminating” verifiers and demonstrates that parallel repetition reduces the soundness error at an exponential rate for such protocols; random-termi...

Counterexamples to Hardness Amplification Beyond Negligible

by Yevgeniy Dodis, Abhishek Jain, Tal Moran, Daniel Wichs , 2012
Abstract - Cited by 1 (0 self)
If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions/relations, weakly-verifiable puzzles, and signatures. Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^(−n) probability, or at least to some fixed/known negligible such as n^(−log n) in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC ’95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.

An efficient concurrent repetition theorem

by Douglas Wikström , 2009
Abstract - Cited by 1 (1 self)
Håstad et al. (2008) prove, using Raz’s lemma (STOC ’95), the first efficient parallel repetition theorem for protocols with a non-constant number of rounds, for a natural generalization of public-coin protocols. They show that a parallel prover that convinces a fraction 1 − γ of the embedded verifiers of a k-wise repeated m-message verifier can be turned into a prover with error probability 1 − γ − O(m·√(−log(ε)/k)). This improves previous results of Impagliazzo et al. (Crypto 2007) and Pass and Venkitasubramaniam (STOC 2007) that study the constant-round case. We prove a generalization of Raz’s Lemma to random processes that allows us to improve the analysis of the reduction of Håstad et al. in the public-coin case to 1 − γ − O(√(−log(ε)/k)), i.e., we remove the dependence on the number of rounds completely, and thus the restriction to settings where k > m². An important implication of the strengthened parallel repetition theorem is the first efficient concurrent repetition theorem for protocols with a non-constant number of rounds. In concurrent repetition, the verifiers execute completely independently and only report their final decision, i.e., the prover chooses arbitrarily in which order it interacts with the individual verifiers. This should be contrasted with parallel repetition, where the verifiers are synchronized in each round.

General Hardness Amplification of Predicates and Puzzles

by Thomas Holenstein, et al. , 2010
Abstract - Cited by 1 (0 self)
Abstract not found

Succinct arguments from . . .

by Nir Bitansky, Alessandro Chiesa , 2012
Abstract
Succinct arguments of knowledge are computationally-sound proofs of knowledge for NP where the verifier’s running time is independent of the time complexity t of the nondeterministic NP machine M that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs Ω(t) space in order to run in quasilinear time (i.e., time t · poly(k)), regardless of the space complexity s of the machine M. We say that a succinct argument is complexity preserving if the prover runs in time t · poly(k) and space s · poly(k) and the verifier runs in time |x| · poly(k) when proving and verifying that a t-time s-space random-access machine nondeterministically accepts an input x. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques: (1) We construct a one-round succinct MIP of knowledge, where each prover runs in time t · polylog(t) and space s · polylog(t) and the verifier runs in time |x| · polylog(t). (2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with

Academia Sinica

by Kai-min Chung, Rafail Ostrovsky, Rafael Pass, Ivan Visconti
Abstract
Abstract—Resettable-security, introduced by Canetti,

The Knowledge Tightness of Parallel Zero-Knowledge

by Kai-min Chung, Rafael Pass, Wei-lung Dustin Tseng
Abstract
Abstract. We investigate the concrete security of black-box zero-knowledge protocols when composed in parallel. As our main result, we give essentially tight upper and lower bounds (up to logarithmic factors in the security parameter) on the following measure of security (closely related to knowledge tightness): the number of queries made by black-box simulators when zero-knowledge protocols are composed in parallel. As a function of the number of parallel sessions, k, and the round complexity of the protocol, m, the bound is roughly k^(1/m). We also construct a modular procedure to amplify simulator-query lower bounds (as above), to generic lower bounds in the black-box concurrent zero-knowledge setting. As a demonstration of our techniques, we give a self-contained proof of the o(log n / log log n) lower bound for the round complexity of black-box concurrent zero-knowledge protocols, first shown by Canetti, Kilian, Petrank and Rosen (STOC 2002). Additionally, we give a new lower bound regarding constant-round black-box concurrent zero-knowledge protocols: the running time of the black-box simulator must be at least n^(Ω(log n)).

Citation Context

..., V^(k∗)) is still complete. It remains to show that V^(k∗) is “sound” against the rewinding S; that is, on input x ∉ L, S is unlikely to ... The term “random termination” was first used by Haitner [Hai09], but the random-termination verifier we considered already appeared in the earlier work of [CKPR01]. We use a well-known technique (see for example [GK96b, CKPR01]) to generate fresh independent ran...
