Results 1–10 of 17
Lower bounds for non-black-box zero knowledge
 In 44th FOCS
, 2003
"... We show new lower bounds and impossibility results for general (possibly nonblackbox) zeroknowledge proofs and arguments. Our main results are that, under reasonable complexity assumptions: 1. There does not exist a tworound zeroknowledge proof system with perfect completeness for an NPcomplet ..."
Abstract

Cited by 32 (8 self)
We show new lower bounds and impossibility results for general (possibly non-black-box) zero-knowledge proofs and arguments. Our main results are that, under reasonable complexity assumptions: 1. There does not exist a two-round zero-knowledge proof system with perfect completeness for an NP-complete language. The previous impossibility result for two-round zero knowledge, by Goldreich and Oren (J. Cryptology, 1994), was only for the case of auxiliary-input zero-knowledge proofs and arguments. 2. There does not exist a constant-round zero-knowledge strong proof or argument of knowledge (as defined by Goldreich (2001)) for a nontrivial language. 3. There does not exist a constant-round public-coin proof system for a nontrivial language that is resettable zero knowledge. This result also extends to bounded-resettable zero knowledge, in which the number of resets is a priori bounded by a polynomial in the input length and prover-to-verifier communication.
An unconditional study of computational zero knowledge
 SIAM Journal on Computing
, 2004
"... We prove a number of general theorems about ZK, the class of problems possessing (computational) zeroknowledge proofs. Our results are unconditional, in contrast to most previous works on ZK, which rely on the assumption that oneway functions exist. We establish several new characterizations of ZK ..."
Abstract

Cited by 27 (7 self)
We prove a number of general theorems about ZK, the class of problems possessing (computational) zero-knowledge proofs. Our results are unconditional, in contrast to most previous works on ZK, which rely on the assumption that one-way functions exist. We establish several new characterizations of ZK, and use these characterizations to prove results such as: 1. Honest-verifier ZK equals general ZK. 2. Public-coin ZK equals private-coin ZK. 3. ZK is closed under union. 4. ZK with imperfect completeness equals ZK with perfect completeness. 5. Any problem in ZK ∩ NP can be proven in computational zero knowledge by a BPP^NP prover. 6. ZK with black-box simulators equals ZK with general, non-black-box simulators. The above equalities refer to the resulting class of problems (and do not necessarily preserve other efficiency measures such as round complexity). Our approach is to combine the conditional techniques previously used in the study of ZK with the unconditional techniques developed in the study of SZK, the class of problems possessing statistical zero-knowledge proofs. To enable this combination, we prove that every problem in ZK can be decomposed into a problem in SZK together with a set of instances from which a one-way function can be constructed.
Random walks on combinatorial objects
 Surveys in Combinatorics 1999
, 1999
"... Summary Approximate sampling from combinatoriallydefined sets, using the Markov chain Monte Carlo method, is discussed from the perspective of combinatorial algorithms. We also examine the associated problem of discrete integration over such sets. Recent work is reviewed, and we reexamine the unde ..."
Abstract

Cited by 25 (8 self)
Approximate sampling from combinatorially defined sets, using the Markov chain Monte Carlo method, is discussed from the perspective of combinatorial algorithms. We also examine the associated problem of discrete integration over such sets. Recent work is reviewed, and we re-examine the underlying formal foundational framework in the light of this. We give a detailed treatment of the coupling technique, a classical method for analysing the convergence rates of Markov chains. The related topic of perfect sampling, in which the goal is to sample exactly from the target set, is also examined. We conclude with a discussion of negative results in this area: results which imply that there are no polynomial-time algorithms of a particular type for a particular problem.
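The coupling technique treated in this survey can be illustrated on the lazy random walk on the hypercube {0,1}^n: run two copies of the chain that pick the same coordinate and the same new bit each step, so they agree forever on every updated coordinate, and the coalescence time reduces to coupon collecting over the n coordinates. A minimal sketch (the chain and parameters here are illustrative, not taken from the survey):

```python
import random

def coupled_walk_coalescence(n, rng):
    # Classic coordinate coupling for the lazy random walk on {0,1}^n:
    # both copies pick the same coordinate and the same new bit, so they
    # agree forever on every updated coordinate.
    x = [rng.randint(0, 1) for _ in range(n)]
    y = [1 - b for b in x]          # start the second copy antipodal
    steps = 0
    while x != y:
        i = rng.randrange(n)        # same coordinate for both chains
        b = rng.randint(0, 1)       # same new bit for both chains
        x[i] = b
        y[i] = b
        steps += 1
    return steps

rng = random.Random(0)
times = [coupled_walk_coalescence(16, rng) for _ in range(200)]
avg = sum(times) / len(times)
# Coalescence needs every coordinate touched at least once, so the mean
# behaves like the coupon-collector bound n * H_n (about 54 for n = 16).
```

Since both copies use the same randomness, the coupling never changes the marginal law of either chain; the expected coalescence time then upper-bounds the mixing time, which is the core of the coupling argument the survey develops.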
Pseudorandomness for approximate counting and sampling
 In Proceedings of the 20th IEEE Conference on Computational Complexity
, 2005
"... We study computational procedures that use both randomness and nondeterminism. Examples are ArthurMerlin games and approximate counting and sampling of NPwitnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allow ..."
Abstract

Cited by 23 (4 self)
We study computational procedures that use both randomness and nondeterminism. Examples are Arthur-Merlin games and approximate counting and sampling of NP-witnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption. One special case is a proof that EXP ⊄ NP/poly ⇒ EXP ⊄ P^NP_||/poly. In words, if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. In addition to simplifying the framework of AM derandomization, we show that this “unified assumption” suffices to derandomize several other probabilistic procedures. For these results we define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. As a consequence, under this assumption, there are deterministic polynomial-time algorithms that use non-adaptive NP queries and perform the following tasks: • approximate counting of NP-witnesses: given a Boolean circuit A, output r such that (1 − ε)·|A⁻¹(1)| ≤ r ≤ |A⁻¹(1)|.
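The hashing techniques mentioned above can be illustrated by the classic Sipser/Valiant–Vazirani-style size estimate: |A⁻¹(1)| is roughly 2^m for the largest m at which a random affine GF(2) hash h : {0,1}^n → {0,1}^m still has a witness with h(x) = 0. The sketch below is a toy, with brute-force enumeration standing in for the paper's NP oracle and only a coarse constant-factor guarantee rather than the (1 − ε) bound above:

```python
import itertools
import random

def hash_estimate(witnesses, n, rng, trials=30):
    # Report 2^m for the largest m at which some witness x satisfies
    # h(x) = 0 for a random affine GF(2) hash h(x) = Ax + b.  Scanning
    # `witnesses` stands in for the NP oracle query "is some witness in
    # the preimage of 0 under h?".
    def survives(m):
        rows = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
        b = [rng.randint(0, 1) for _ in range(m)]
        return any(
            all(sum(r[i] * x[i] for i in range(n)) % 2 == b[j]
                for j, r in enumerate(rows))
            for x in witnesses
        )
    samples = []
    for _ in range(trials):
        m = n
        while m > 0 and not survives(m):
            m -= 1
        samples.append(m)
    samples.sort()
    return 2 ** samples[trials // 2]   # the median is robust to outliers

n = 10
pred = lambda x: sum(x) <= 2           # a toy stand-in for the circuit A
witnesses = [x for x in itertools.product([0, 1], repeat=n) if pred(x)]
exact = len(witnesses)                 # 1 + 10 + 45 = 56
est = hash_estimate(witnesses, n, random.Random(1))
```

The intuition: a random m-bit hash leaves about |A⁻¹(1)|/2^m witnesses in any fixed bucket, so the threshold m where the bucket empties out locates log₂ of the count to within a constant.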
On the complexity of interactive proofs with bounded communication
 Information Processing Letters
, 1998
"... We investigate the computational complexity of languages which haveinteractive proof systems of bounded message complexity. In particular, denoting the length of the input by n, we show that If L has an interactive proof in which the total communication is bounded by c(n) bits then L can be recogniz ..."
Abstract

Cited by 19 (1 self)
We investigate the computational complexity of languages which have interactive proof systems of bounded message complexity. In particular, denoting the length of the input by n, we show that: (1) If L has an interactive proof in which the total communication is bounded by c(n) bits, then L can be recognized by a probabilistic machine in time exponential in O(c(n) + log n). (2) If L has a public-coin interactive proof in which the prover sends c(n) bits, then L can be recognized by a probabilistic machine in time exponential in O(c(n) log c(n) + log n). (3) If L has an interactive proof in which the prover sends c(n) bits, then L can be recognized by a probabilistic machine with an NP oracle in time exponential in O(c(n) log c(n) + log n). Work done while on a sabbatical leave at LCS, MIT.
How to get more mileage from randomness extractors
, 2007
"... Let C be a class of distributions over {0, 1}^n. A deterministic randomness extractor for C isa function E: {0, 1}n! {0, 1}m such that for any X in C the distribution E(X) is statisticallyclose to the uniform distribution. A long line of research deals with explicit constructions of such extractors ..."
Abstract

Cited by 17 (5 self)
Let C be a class of distributions over {0,1}^n. A deterministic randomness extractor for C is a function E : {0,1}^n → {0,1}^m such that for any X in C the distribution E(X) is statistically close to the uniform distribution. A long line of research deals with explicit constructions of such extractors for various classes C while trying to maximize m. In this paper we give a general transformation that transforms a deterministic extractor E that extracts “few” bits into an extractor E′ that extracts “almost all the bits present in the source distribution”. More precisely, we prove a general theorem saying that if E and C satisfy certain properties, then we can transform E into such an extractor E′. Our methods build on (and generalize) a technique of Gabizon, Raz and Shaltiel (FOCS 2004) that presents such a transformation for the very restricted class C of “oblivious bit-fixing sources”. The high-level idea is to find properties of E and C which allow “recycling” the output of E so that it can be “reused” to operate on the source distribution. An obvious obstacle is that the output of E is correlated with the source distribution. Using our transformation we give an explicit construction of a two-source extractor E : {0,1}^n × {0,1}^n → {0,1}^m such that for every two independent distributions X1 and X2 over {0,1}^n with min-entropy at least k = (1/2 + δ)n and every ε ≥ 2^(−log⁴ n), E(X1, X2) is ε-close to the uniform distribution on m = 2k − C_δ·log(1/ε) bits. This result is optimal except for the precise constant C_δ and improves previous results by Chor and Goldreich (SICOMP 1988), Vazirani (Combinatorica 1987) and Dodis et al. (RANDOM 2004).
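The Chor–Goldreich result that this abstract improves on is easy to demonstrate: the inner product ⟨x, y⟩ mod 2 is a one-bit two-source extractor whenever both sources have min-entropy rate above 1/2. A small sketch with exhaustively computed bias (the parameters n = 8, k = 5 and the random flat sources are illustrative):

```python
import random

def inner_product_bit(x, y):
    # <x, y> mod 2 over GF(2): the Chor--Goldreich two-source extractor
    # (one output bit; the paper's construction extracts many more).
    return bin(x & y).count("1") % 2

def extractor_bias(src1, src2):
    # Exact bias of IP(X1, X2) when X1, X2 are uniform (flat) on src1, src2.
    ones = sum(inner_product_bit(x, y) for x in src1 for y in src2)
    return abs(ones / (len(src1) * len(src2)) - 0.5)

rng = random.Random(2)
n, k = 8, 5                               # min-entropy k = 5 > n/2 = 4
src1 = rng.sample(range(2 ** n), 2 ** k)  # a flat source of min-entropy k
src2 = rng.sample(range(2 ** n), 2 ** k)
bias = extractor_bias(src1, src2)
# Lindsey's lemma bounds the bias by sqrt(2^n / (2^k * 2^k)) / 2 = 0.25
# here; typical random flat sources do far better.
```

Flat sources suffice for the demonstration because every distribution of min-entropy k is a convex combination of flat sources of support 2^k.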
On the complexity of succinct zero-sum games
 IEEE Conference on Computational Complexity
, 2005
"... We study the complexity of solving succinct zerosum games, i.e., the games whose payoff matrix M is given implicitly by a Boolean circuit C such that M(i, j) = C(i, j). We complement the known EXPhardness of computing the exact value of a succinct zerosum game by several results on approximating ..."
Abstract

Cited by 13 (0 self)
We study the complexity of solving succinct zero-sum games, i.e., games whose payoff matrix M is given implicitly by a Boolean circuit C such that M(i, j) = C(i, j). We complement the known EXP-hardness of computing the exact value of a succinct zero-sum game by several results on approximating the value. (1) We prove that approximating the value of a succinct zero-sum game to within an additive factor is complete for the class promise-S_2^p, the “promise” version of S_2^p. To the best of our knowledge, it is the first natural problem shown complete for this class. (2) We describe a ZPP^NP algorithm for constructing approximately optimal strategies, and hence for approximating the value, of a given succinct zero-sum game. As a corollary, we obtain, in a uniform fashion, several complexity-theoretic results, e.g., a ZPP^NP algorithm for learning circuits for SAT [7] and a recent result by Cai [9] that S_2^p ⊆ ZPP^NP. (3) We observe that approximating the value of a succinct zero-sum game to within a multiplicative factor is in PSPACE, and that it cannot be in promise-S_2^p unless the polynomial-time hierarchy collapses. Thus, under a reasonable complexity-theoretic assumption, multiplicative-factor approximation of succinct zero-sum games is strictly harder than additive-factor approximation.
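Additive approximation of a zero-sum game value given only oracle access to the payoff circuit can be sketched with the standard multiplicative-weights method (a generic illustration, not the paper's promise-S_2^p machinery; the payoff "circuit" below is a hypothetical toy, the 4×4 identity game whose value for the maximizing row player is 1/4):

```python
import math

def payoff_circuit(i, j):
    # Toy stand-in for the Boolean circuit C with M(i, j) = C(i, j):
    # the 4x4 identity matrix, whose game value is 1/4.
    return 1.0 if i == j else 0.0

def approx_value(M, n, rounds=2000):
    # Multiplicative-weights sketch: the maximizing row player runs
    # Hedge while the column player best-responds each round; the
    # averaged row strategy is approximately optimal.
    eta = math.sqrt(math.log(n) / rounds)
    w = [1.0] * n
    avg = [0.0] * n
    for _ in range(rounds):
        total = sum(w)
        p = [wi / total for wi in w]
        # Column player's best response minimizes the row player's payoff.
        j = min(range(n), key=lambda jj: sum(p[i] * M(i, jj) for i in range(n)))
        for i in range(n):
            w[i] *= math.exp(eta * M(i, j))
            avg[i] += p[i] / rounds
    # Value of the averaged row strategy against the best column response;
    # this is within O(sqrt(log(n) / rounds)) of the true value.
    return min(sum(avg[i] * M(i, jj) for i in range(n)) for jj in range(n))

value = approx_value(payoff_circuit, 4)
```

For a succinct game the matrix is exponentially large, so the argmin best-response step is exactly where an NP oracle would enter; here n is small enough to scan directly.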
Lower Bounds on Signatures From Symmetric Primitives
, 2008
"... We show that every construction of onetime signature schemes from a random oracle achieves blackbox security at most 2 (1+o(1))q, where q is the total number of oracle queries asked by the key generation, signing, and verification algorithms. That is, any such scheme can be broken with probability ..."
Abstract

Cited by 10 (4 self)
We show that every construction of one-time signature schemes from a random oracle achieves black-box security at most 2^((1+o(1))q), where q is the total number of oracle queries asked by the key generation, signing, and verification algorithms. That is, any such scheme can be broken with probability close to 1 by a (computationally unbounded) adversary making 2^((1+o(1))q) queries to the oracle. This is tight up to a constant factor in the number of queries, since a simple modification of Lamport’s one-time signatures (Lamport ’79) achieves 2^((0.812−o(1))q) black-box security using q queries to the oracle. Our result extends (with a loss of a constant factor in the number of queries) also to the random-permutation and ideal-cipher oracles. Since symmetric primitives (e.g. block ciphers, hash functions, and message authentication codes) can be constructed by a constant number of queries to the mentioned oracles, as a corollary we get lower bounds on the efficiency of signature schemes from symmetric primitives when the construction is black-box. This can be taken as evidence of an inherent efficiency gap between signature schemes and symmetric primitives.
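Lamport's one-time scheme referenced above is short enough to sketch. SHA-256 stands in for the random oracle (the paper's analysis lives in the idealized oracle model, and this plain version is the unoptimized 2^((1+o(1))q) regime, not the 2^(0.812q) variant): for n-bit messages, key generation makes 2n oracle queries and verification makes n, so q ≈ 3n here.

```python
import hashlib
import os

H = lambda s: hashlib.sha256(s).digest()   # stand-in for the random oracle

def keygen(n=16):
    # 2n random secrets; the public key is their hashes (2n oracle queries).
    sk = [[os.urandom(32) for _ in range(2)] for _ in range(n)]
    pk = [[H(sk[i][b]) for b in range(2)] for i in range(n)]
    return sk, pk

def sign(sk, msg_bits):
    # Reveal one preimage per message bit (no oracle queries needed).
    return [sk[i][b] for i, b in enumerate(msg_bits)]

def verify(pk, msg_bits, sig):
    # n oracle queries: re-hash each revealed secret.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits))

sk, pk = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
sig = sign(sk, msg)
ok = verify(pk, msg, sig)                      # valid signature accepted
bad = verify(pk, [1 - b for b in msg], sig)    # wrong message rejected
```

The scheme is strictly one-time: signing a second message would reveal preimages for both values of some bit position, letting an adversary forge.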
A scalable and nearly uniform generator of SAT witnesses
 In Proc. of CAV
, 2013
"... Abstract. Functional verification constitutes one of the most challenging tasks in the development of modern hardware systems, and simulationbased verification techniques dominate the functional verification landscape. A dominant paradigm in simulationbased verification is directed random testing, ..."
Abstract

Cited by 2 (1 self)
Functional verification constitutes one of the most challenging tasks in the development of modern hardware systems, and simulation-based verification techniques dominate the functional verification landscape. A dominant paradigm in simulation-based verification is directed random testing, where a model of the system is simulated with a set of random test stimuli that are uniformly or near-uniformly distributed over the space of all stimuli satisfying a given set of constraints. Uniform or near-uniform generation of solutions for large constraint sets is therefore a problem of theoretical and practical interest. For Boolean constraints, prior work offered heuristic approaches with no guarantee of performance, and theoretical approaches with proven guarantees but poor performance in practice. We offer here a new approach with theoretical performance guarantees and demonstrate its practical utility on large constraint sets.
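For contrast with the hashing-based generator the paper develops, the naive baseline is rejection sampling, which is exactly uniform over the witnesses but whose expected running time scales as 2^n divided by the number of solutions. A minimal sketch over a hypothetical three-variable constraint set:

```python
import random

def satisfies(assignment):
    # Toy constraint set standing in for a CNF formula:
    # (x0 OR x1) AND (NOT x1 OR x2).  Its 4 solutions are
    # 011, 100, 101, 111 (as x0 x1 x2).
    x0, x1, x2 = assignment
    return (x0 or x1) and ((not x1) or x2)

def rejection_sample(n, rng):
    # Exactly uniform over satisfying assignments, but with no runtime
    # guarantee: expected tries = 2^n / #solutions.
    while True:
        a = tuple(rng.randint(0, 1) for _ in range(n))
        if satisfies(a):
            return a

rng = random.Random(3)
counts = {}
for _ in range(4000):
    a = rejection_sample(3, rng)
    counts[a] = counts.get(a, 0) + 1
# Each of the 4 solutions should appear roughly 1000 times.
```

The paper's contribution is precisely to avoid this 2^n blow-up while keeping a provable near-uniformity guarantee.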
Texts in Computational Complexity: Counting Problems
, 2006
"... We now turn to a new type of computational problems, which vastly generalize decision problems of the NPtype. We refer to counting problems, and more specifically to counting objects that can be efficiently recognized. The two formulations of NP provide a suitable definition of such objects and yie ..."
Abstract
We now turn to a new type of computational problems, which vastly generalizes decision problems of the NP type. We refer to counting problems, and more specifically to counting objects that can be efficiently recognized. The two formulations of NP provide a suitable definition of such objects and yield corresponding counting problems: 1. Counting the number of solutions for a given instance of a search problem (of a relation) R ⊆ {0,1}* × {0,1}* having efficiently checkable solutions (i.e., R ∈ PC). That is, on input x, we are required to output |{y : (x, y) ∈ R}|. 2. Counting the number of NP-witnesses (with respect to a specific verification procedure V) for a given instance of an NP set S (i.e., S ∈ NP and V is the corresponding verification procedure). That is, on input x, we are required to output |{y : V(x, y) = 1}|.
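The second formulation can be made concrete with a toy verifier: a hypothetical subset-sum verification procedure V, with the counting problem |{y : V(x, y) = 1}| answered by brute-force enumeration over candidate witnesses (exponential time, of course; the point is only the definition):

```python
import itertools

def verifier(x, y):
    # Toy verification procedure V: y (a bit vector) is a witness for
    # x = (nums, t) if the subset of nums it selects sums to t.
    nums, t = x
    return sum(n for n, b in zip(nums, y) if b) == t

def count_witnesses(x, m):
    # The counting problem: output |{y : V(x, y) = 1}| by enumeration.
    return sum(1 for y in itertools.product([0, 1], repeat=m)
               if verifier(x, y))

x = ([3, 1, 4, 1, 5], 5)
c = count_witnesses(x, 5)
# The witnesses are {5}, {4,1}, {4,1'}, {3,1,1'}, so c = 4; note that the
# two occurrences of 1 give distinct witnesses, which is exactly why the
# count depends on V and not just on the underlying decision problem.
```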