Results 1–10 of 15
Proof verification and hardness of approximation problems
 In Proc. 33rd Ann. IEEE Symp. on Found. of Comp. Sci., 1992
Abstract

Cited by 829 (39 self)
We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [6], whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial-time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82], and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige, Goldwasser, Lovász, Safra and Szegedy [42] and Arora and Safra [6], and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
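The resource pattern the abstract describes — a logarithmic number of random bits spent to choose a constant number of proof positions — can be sketched in toy form. This is purely illustrative: the parity predicate below is a hypothetical stand-in, not the paper's (far more involved) local test.

```python
import random

def toy_pcp_verifier(proof: list[int], queries: int = 3) -> bool:
    """Illustrates only the query pattern of a PCP verifier: naming one
    of len(proof) positions costs about log2(len(proof)) random bits,
    and repeating that a constant number of times keeps the total
    randomness logarithmic while only constantly many proof bits are
    ever examined.  The parity check is a toy stand-in predicate."""
    positions = [random.randrange(len(proof)) for _ in range(queries)]
    examined = [proof[i] for i in positions]  # constant number of bits read
    return sum(examined) % 2 == 0             # toy local predicate

# The all-zero "proof" makes this toy test accept for every random choice.
```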
Definitions And Properties Of Zero-Knowledge Proof Systems
 Journal of Cryptology, 1994
Abstract

Cited by 134 (10 self)
In this paper we investigate some properties of zero-knowledge proofs, a notion introduced by Goldwasser, Micali and Rackoff. We introduce and classify two definitions of zero-knowledge: auxiliary-input zero-knowledge and black-box-simulation zero-knowledge. We explain why auxiliary-input zero-knowledge is a definition more suitable for cryptographic applications than the original [GMR1] definition. In particular, we show that any protocol solely composed of subprotocols which are auxiliary-input zero-knowledge is itself auxiliary-input zero-knowledge. We show that black-box-simulation zero-knowledge implies auxiliary-input zero-knowledge (which in turn implies the [GMR1] definition). We argue that all known zero-knowledge proofs are in fact black-box-simulation zero-knowledge (i.e., were proved zero-knowledge using black-box simulation of the verifier). As a result, all known zero-knowledge proof systems are shown to be auxiliary-input zero-knowledge and can be used for cryptographic applications such as those in [GMW2]. We demonstrate the triviality of certain classes of zero-knowledge proof systems, in the sense that only languages in BPP have zero-knowledge proofs of these classes. In particular, we show that any language having a Las Vegas zero-knowledge proof system necessarily belongs to RP. We show that randomness of both the verifier and the prover, and nontriviality of the interaction, are essential properties of (nontrivial) auxiliary-input zero-knowledge proofs.
Hardness Of Approximations
 1996
Abstract

Cited by 120 (4 self)
This chapter is a self-contained survey of recent results about the hardness of approximating NP-hard optimization problems.
Parallelization, Amplification, and Exponential Time Simulation of Quantum Interactive Proof Systems
 In Proceedings of the 32nd ACM Symposium on Theory of Computing, 2000
Abstract

Cited by 76 (19 self)
In this paper we consider quantum interactive proof systems, which are interactive proof systems in which the prover and verifier may perform quantum computations and exchange quantum information. We prove that any polynomial-round quantum interactive proof system with two-sided bounded error can be parallelized to a quantum interactive proof system with exponentially small one-sided error in which the prover and verifier exchange only 3 messages. This yields a simplified proof that PSPACE has 3-message quantum interactive proof systems. We also prove that any language having a quantum interactive proof system can be decided in deterministic exponential time, implying that single-prover quantum interactive proof systems are strictly less powerful than multiple-prover classical interactive proof systems unless EXP = NEXP.

1. INTRODUCTION Interactive proof systems were introduced by Babai [3] and Goldwasser, Micali, and Rackoff [17] in 1985. In the same year, Deutsch [10] gave the first for...
Some Applications of Coding Theory in Computational Complexity
 2004
Abstract

Cited by 69 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography.
Limits on the Power of Quantum Statistical Zero-Knowledge
 2003
Abstract

Cited by 39 (4 self)
In this paper we propose a definition for honest-verifier quantum statistical zero-knowledge interactive proof systems and study the resulting complexity class, which we denote QSZK
The approximability of NP-hard problems
 In Proceedings of the Annual ACM Symposium on Theory of Computing, 1998
Abstract

Cited by 17 (0 self)
Many problems in combinatorial optimization are NP-hard (see [60]). This has forced researchers to explore techniques for dealing with NP-completeness. Some have considered algorithms that solve “typical”
Partially Persistent Data Structures of Bounded Degree with Constant Update Time
 Nordic Journal of Computing, 1996
Abstract

Cited by 14 (3 self)
The problem of making bounded in-degree and out-degree data structures partially persistent is considered. The node-copying method of Driscoll et al. is extended so that updates can be performed in worst-case constant time on the pointer machine model. Previously this was only known to be possible in amortised constant time [2]. The result is presented in terms of a new strategy for Dietz and Raman's dynamic two-player pebble game on graphs. It is shown how to implement the strategy, and the upper bound on the required number of pebbles is improved from 2b + 2d + O(√b) to d + 2b, where b is the bound on the in-degree and d the bound on the out-degree. We also give a lower bound which shows that the number of pebbles depends on the out-degree d.
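The node-copying idea the abstract builds on can be sketched minimally: each node carries one spare timestamped slot, and a fresh copy is made only when the slot is already occupied, so every old version stays readable. This is a toy sketch under simplifying assumptions (a single mutable field, caller-managed pointer redirection), not the paper's worst-case-constant-time construction.

```python
class PNode:
    """Toy partially persistent node in the node-copying style of
    Driscoll et al.: one spare (version, value) slot per node; when
    the slot is full, the node is copied and the caller must redirect
    live pointers to the copy.  Old versions remain readable."""

    def __init__(self, value):
        self.value = value      # value at creation time
        self.extra = None       # optional (version, new_value) update
        self.copy = None        # forwarding pointer to the newer copy

    def read(self, version):
        # Return the field as it was at time `version`.
        if self.extra is not None and version >= self.extra[0]:
            return self.extra[1]
        return self.value

    def write(self, version, new_value):
        # Fill the spare slot if free; otherwise copy the node.
        if self.extra is None:
            self.extra = (version, new_value)
            return self
        newer = PNode(new_value)
        self.copy = newer       # old node forwards to its copy
        return newer            # caller redirects live pointers here
```

After `n = PNode(1)` and `n.write(1, 2)`, both `n.read(0)` (the old version) and `n.read(1)` (the new one) stay answerable; a second write triggers the copy.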
Fibrations and Calculi of Fractions
 Journal of Pure and Applied Algebra, 1994
Abstract

Cited by 3 (0 self)
Given a fibration E → B and a class Σ of arrows of B, one can construct the free fibration on E over B such that all reindexing functors over elements of Σ are equivalences.
On the Role of Algebra in the Efficient Verification of Proofs
 1994
Abstract

Cited by 2 (0 self)
This article extracts the elements of algebra that play a central role in the design of efficient probabilistic verifiers or “probabilistically checkable proof systems (PCPs)”. The main algebraic elements are low-degree polynomials over finite fields. Their role can be broken up into three essential elements:

1. Their classical role in the design of error-correcting codes.

2. Their recently discovered property of being efficiently locally checkable.

3. The existence of characterizations via polynomials of fundamental complexity classes including NP, PSPACE and NEXPTIME.
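The first of these elements — low-degree polynomials as error-correcting codes — rests on the fact that two distinct degree-d polynomials over a field agree on at most d points, so their evaluation tables differ almost everywhere. A minimal Reed–Solomon-flavoured sketch over a small prime field (the prime 97 is an arbitrary illustrative choice):

```python
P = 97  # a small prime field GF(97), chosen only for illustration

def encode(coeffs):
    """Encode a message (the coefficients of a low-degree polynomial)
    as the table of evaluations of that polynomial at every point
    of GF(P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(P)]

def distance(cw1, cw2):
    """Hamming distance between two codewords."""
    return sum(a != b for a, b in zip(cw1, cw2))

# The degree-1 polynomials 1 + 2x and 1 + 3x agree only at x = 0,
# so their evaluation tables differ in the remaining 96 positions.
```

The large guaranteed distance is exactly what makes such encodings error-correcting, and their algebraic structure is what later enables the local checkability the article's second element describes.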