Results 1–10 of 60
Proof verification and hardness of approximation problems
 In Proc. 33rd Ann. IEEE Symp. on Found. of Comp. Sci.
, 1992
Abstract

Cited by 721 (46 self)
We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided "proof" with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [6], whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence, we prove that no MAX SNP-hard problem has a polynomial-time approximation scheme unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82], and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees, and shortest superstring. We also improve upon the clique hardness results of Feige, Goldwasser, Lovász, Safra and Szegedy [42] and Arora and Safra [6], and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
Self-Testing/Correcting with Applications to Numerical Problems
, 1990
Abstract

Cited by 348 (27 self)
Suppose someone gives us an extremely fast program P that we can call as a black box to compute a function f. Should we trust that P works correctly? A self-testing/correcting pair allows us to: (1) estimate the probability that P(x) ≠ f(x) when x is randomly chosen; (2) on any input x, compute f(x) correctly as long as P is not too faulty on average. Furthermore, both (1) and (2) take time only slightly more than the original running time of P. We present general techniques for constructing simple-to-program self-testing/correcting pairs for a variety of numerical problems, including integer multiplication, modular multiplication, matrix multiplicatio...
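The self-correcting half of such a pair can be sketched for a linear function over a prime field. The modulus, the coefficient, and the failure pattern of `buggy_program` below are hypothetical stand-ins for illustration, not taken from the paper:

```python
import random
from collections import Counter

P_PRIME = 1_000_003   # illustrative prime modulus (an assumption, not from the paper)
SECRET_A = 42

def buggy_program(x):
    # Stand-in for the black-box program P: it should compute
    # f(x) = 42*x mod p, but returns garbage on roughly 10% of inputs.
    if x % 10 == 3:
        return 0
    return (SECRET_A * x) % P_PRIME

def self_correct(P, x, trials=41):
    # Random self-reducibility of linear functions:
    #   f(x) = f((x - r) mod p) + f(r)  (mod p)   for any r,
    # so choosing r uniformly turns the fixed query x into two queries
    # at *uniformly random* points, where P is usually correct.
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(P_PRIME)
        votes[(P((x - r) % P_PRIME) + P(r)) % P_PRIME] += 1
    return votes.most_common(1)[0][0]   # majority answer
```

Each vote queries P at two uniformly random points, so if P errs on a δ fraction of inputs a single vote is wrong with probability at most 2δ, and the majority over many votes is correct with high probability.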
Improved low-degree testing and its applications
 In 29th STOC
, 1997
Abstract

Cited by 146 (19 self)
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low-degree test and the distance of this function to the nearest degree-d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [29]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement ≈ δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for δ << 0.5. The analysis uses a version of Hilbert irreducibility, a tool used in the factoring of multivariate polynomials. As a consequence we obtain an alternate construction for the following proof system: a constant-prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^(−log^(1−ε) n). Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [28]. Our result was completed after we heard of their claim. A second consequence of our analysis is a self-tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct on only a δ fraction of inputs, where δ << 0.5, then the tester/corrector determines δ and generates O(1/δ) randomized programs, such that one of the programs is correct on every input, with high probability.
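A minimal sketch of a line test in the Rubinfeld–Sudan spirit (the field size, trial count, and function names below are illustrative assumptions): restrict f to a random line, interpolate a univariate polynomial of degree d through d+1 points of the restriction, and check its prediction at one further point.

```python
import random

P = 97  # a small prime field F_p (illustrative choice)

def eval_line(f, x, h, t):
    # Evaluate f at the point x + t*h on the line through x with direction h.
    return f(tuple((xi + t * hi) % P for xi, hi in zip(x, h)))

def low_degree_test(f, m, d, trials=30):
    # Line test: the restriction of a degree-<=d m-variate polynomial to any
    # line is a univariate polynomial of degree <= d, so its value at t = d+1
    # is determined by its values at t = 0..d via Lagrange interpolation.
    for _ in range(trials):
        x = tuple(random.randrange(P) for _ in range(m))
        h = tuple(random.randrange(P) for _ in range(m))
        ys = [eval_line(f, x, h, t) for t in range(d + 2)]
        pred = 0
        for i in range(d + 1):
            num, den = 1, 1
            for j in range(d + 1):
                if j != i:
                    num = (num * (d + 1 - j)) % P
                    den = (den * (i - j)) % P
            pred = (pred + ys[i] * num * pow(den, P - 2, P)) % P
        if pred != ys[d + 1]:
            return False   # restriction inconsistent with degree <= d
    return True
```

Genuine degree-d polynomials always pass; the cited analyses concern how far a function that passes with noticeable probability can be from every degree-d polynomial.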
The art of uninformed decisions: A primer to property testing
 Science
, 2001
Abstract

Cited by 134 (21 self)
Property testing is a new field in computational theory that deals with the information that can be deduced from the input when the number of allowable queries (reads from the input) is significantly smaller than its size.
Fast batch verification for modular exponentiation and digital signatures
, 1998
Abstract

Cited by 132 (2 self)
Many tasks in cryptography (e.g., digital signature verification) call for verification of a basic operation like modular exponentiation in some group: given (g, x, y), check that g^x = y. This is typically done by recomputing g^x and checking that we get y. We would like to do it differently, and faster. The approach we use is batching. Focusing first on the basic modular exponentiation operation, we provide some probabilistic batch verifiers, or tests, that verify a sequence of modular exponentiations significantly faster than the naive recomputation method. This yields speedups for several verification tasks that involve modular exponentiations.
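The batching idea behind one of these tests, the random subset test, can be sketched as follows. The group parameters below are toy values chosen for illustration; real use needs cryptographic sizes:

```python
import random

# Toy parameters: p = 2q + 1 a safe prime, g a generator of the order-q subgroup.
P = 1019
Q = 509           # 1019 = 2*509 + 1
G = 4             # 4 = 2^2 is a quadratic residue mod 1019, hence has order q

def batch_verify(pairs, reps=20):
    # Random subset test: for random bits s_i, the single check
    #   prod(y_i^{s_i})  ==  g^{sum(s_i * x_i) mod q}   (mod p)
    # passes whenever every y_i = g^{x_i}, and catches any incorrect pair
    # with probability 1/2 per repetition; repeating amplifies soundness.
    for _ in range(reps):
        s = [random.randrange(2) for _ in pairs]
        lhs, exp = 1, 0
        for (x, y), si in zip(pairs, s):
            if si:
                lhs = (lhs * y) % P
                exp = (exp + x) % Q
        if lhs != pow(G, exp, P):
            return False
    return True
```

One repetition costs a single exponentiation plus a few multiplications, against one full exponentiation per claimed pair for naive recomputation.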
Exponential lower bound for 2-query locally decodable codes via a quantum argument
 Journal of Computer and System Sciences
, 2003
Abstract

Cited by 123 (18 self)
A locally decodable code encodes n-bit strings x in m-bit codewords C(x) in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries require exponential length: m = 2^Ω(n). Previously this was known only for linear codes (Goldreich et al. 02). ...
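For context, the standard 2-query LDC is the Hadamard code, whose length 2^n matches this lower bound; a sketch (the function names are ours):

```python
import random

def hadamard_encode(x_bits):
    # Codeword entry C(x)[a] = <x, a> mod 2 for every a in {0,1}^n,
    # so the codeword has length m = 2^n.
    n = len(x_bits)
    return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x_bits)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, i, n):
    # Recover x_i with 2 queries: <x, a> + <x, a XOR e_i> = x_i (mod 2).
    # Since a is uniform, each individual query lands on a uniformly random
    # position, which is what tolerates a small fraction of corruptions.
    a = random.randrange(2 ** n)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2
```

If a δ fraction of positions is corrupted, a union bound gives success probability at least 1 − 2δ per decoding attempt, which can be amplified by majority vote.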
Property Testing in Bounded Degree Graphs
 Algorithmica
, 1997
Abstract

Cited by 120 (36 self)
We further develop the study of testing graph properties as initiated by Goldreich, Goldwasser and Ron. Whereas they view graphs as represented by their adjacency matrix and measure distance between graphs as a fraction of all possible vertex pairs, we view graphs as represented by bounded-length incidence lists and measure distance between graphs as a fraction of the maximum possible number of edges. Thus, while the previous model is most appropriate for the study of dense graphs, our model is most appropriate for the study of bounded-degree graphs. In particular, we present randomized algorithms for testing whether an unknown bounded-degree graph is connected, k-connected (for k > 1), planar, etc. Our algorithms work in time polynomial in 1/ε, always accept the graph when it has the tested property, and reject with high probability if the graph is ε-far from having the property. For example, the 2-connectivity algorithm rejects (w.h.p.) any N-vertex degree-d graph for which more ...
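A sketch of a connectivity tester in this model (the constants 8 and 16 are illustrative choices, not the paper's): a graph far from connected must have many small components, which a size-bounded BFS from random start vertices will find.

```python
import random
from collections import deque

def connectivity_tester(adj, eps, d, samples=None):
    # adj: dict vertex -> neighbour list (bounded-degree incidence lists).
    # A graph eps-far from connected in the bounded-degree model has many
    # connected components of size below roughly 1/(eps*d), so a bounded
    # BFS from a random vertex finds one with high probability.
    n = len(adj)
    limit = max(1, int(8 / (eps * d)))            # small-component threshold
    samples = samples or max(1, int(16 / eps))    # number of BFS starts
    vertices = list(adj)
    for _ in range(samples):
        start = random.choice(vertices)
        seen = {start}
        q = deque([start])
        while q and len(seen) <= limit:           # stop once the BFS grows large
            for w in adj[q.popleft()]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        if len(seen) <= limit and len(seen) < n:
            return False    # found a small component: certainly disconnected
    return True             # connected graphs are always accepted
```

The tester is one-sided, matching the abstract: a connected graph is never rejected, since every bounded BFS either exceeds the size threshold or exhausts the whole vertex set.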
Software Reliability via Run-Time Result-Checking
 Journal of the ACM
, 1994
Abstract

Cited by 102 (2 self)
We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and reliable functionality. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: i.e., that they should be allowed to generate, preprocess, and store random bits prior to run time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (for example, a Fourier transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored randomness is allowed.
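A classic example of a simple result checker in this spirit is Freivalds' test for matrix multiplication (our illustration, not the paper's case study): instead of recomputing A·B in O(n^3) time, compare A(Br) against Cr for random 0/1 vectors r.

```python
import random

def freivalds_check(A, B, C, trials=20):
    # Checks the claim A @ B == C using only matrix-vector products,
    # i.e. O(n^2) work per trial instead of O(n^3) recomputation.
    # If C is wrong, a random 0/1 vector r exposes the error
    # ((A @ B - C) @ r != 0) with probability >= 1/2 per trial.
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randrange(2) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True
```

Twenty trials drive the failure probability below 2^-20 while still costing asymptotically less than one recomputation.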
Learning polynomials with queries: The highly noisy case
, 1995
Abstract

Cited by 87 (19 self)
Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree-d polynomials which agree with f on a tiny but non-negligible fraction, δ, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1/δ and n and exponential in d, provided δ is Ω(√(d/|F|)). For the special case when d = 1, we solve this problem for all δ > 1/|F|; in this case the running time of our algorithm is bounded by a polynomial in 1/δ and n. Our algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this task for the case when F = GF(2) (and d = 1).

This task is closely related to agnostic learning, studied by Kearns et al. [21] (see also [27, 28, 22]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is to come up with a simple explanation which best fits the examples. Therefore the best explanation may account for only part of the phenomena. In some situations, when the phenomena appear very irregular, providing an explanation which fits only part of them is better than nothing. Interestingly, Kearns et al. did not consider the use of queries (but rather examples drawn from an arbitrary distribution), as they were skeptical that queries could be of any help. We show that queries do seem to help (see below).
Regular Languages are Testable with a Constant Number of Queries
 SIAM Journal on Computing
, 1999
Abstract

Cited by 79 (20 self)
We continue the study of combinatorial property testing, initiated by Goldreich, Goldwasser and Ron in [7]. The subject of this paper is testing regular languages. Our main result is as follows. For a regular language L ⊆ {0,1}* and an integer n there exists a randomized algorithm which always accepts a word w of length n if w ∈ L, and rejects it with high probability if w has to be modified in at least εn positions to create a word in L. The algorithm queries Õ(1/ε) bits of w. This query complexity is shown to be optimal up to a factor polylogarithmic in 1/ε. We also discuss testability of more complex languages and show, in particular, that the query complexity required for testing context-free languages cannot be bounded by any function of ε. The problem of testing regular languages can be viewed as part of a very general approach, seeking to probe testability of properties defined by logical means.