Results 1 - 10 of 529
Property Testing and its connection to Learning and Approximation
Abstract

Cited by 506 (69 self)
We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of the value of the function on instances drawn according to some distribution, and possibly may query the function on instances of its choice. First, we establish some connections between property testing and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being k-colorable or having a ρ-clique (clique of density ρ w.r.t. the vertex set). Our graph property testing algorithms are probabilistic and make assertions which are correct with high probability, utilizing only poly(1/ε) edge-queries into the graph, where ε is the distance parameter. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph which corre...
Non-Deterministic Exponential Time has Two-Prover Interactive Protocols
Abstract

Cited by 437 (38 self)
We determine the exact power of two-prover interactive proof systems introduced by Ben-Or, Goldwasser, Kilian, and Wigderson (1988). In this system, two all-powerful non-communicating provers convince a randomizing polynomial time verifier in polynomial time that the input x belongs to the language L. It was previously suspected (and proved in a relativized sense) that coNP-complete languages do not admit such proof systems. In sharp contrast, we show that the class of languages having two-prover interactive proof systems is nondeterministic exponential time. After the recent results that all languages in PSPACE have single prover interactive proofs (Lund, Fortnow, Karloff, Nisan, and Shamir), this represents a further step demonstrating the unexpectedly immense power of randomization and interaction in efficient provability. Indeed, it follows that multiple provers with coins are strictly stronger than without, since NEXP ≠ NP. In particular, for the first time, provably polynomial time intractable languages turn out to admit "efficient proof systems" since NEXP ≠ P. We show that to prove membership in languages in EXP, the honest provers need the power of EXP only. A consequence, linking more standard concepts of structural complexity, states that if EXP has polynomial size circuits then EXP = Σ₂ = MA. The first part of the proof of the main result extends recent techniques of polynomial extrapolation of truth values used in the single prover case. The second part is a verification scheme for multilinearity of an n-variable function held by an oracle and can be viewed as an independent result on program verification. Its proof rests on combinatorial techniques including the estimation of the expansion rate of a graph.
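The multilinearity verification mentioned at the end of the abstract has a simple exact-case core: a function over (Z_p)^n is multilinear iff its restriction to every axis-parallel line is affine, so one can spot-check random lines. The sketch below is a simplification added for illustration, not the paper's protocol, whose hard part is the robust analysis of functions that only pass approximately.

```python
import random

def multilinearity_test(f, n, p, trials=50, rng=random):
    """Simplified axis-parallel line test over Z_p.  A function
    F: (Z_p)^n -> Z_p is multilinear iff its restriction to every
    axis-parallel line is affine (degree <= 1), so each trial picks a
    random such line and checks that three random points on it are
    collinear.  A multilinear f always passes; a far-from-multilinear
    f is caught with high probability over many trials."""
    for _ in range(trials):
        x = [rng.randrange(p) for _ in range(n)]   # random base point
        i = rng.randrange(n)                        # random coordinate
        t0, t1, t2 = rng.sample(range(p), 3)        # three distinct values
        def at(t):
            y = list(x)
            y[i] = t
            return f(y) % p
        # slope through (t0, f) and (t1, f); the third point must match
        slope = (at(t1) - at(t0)) * pow(t1 - t0, -1, p) % p
        if at(t2) != (at(t0) + slope * (t2 - t0)) % p:
            return False
    return True
```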
Ciphertext-policy attribute-based encryption
 In Proceedings of the IEEE Symposium on Security and Privacy (To Appear)
, 2007
Lower Bounds for Discrete Logarithms and Related Problems
, 1997
Abstract

Cited by 279 (11 self)
This paper considers the computational complexity of the discrete logarithm and related problems in the context of "generic algorithms", that is, algorithms which do not exploit any special properties of the encodings of group elements, other than the property that each group element is encoded as a unique binary string. Lower bounds on the complexity of these problems are proved that match the known upper bounds: any generic algorithm must perform Ω(p^(1/2)) group operations, where p is the largest prime dividing the order of the group. Also, a new method for correcting a faulty Diffie-Hellman oracle is presented.

1 Introduction

The discrete logarithm problem plays an important role in cryptography. The problem is this: given a generator g of a cyclic group G, and an element g^x in G, determine x. A related problem is the Diffie-Hellman problem: given g^x and g^y, determine g^(xy). In this paper, we study the computational power of "generic algorithms", that is, ...
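The Ω(p^(1/2)) lower bound is matched by Shanks' baby-step giant-step algorithm, which is generic in exactly the abstract's sense: it uses only group multiplications and equality of encodings. A standard implementation for the multiplicative group mod a prime, added here as an illustration (not from the paper):

```python
import math

def bsgs_dlog(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), p prime.
    Uses O(sqrt(n)) group operations and storage for group order
    n = p - 1, matching the generic-group lower bound up to constants.
    Returns None if h is not a power of g."""
    n = p - 1                                     # order of the full group
    m = math.isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}    # baby steps: g^j, j < m
    factor = pow(g, -m, p)                        # giant step: g^(-m) mod p
    gamma = h % p
    for i in range(m):
        if gamma in baby:                         # h * g^(-i*m) == g^j
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None
```

For example, `bsgs_dlog(2, pow(2, 777, 1019), 1019)` recovers an exponent x with 2^x ≡ h (mod 1019) using about 2·√1018 multiplications instead of up to 1018.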
Checking Computations in Polylogarithmic Time
, 1991
Abstract

Cited by 274 (11 self)
Motivated by Manuel Blum's concept of instance checking, we consider new, very fast and generic mechanisms of checking computations. Our results exploit recent advances in interactive proof protocols [LFKN92], [Sha92], and especially the MIP = NEXP protocol from [BFL91]. We show that every nondeterministic computational task S(x, y), defined as a polynomial time relation between the instance x, representing the input and output combined, and the witness y, can be modified to a task S' such that: (i) the same instances remain accepted; (ii) each instance/witness pair becomes checkable in polylogarithmic Monte Carlo time; and (iii) a witness satisfying S' can be computed in polynomial time from a witness satisfying S. Here the instance and the description of S have to be provided in error-correcting code (since the checker will not notice slight changes). A modification of the MIP proof was required to achieve polynomial time in (iii); the earlier technique yields N^O(log log N)...
The NP-completeness column: an ongoing guide
 JOURNAL OF ALGORITHMS
, 1987
Abstract

Cited by 243 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should
Matching is as Easy as Matrix Inversion
, 1987
Abstract

Cited by 211 (7 self)
A new algorithm for finding a maximum matching in a general graph is presented; its special feature is that the only computationally nontrivial step required in its execution is the inversion of a single integer matrix. Since this step can be parallelized, we get a simple parallel (RNC²) algorithm. At the heart of our algorithm lies a probabilistic lemma, the isolating lemma. We show applications of this lemma to parallel computation and randomized reductions.
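The matrix behind this reduction is the Tutte matrix: a graph has a perfect matching iff the determinant of its Tutte matrix is not identically zero, which Lovász showed can be tested by a random substitution. Below is a sketch of that randomized decision test, the core on which the paper's isolating-lemma machinery builds; the rendering and parameter choices are illustrative, not taken from the paper.

```python
import random

def has_perfect_matching(n, edges, p=(1 << 31) - 1, rng=random):
    """Lovász's randomized perfect-matching test: substitute random
    values mod a prime p into the Tutte matrix and check whether its
    determinant is nonzero.  One-sided error: True is always correct;
    False is wrong with probability at most n/p (Schwartz-Zippel)."""
    T = [[0] * n for _ in range(n)]
    for (i, j) in edges:                  # skew-symmetric indeterminates
        r = rng.randrange(1, p)
        T[i][j] = r
        T[j][i] = (-r) % p
    # determinant mod p by Gaussian elimination with partial pivoting
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if T[r][col]), None)
        if pivot is None:
            return False                  # determinant is 0 mod p
        if pivot != col:
            T[col], T[pivot] = T[pivot], T[col]
            det = -det % p
        det = det * T[col][col] % p
        inv = pow(T[col][col], -1, p)
        for r in range(col + 1, n):
            f = T[r][col] * inv % p
            for c in range(col, n):
                T[r][c] = (T[r][c] - f * T[col][c]) % p
    return det != 0
```

A path on four vertices is accepted (it has the perfect matching {0-1, 2-3}), while any odd-vertex graph is always rejected, since an odd-dimensional skew-symmetric matrix has determinant identically zero.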
Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds (Extended Abstract)
, 2003
Abstract

Cited by 189 (4 self)
Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: If RP = P (or, even, coRP ⊆ ∩_{ε>0} NTIME(2^(n^ε)), infinitely often), then NEXP is not computable by polynomial-size arithmetic circuits. Thus, establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP.
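For context, the coRP algorithm for Polynomial Identity Testing referred to in the first sentence is the Schwartz-Zippel test: evaluate the polynomials at random points and compare. A minimal sketch, with the black-box polynomials given as Python callables (an illustration added here, not code from the paper):

```python
import random

def prob_identity_test(f, g, nvars, p=(1 << 31) - 1, trials=20, rng=random):
    """Schwartz-Zippel identity test: decide whether two degree-d
    polynomials agree as formal polynomials by comparing them at random
    points mod p.  Any disagreement proves f != g; if all trials agree,
    the polynomials are identical except with probability at most
    (d/p)^trials."""
    for _ in range(trials):
        point = [rng.randrange(p) for _ in range(nvars)]
        if f(*point) % p != g(*point) % p:
            return False          # witnessed a disagreement: f != g for sure
    return True                   # probably identical
```

Because the error is one-sided (a "not identical" answer is always correct), the complement problem is in coRP, which is exactly the structural fact the corollary exploits.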
Improved low-degree testing and its applications
 IN 29TH STOC
, 1997
Abstract

Cited by 155 (19 self)
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low degree test and the distance of this function to the nearest degree d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [29]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement ≈ δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for δ ≪ 0.5. The analysis uses a version of Hilbert irreducibility, a tool used in the factoring of multivariate polynomials. As a consequence we obtain an alternate construction for the following proof system: a constant-prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^(−log^(1−ε) n). Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [28]. Our result was completed after we heard of their claim. A second consequence of our analysis is a self-tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct only on a δ fraction of inputs, where δ ≪ 0.5, then the tester/corrector determines δ and generates O(1/δ) randomized programs, such that one of the programs is correct on every input, with high probability.
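In the univariate case the flavor of such a low-degree test is easy to state: over Z_p (with d + 2 ≤ p), f has degree at most d iff its (d+1)-st finite difference vanishes along every arithmetic progression, and the tester checks this at random offsets and steps. The sketch below is a simplification added for illustration; the paper's contribution is the robust analysis for functions that pass only with small probability δ.

```python
import random
from math import comb

def low_degree_test(f, d, p, trials=50, rng=random):
    """Exact-case univariate low-degree test over Z_p (d + 2 <= p).
    Each trial picks a random offset x and step t and checks that the
    (d+1)-st finite difference of f along that arithmetic progression
    is 0 mod p.  Degree-<=d polynomials always pass; higher-degree
    polynomials are rejected with high probability."""
    for _ in range(trials):
        x, t = rng.randrange(p), rng.randrange(1, p)
        acc = sum((-1) ** i * comb(d + 1, i) * f((x + i * t) % p)
                  for i in range(d + 2)) % p
        if acc != 0:
            return False
    return True
```

For instance, any quadratic mod p passes with d = 2, while f(x) = x³ fails: its third finite difference along a step-t progression is a nonzero multiple of t³.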
Software reliability via run-time result-checking
 J. ACM
, 1997
Abstract

Cited by 121 (2 self)
We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and enhanced reliability. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: i.e., that they should be allowed to generate, preprocess, and store random bits prior to run-time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (for example, a Fourier Transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored
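A concrete example of the result-checking this abstract reviews is Freivalds' checker for matrix multiplication, which verifies a claimed product far faster than recomputing it. This is the textbook checker, added here for illustration; it is not the paper's Fourier-transform case study with stored randomness.

```python
import random

def freivalds_check(A, B, C, trials=20, rng=random):
    """Freivalds' result-checker: verify a claimed product C = A*B of
    n x n matrices in O(n^2) time per trial by testing A(Br) == Cr for
    random 0/1 vectors r.  A correct C always passes; an incorrect C
    escapes each independent trial with probability at most 1/2."""
    n = len(A)
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # caught an incorrect product
    return True
```

The checker never recomputes A*B itself, so it stays asymptotically cheaper than the computation it validates, which is the defining property of a result-checker.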