Results 1–10 of 16
Software Reliability via Run-Time Result-Checking
 JOURNAL OF THE ACM
, 1994
Abstract

Cited by 101 (2 self)
We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and reliable functionality. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: i.e., that they should be allowed to generate, preprocess, and store random bits prior to run-time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (for example, a Fourier Transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored randomness is allowed.
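For the simplest instance of the checkers this abstract discusses, a program claiming to compute a real-number linear transformation, correctness can be probed through the linearity identity f(x + y) = f(x) + f(y) at random points. The sketch below is only illustrative: the function names, trial count, and tolerance are assumptions, not the paper's stored-randomness construction.

```python
import random

def linearity_check(P, dim, trials=20, tol=1e-6):
    # Randomized checker for a program P that claims to compute a
    # linear map on real vectors: linearity forces
    # P(x + y) == P(x) + P(y), so a violation at any sampled pair
    # exposes a faulty P.  (Illustrative sketch, not the paper's
    # stored-randomness checker.)
    for _ in range(trials):
        x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        y = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        left = P([a + b for a, b in zip(x, y)])
        right = [a + b for a, b in zip(P(x), P(y))]
        if any(abs(l - r) > tol for l, r in zip(left, right)):
            return False  # caught a non-linear response
    return True  # consistent with linearity on all sampled pairs
```

A correct doubling map passes, while a program that squares its coordinates is rejected with overwhelming probability; the paper's stored-randomness variant would draw and preprocess the random points once, before run time.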
Learning polynomials with queries: The highly noisy case
, 1995
Abstract

Cited by 87 (18 self)
Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree-d polynomials which agree with f on a tiny but non-negligible fraction, δ, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1/δ and n, and exponential in d, provided δ is Ω(√(d/|F|)). For the special case when d = 1, we solve this problem for all ε = δ − 1/|F| > 0. In this case the running time of our algorithm is bounded by a polynomial in 1/ε and n. Our algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this task for the case when F = GF(2) (and d = 1).

The task is related to the model of agnostic learning studied by Kearns et al. [21] (see also [27, 28, 22]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is to come up with a simple explanation which best fits the examples. Therefore the best explanation may account for only part of the phenomena. In some situations, when the phenomena appear very irregular, providing an explanation which fits only part of them is better than nothing. Interestingly, Kearns et al. did not consider the use of queries (but rather examples drawn from an arbitrary distribution) as they were skeptical that queries could be of any help. We show that queries do seem to help.
Linearity testing in characteristic two
 IEEE Transactions on Information Theory
, 1996
Abstract

Cited by 57 (7 self)
The case we are interested in is when the underlying groups are G = GF(2)^n and H = GF(2). In this case the collection of linear functions describes a Hadamard code of block length 2^n, and for an arbitrary function f mapping GF(2)^n to GF(2) the distance Dist(f) measures its distance to the Hadamard code (normalized so as to be a real number between 0 and 1). The quantity Err(f) is a parameter that is "easy to measure", and linearity testing studies the relationship of this parameter to the distance of f. The code and corresponding test are used in the construction of efficient probabilistically checkable proofs and thence in the derivation of hardness of approximation results. In this context, improved analyses translate into better non-approximability results. However, while several analyses of the relation of Err(f) to Dist(f) are known, none is tight.
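The parameter Err(f) here is the rejection probability of the Blum-Luby-Rubinfeld linearity test: pick random x and y and check f(x) + f(y) = f(x + y) over GF(2). A small empirical sketch of that test (the bit-vector encoding and sample count are illustrative assumptions):

```python
import random

def blr_reject_rate(f, n, samples=2000):
    # Empirical estimate of Err(f) for f: GF(2)^n -> GF(2), i.e.
    # Pr[f(x) XOR f(y) != f(x XOR y)] over uniformly random x, y.
    # Vectors in GF(2)^n are encoded as n-bit integers, so group
    # addition is bitwise XOR.
    rejects = 0
    for _ in range(samples):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            rejects += 1
    return rejects / samples
```

A linear function (a parity of a fixed subset of the bits, i.e. a Hadamard codeword) is never rejected, so its estimated Err is exactly 0, while a non-linear function such as the AND of two bits is rejected a constant fraction of the time.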
On the Robustness of Functional Equations
 SIAM Journal on Computing
, 1994
Abstract

Cited by 22 (2 self)
In this paper, we study the general question of how characteristics of functional equations influence whether or not they are robust. We isolate examples of properties which are necessary for the functional equations to be robust. On the other hand, we show other properties which are sufficient for robustness. We then study a general class of functional equations, which are of the form ∀x, y: F[f(x − y), f(x + y), f(x), f(y)] = 0, where F is an algebraic function. We give conditions on such functional equations that imply robustness. Our results have applications to the area of self-testing/correcting programs. We show that self-testers and self-correctors can be found for many functions satisfying robust functional equations, including algebraic functions of trigonometric functions such as tan x, 1/(1 + cot x), A^x/(1 − A^x), and cosh x.
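As a concrete instance of the self-testers this abstract discusses, the addition theorem tan(x + y) = (tan x + tan y)/(1 − tan x · tan y) is a functional equation that a program claiming to compute tan can be tested against at random points. A minimal sketch, assuming a small sampling interval and tolerance (both arbitrary choices, with no attempt at the paper's robustness analysis):

```python
import math
import random

def tangent_self_test(P, trials=50, tol=1e-9):
    # Self-test a program P that claims to compute tan, using the
    # addition theorem tan(x + y) = (tan x + tan y) / (1 - tan x * tan y)
    # at random points.  Inputs are kept small so the denominator
    # stays far from zero.  (Illustrative sketch only.)
    for _ in range(trials):
        x = random.uniform(-0.5, 0.5)
        y = random.uniform(-0.5, 0.5)
        lhs = P(x + y)
        rhs = (P(x) + P(y)) / (1.0 - P(x) * P(y))
        if abs(lhs - rhs) > tol:
            return False  # P violates the addition theorem
    return True
```

For example, math.tan passes, while a program computing sin instead violates the identity and is rejected.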
Approximate Checking of Polynomials and Functional Equations
 PROC. 37TH FOUNDATIONS OF COMPUTER SCIENCE
, 1997
Abstract

Cited by 13 (3 self)
In this paper, we show how to check programs that compute polynomials and functions defined by addition theorems, in the realistic setting where the output of the program is approximate instead of exact. We present results showing how to perform approximate checking, self-testing, and self-correcting of polynomials, settling in the affirmative a question raised by [GLR+91, RS92, RS96]. We then show how to perform approximate checking, self-testing, and self-correcting for those functions that satisfy addition theorems, settling a question raised by [Rub94]. In both cases, we show that the properties used to test programs for these functions are both robust (in the approximate sense) and stable. Finally, we explore the use of reductions between functional equations in the context of approximate self-testing. Our results have implications for the stability theory of functional equations.
Communication Complexity and Secure Function Evaluation
, 2001
Abstract

Cited by 13 (1 self)
A secure function evaluation protocol allows two parties to jointly compute a function f(x, y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: "any function f that can be computed using polynomial resources can be computed securely using polynomial resources" (where 'resources' refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f. Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f. For the design of efficient secure protocols we suggest two new methodologies that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of f. We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires' problem", where two partici...
Testing Multivariate Linear Functions: Overcoming the Generator Bottleneck
 Proc. 27th STOC
, 1994
Abstract

Cited by 10 (1 self)
The problem of testing program correctness has received considerable attention in computer science. One approach to this problem is the notion of self-testing programs [BLR90]. Self-testing usually becomes more costly in the case of testing multivariate functions. In this paper we present efficient methods for self-testing multivariate linear functions. We then apply these methods to several multivariate linear problems to construct efficient self-testers.
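A companion to self-testing in the [BLR90] framework cited above is self-correction: since a linear f satisfies f(x) = f(x + r) − f(r) for any shift r, a program that errs on a small fraction of inputs can be queried at random shifted points and the votes combined. A sketch under those assumptions (the vote count and input distribution are illustrative, not the paper's parameters):

```python
import random
from statistics import median

def self_correct(P, x, votes=15):
    # Self-corrector sketch for a program P meant to compute a
    # real-valued multivariate linear function f: by linearity
    # f(x) = f(x + r) - f(r), so each vote evaluates P at two
    # random points, and the median vote is correct whenever P
    # errs on only a small fraction of inputs.
    dim = len(x)
    vals = []
    for _ in range(votes):
        r = [random.uniform(-100.0, 100.0) for _ in range(dim)]
        shifted = [a + b for a, b in zip(x, r)]
        vals.append(P(shifted) - P(r))
    return median(vals)
```

For example, a program computing f(x) = 2·x[0] + 3·x[1] everywhere except on a thin faulty strip still yields the right value at points inside the strip, because most random shifts land outside it.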
Spot-Checkers
, 1998
Abstract

Cited by 8 (1 self)
On Labor Day weekend, the highway patrol sets up spot-checks at random points on the freeways with the intention of deterring a large fraction of motorists from driving incorrectly. We explore a very similar idea in the context of program checking to ascertain with minimal overhead that a program output is reasonably correct. Our model of spot-checking requires that the spot-checker must run asymptotically much faster than the combined length of the input and output. We then show that the spot-checking model can be applied to problems in a wide range of areas, including problems regarding graphs, sets, and algebra. In particular, we present spot-checkers for sorting, convex hull, element distinctness, set containment, set equality, total orders, and correctness of group and field operations. All of our spot-checkers are very simple to state and rely on testing that the input and/or output have certain simple properties that depend on very few bits. Our results also give propert...
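The sorting spot-checker mentioned above can be sketched with the standard sublinear idea: for a random index i, a binary search for a[i] must terminate successfully, which inspects only O(log n) entries per trial yet fails for many indices when the array is far from sorted. An illustrative sketch (the trial count is an assumption, and the full spot-checker also verifies the output against the input):

```python
import random

def sorted_spot_check(a, trials=30):
    # Spot-check that list a is close to sorted while reading only
    # O(trials * log n) of its entries: for a random index i, a
    # binary search for the value a[i] must succeed.  On an array
    # far from sorted the search goes astray for many indices i,
    # so some trial rejects with high probability.
    n = len(a)
    for _ in range(trials):
        i = random.randrange(n)
        lo, hi = 0, n - 1
        found = False
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == a[i]:
                found = True
                break
            elif a[mid] < a[i]:
                lo = mid + 1
            else:
                hi = mid - 1
        if not found:
            return False  # binary search failed: a is not sorted
    return True
```

A sorted array always passes, while a reversed one is rejected almost immediately, since in a descending array binary search can only locate the single value it probes first.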
Testing and Weight Distributions of Dual Codes
 Theoretical Computer Science
, 1997
Abstract

Cited by 5 (0 self)
We study the testing problem, that is, the problem of determining (maybe probabilistically) if a function to which we have oracle access satisfies a given property. We propose a framework in which to formulate and carry out the analyses of several known tests. This framework establishes a connection between testing and the theory of weight distributions of dual codes. We illustrate this connection by giving a coding-theoretic interpretation of several tests that fall under the label of low-degree tests. We also show how the coding-theoretic connection we establish naturally suggests a new way of testing for linearity over finite fields. There are two important parameters associated to every test. The first one is the test's probability of rejecting the claim that the function to which it has oracle access satisfies a given property. The second one is the distance from the oracle function to any function that satisfies the property of interest. The goal when analyzing tests is to explai...