Results 1–10 of 17
Software Reliability via Run-Time Result-Checking
Journal of the ACM, 1994
"... We review the field of resultchecking, discussing simple checkers and selfcorrectors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and reliable functionality. We consider how to modify traditional checking methodologies to make them more ..."
Abstract

Cited by 101 (2 self)
We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and reliable functionality. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: i.e., that they should be allowed to generate, preprocess, and store random bits prior to run time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (for example, a Fourier Transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored randomness is allowed.
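The stored-randomness idea above can be sketched in a few lines. This is a hypothetical illustration (all names are invented): for a linear map y = A @ x, the vectors r and s = rᵀA are drawn and precomputed before run time, after which each run-time check costs only two dot products, since r · (A @ x) must equal (rᵀA) · x.

```python
import random

# Hypothetical sketch of a "stored randomness" checker for a linear map
# y = A @ x.  Random vectors r and the products s = r^T A are generated
# and stored ahead of run time; each run-time check is then just two dot
# products, exploiting r . (A @ x) == (r^T A) . x.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def make_checker(A, trials=3, rng=random):
    m, n = len(A), len(A[0])
    stored = []
    for _ in range(trials):
        # Positive random integers keep this integer example exact and
        # deterministic; the paper's checkers draw random bits instead.
        r = [rng.randint(1, 1 << 30) for _ in range(m)]
        s = [dot(r, [A[i][j] for i in range(m)]) for j in range(n)]  # s = r^T A
        stored.append((r, s))

    def check(x, y):
        # Accept y as a claimed value of A @ x only if every stored pair agrees.
        return all(dot(r, y) == dot(s, x) for r, s in stored)

    return check
```

A real-number version would compare within a tolerance rather than exactly, which is precisely the modification the paper studies.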
A fast randomized algorithm for the approximation of matrices
2007
"... We introduce a randomized procedure that, given an m×n matrix A and a positive integer k, approximates A with a matrix Z of rank k. The algorithm relies on applying a structured l × m random matrix R to each column of A, where l is an integer near to, but greater than, k. The structure of R allows u ..."
Abstract

Cited by 28 (6 self)
We introduce a randomized procedure that, given an m × n matrix A and a positive integer k, approximates A with a matrix Z of rank k. The algorithm relies on applying a structured l × m random matrix R to each column of A, where l is an integer near to, but greater than, k. The structure of R allows us to apply it to an arbitrary m × 1 vector at a cost proportional to m log(l); the resulting procedure can construct a rank-k approximation Z from the entries of A at a cost proportional to mn log(k) + l^2 (m + n). We prove several bounds on the accuracy of the algorithm; one such bound guarantees that the spectral norm ‖A − Z‖ of the discrepancy between A and Z is of the same order as √(max{m, n}) times the (k+1)-st greatest singular value σ_{k+1} of A, with small probability of large deviations. In contrast, the classical pivoted "QR" decomposition algorithms (such as Gram-Schmidt or Householder) require at least kmn floating-point operations in order to compute a similarly accurate rank-k approximation. In practice, the algorithm of this paper is faster than the classical algorithms, as long as k is neither very small nor very large. Furthermore, the algorithm operates reliably independently of the structure of the matrix A, can access each column of A independently and at most twice, and parallelizes naturally. The results are illustrated via several numerical examples.
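The sketch-then-project structure of this algorithm can be illustrated in plain Python. Note the assumption: a dense Gaussian test matrix G stands in for the paper's structured transform R, so this toy version costs O(mnl) instead of O(mn log l), but the range-capture idea is the same.

```python
import random

# Illustrative sketch of randomized low-rank approximation.  A plain
# Gaussian test matrix G replaces the paper's structured random matrix R
# (an assumption made for brevity): the sketch Y = A @ G captures the
# dominant range of A, and Z = Q Q^T A projects A onto it.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def orthonormal_columns(Y, k, tol=1e-10):
    # Modified Gram-Schmidt over the columns of Y; keep at most k of them.
    basis = []
    for c in transpose(Y):
        for q in basis:
            proj = sum(a * b for a, b in zip(q, c))
            c = [ci - proj * qi for ci, qi in zip(c, q)]
        norm = sum(ci * ci for ci in c) ** 0.5
        if norm > tol:
            basis.append([ci / norm for ci in c])
        if len(basis) == k:
            break
    return transpose(basis)  # m x (at most k)

def rank_k_approx(A, k, oversample=2, rng=random):
    n, l = len(A[0]), k + oversample  # l slightly larger than k, as in the paper
    G = [[rng.gauss(0.0, 1.0) for _ in range(l)] for _ in range(n)]
    Q = orthonormal_columns(matmul(A, G), k)
    return matmul(Q, matmul(transpose(Q), A))  # Z = Q Q^T A
```

On a matrix that is exactly rank k, the projection recovers it (up to rounding), which makes a convenient sanity check.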
On the Robustness of Functional Equations
SIAM Journal on Computing, 1994
"... In this paper, we study the general question of how characteristics of functional equations influence whether or not they are robust. We isolate examples of properties which are necessary for the functional equations to be robust. On the other hand, we show other properties which are sufficient for ..."
Abstract

Cited by 22 (2 self)
In this paper, we study the general question of how characteristics of functional equations influence whether or not they are robust. We isolate examples of properties which are necessary for the functional equations to be robust. On the other hand, we show other properties which are sufficient for robustness. We then study a general class of functional equations, which are of the form ∀x, y: F[f(x − y), f(x + y), f(x), f(y)] = 0, where F is an algebraic function. We give conditions on such functional equations that imply robustness. Our results have applications to the area of self-testing/correcting programs. We show that self-testers and self-correctors can be found for many functions satisfying robust functional equations, including algebraic functions of trigonometric functions such as tan x, 1/(1 + cot x), Ax/(1 − Ax), and cosh x.
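A self-test built from a functional equation can be sketched concretely. The example below uses the tangent addition theorem as the tested property; the input range and tolerance are arbitrary choices made so every quantity stays far from the singularities, not values from the paper.

```python
import math
import random

# Sketch of a self-test derived from a functional equation: a program
# claiming to compute tan is probed on random pairs via the addition
# theorem tan(x + y) = (tan x + tan y) / (1 - tan x tan y).  Inputs are
# drawn from (0.1, 0.6) so x + y < pi/2 and the denominator stays
# bounded away from zero.

def tan_self_test(prog, trials=20, tol=1e-6, rng=random):
    for _ in range(trials):
        x, y = rng.uniform(0.1, 0.6), rng.uniform(0.1, 0.6)
        lhs = prog(x + y)
        rhs = (prog(x) + prog(y)) / (1.0 - prog(x) * prog(y))
        if abs(lhs - rhs) > tol:
            return False
    return True
```

A correct implementation satisfies the equation to rounding error, while even a small systematic perturbation of the program's output breaks it on essentially every probe.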
Reflections on the Pentium Division Bug
1995
"... We review the field of resultchecking and suggest that it be extended to a methodology for enforcing hardware/software reliability. We thereby formulate a vision for "selfmonitoring" hardware/software whose reliability is augmented through embedded suites of runtime correctness checkers. In parti ..."
Abstract

Cited by 22 (1 self)
We review the field of result-checking and suggest that it be extended to a methodology for enforcing hardware/software reliability. We thereby formulate a vision for "self-monitoring" hardware/software whose reliability is augmented through embedded suites of run-time correctness checkers. In particular, we suggest that embedded checkers and correctors may be employed to safeguard against arithmetic errors such as that which has bedeviled the Intel Pentium Microprocessor. We specify checkers and correctors suitable for monitoring the multiplication and division functionalities of an arbitrary arithmetic processor and seamlessly correcting erroneous output which may occur for any reason during the lifetime of the chip.
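The checker-plus-corrector pattern for division is simple to sketch. This is an illustrative stand-in, not the paper's construction: the claimed quotient is verified by multiplying back, and a suspect result is recomputed by a trusted fallback; the tolerance and the "buggy" divider are invented for the example.

```python
# Sketch of a run-time checker/corrector for a division unit: the claimed
# quotient q of a / b is checked by multiplying back (|q*b - a| should be
# tiny relative to |a|), and a failing result is recomputed by a slower
# reference path.  Tolerances and names are illustrative.

def checked_divide(div_fn, a, b, rel_tol=1e-12):
    q = div_fn(a, b)
    if abs(q * b - a) <= rel_tol * abs(a):
        return q
    # Corrector: fall back to a reference computation when the check fails.
    return a / b

def buggy_divide(a, b):
    # Stand-in for a flawed divider, loosely in the spirit of the FDIV bug.
    return (a / b) * (1.0 + 1e-4)
```

The check costs one multiplication per division, which is the kind of cheap embedded monitoring the paper advocates.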
Indexing Information for Data Forensics
2005
"... We introduce novel techniques for organizing the indexing structures of how data is stored so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked li ..."
Abstract

Cited by 15 (5 self)
We introduce novel techniques for organizing the indexing structures by which data is stored so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked lists, binary search trees, skip lists, and hash tables. Some of our constructions are based on a new reduced-randomness construction for non-adaptive combinatorial group testing.
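A toy version of the group-testing idea conveys the flavor. In this simplified sketch (not the paper's construction, which handles multiple changes and uses far less randomness), each array index joins group j when bit j of the index is set; if exactly one entry is later altered, the set of mismatched group digests spells out its index in binary.

```python
import hashlib

# Toy forensic index in the non-adaptive group-testing spirit: per-bit
# group digests are stored up front, and the pattern of mismatches later
# identifies a single altered entry.  Function names are illustrative.

def _digest(items):
    h = hashlib.sha256()
    for it in items:
        h.update(repr(it).encode())
        h.update(b"\x00")
    return h.hexdigest()

def fingerprint(data):
    bits = max(1, (len(data) - 1).bit_length())
    return {
        "all": _digest(data),
        "groups": [_digest([x for i, x in enumerate(data) if (i >> j) & 1])
                   for j in range(bits)],
    }

def locate_change(data, fp):
    # Returns None if nothing changed, else the index of the single change.
    if _digest(data) == fp["all"]:
        return None
    idx = 0
    for j, saved in enumerate(fp["groups"]):
        cur = _digest([x for i, x in enumerate(data) if (i >> j) & 1])
        if cur != saved:
            idx |= 1 << j
    return idx
```

Only O(log n) digests are stored, yet any single altered value is pinpointed exactly.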
Approximate Checking of Polynomials and Functional Equations
Proc. 37th Foundations of Computer Science, 1997
"... In this paper, we show how to check programs that compute polynomials and functions defined by addition theorems  in the realistic setting where the output of the program is approximate instead of exact. We present results showing how to perform approximate checking, selftesting, and selfcorrec ..."
Abstract

Cited by 13 (3 self)
In this paper, we show how to check programs that compute polynomials and functions defined by addition theorems, in the realistic setting where the output of the program is approximate instead of exact. We present results showing how to perform approximate checking, self-testing, and self-correcting of polynomials, settling in the affirmative a question raised by [GLR+91, RS92, RS96]. We then show how to perform approximate checking, self-testing, and self-correcting for those functions that satisfy addition theorems, settling a question raised by [Rub94]. In both cases, we show that the properties used to test programs for these functions are both robust (in the approximate sense) and stable. Finally, we explore the use of reductions between functional equations in the context of approximate self-testing. Our results have implications for the stability theory of functional equations.
Self-Testing Without the Generator Bottleneck
SIAM J. on Computing, 1995
"... Suppose P is a program designed to compute a function f defined on a group G. The task of selftesting P , that is, testing if P computes f correctly on most inputs, usually involves testing explicitly if P computes f correctly on every generator of G. In the case of multivariate functions, the numb ..."
Abstract

Cited by 8 (2 self)
Suppose P is a program designed to compute a function f defined on a group G. The task of self-testing P, that is, testing if P computes f correctly on most inputs, usually involves testing explicitly if P computes f correctly on every generator of G. In the case of multivariate functions, the number of generators, and hence the number of such tests, becomes prohibitively large. We refer to this problem as the generator bottleneck. We develop a technique that can be used to overcome the generator bottleneck for functions that have a certain nice structure, specifically if the relationship between the values of the function on the set of generators is easily checkable. Using our technique, we build the first efficient self-testers for many linear, multilinear, and some nonlinear functions, including the FFT and various polynomial functions. All of the self-testers we present make only O(1) calls to the program that is being tested. As a consequence of our techniques, we also obtain efficient program result-checkers for all these problems.
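A classic example of a self-test that makes O(1) calls rather than one call per generator is the Blum-Luby-Rubinfeld linearity test, sketched here over Z_p. (This is a standard illustration of the constant-query idea, not this paper's specific construction.)

```python
import random

# Sketch of a constant-query self-test: the Blum-Luby-Rubinfeld linearity
# test probes P(x) + P(y) == P(x + y) (mod p) on a handful of random
# pairs.  The number of calls to P is a constant, independent of how many
# generators the domain has.

def linearity_self_test(P, p, trials=20, rng=random):
    for _ in range(trials):
        x, y = rng.randrange(p), rng.randrange(p)
        if (P(x) + P(y)) % p != P((x + y) % p) % p:
            return False
    return True
```

A genuinely linear program passes every probe, while a program with an additive defect fails on essentially every random pair.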
Exact and approximate testing/correcting of algebraic functions: A survey
Electronic Colloq. on Comp. Compl., Univ. of Trier, TR2001014, 2001
"... In the late 80’s Blum, Luby, Rubinfeld, Kannan et al. pioneered the theory of self–testing as an alternative way of dealing with the problem of software reliability. Over the last decade this theory played a crucial role in the construction of probabilistically checkable proofs and the derivation ..."
Abstract

Cited by 7 (2 self)
In the late 80's Blum, Luby, Rubinfeld, Kannan et al. pioneered the theory of self-testing as an alternative way of dealing with the problem of software reliability. Over the last decade this theory played a crucial role in the construction of probabilistically checkable proofs and the derivation of hardness-of-approximation results. Applications in areas like computer vision, machine learning, and self-correcting programs were also established. In the self-testing problem one is interested in determining (maybe probabilistically) whether a function to which one has oracle access satisfies a given property. We consider the problem of testing algebraic functions and survey over a decade of research in the area. Special emphasis is given to illustrating the scenario where the problem takes place and the main techniques used in the analysis of tests. A novel aspect of this work is the separation it advocates between the mathematical and algorithmic issues that arise in the theory of self-testing.
An Empirical Comparison between Direct and Indirect Test Result Checking Approaches
Proceedings of the Third International Workshop on Software Quality Assurance (SOQUA 2006), in conjunction with the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT 2006/FSE-14), 2006
"... An oracle in software testing is a mechanism for checking whether the system under test has behaved correctly for any executions. In some situations, oracles are unavailable or too expensive to apply. This is known as the oracle problem. It is crucial to develop techniques to address it, and metamor ..."
Abstract

Cited by 5 (3 self)
An oracle in software testing is a mechanism for checking whether the system under test has behaved correctly on a given execution. In some situations, oracles are unavailable or too expensive to apply; this is known as the oracle problem. It is crucial to develop techniques to address it, and metamorphic testing (MT) is one such proposal. This paper conducts a controlled experiment to investigate the cost-effectiveness of using MT by 38 testers on three open-source programs. The fault-detection capability and time cost of MT are compared with the popular assertion checking method. Our results show that MT is cost-efficient and has the potential to detect more faults than the assertion checking method.
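A minimal metamorphic test makes the idea concrete. This illustration is not from the study: without knowing the true value of sin(x), we can still require the metamorphic relation sin(π − x) = sin(x) to hold, and reject an implementation that breaks it; names and tolerances are invented.

```python
import math
import random

# Illustration of metamorphic testing: the relation sin(pi - x) == sin(x)
# is checked on random inputs, so no oracle value for sin(x) is needed.
# The tolerance is an arbitrary illustrative choice.

def mt_check_sine(f, trials=10, tol=1e-9, rng=random):
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0)
        if abs(f(math.pi - x) - f(x)) > tol:
            return False
    return True
```

The relation plays the role that an explicit assertion on the output value would otherwise play, which is exactly the indirect-versus-direct comparison the paper studies.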
Multilinearity self-testing with relative error
In Proc. 17th STACS, LNCS 1770, 2000
"... Abstract. We investigate selftesting programs with relative error by allowing error terms proportional to the function to be computed. Until now, in numerical computation, error terms were assumed to be either constant or proportional to the pth power of the magnitude of the input, for p ∈ [0, 1). ..."
Abstract

Cited by 3 (1 self)
We investigate self-testing programs with relative error by allowing error terms proportional to the function to be computed. Until now, in numerical computation, error terms were assumed to be either constant or proportional to the p-th power of the magnitude of the input, for p ∈ [0, 1). We construct new self-testers with relative error for real-valued multilinear functions defined over finite rational domains. The existence of such self-testers positively solves an open question in [KMS99]. Moreover, our self-testers are very efficient: they use few queries and simple operations.
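The relative-error flavor of such a test can be sketched for a bilinear function. This is a hypothetical illustration, not the paper's tester: additivity in the first argument is required only up to an error proportional to the magnitudes of the values involved, rather than up to a fixed absolute bound.

```python
import random

# Sketch of a self-test with *relative* error for a bilinear f(x, y):
# the linearity property f(x + x2, y) == f(x, y) + f(x2, y) is checked
# within a tolerance that scales with the values themselves.  The
# parameter choices are illustrative.

def bilinear_self_test(f, trials=20, theta=1e-9, rng=random):
    for _ in range(trials):
        x, x2, y = (rng.uniform(1.0, 2.0) for _ in range(3))
        lhs, rhs = f(x + x2, y), f(x, y) + f(x2, y)
        scale = max(abs(lhs), abs(rhs), 1.0)
        if abs(lhs - rhs) > theta * scale:
            return False
    return True
```

A genuinely bilinear program passes to rounding error, while an additive defect produces a residual far larger than the relative tolerance.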