Results 1 - 10 of 27
Software Reliability via Run-Time Result-Checking
 JOURNAL OF THE ACM
, 1994
Abstract

Cited by 114 (2 self)
We review the field of result-checking, discussing simple checkers and self-correctors. We argue that such checkers could profitably be incorporated in software as an aid to efficient debugging and reliable functionality. We consider how to modify traditional checking methodologies to make them more appropriate for use in real-time, real-number computer systems. In particular, we suggest that checkers should be allowed to use stored randomness: i.e., that they should be allowed to generate, preprocess, and store random bits prior to run time, and then to use this information repeatedly in a series of run-time checks. In a case study of checking a general real-number linear transformation (for example, a Fourier Transform), we present a simple checker which uses stored randomness, and a self-corrector which is particularly efficient if stored randomness is allowed.
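The checker-with-stored-randomness idea can be made concrete. Below is a minimal sketch of our own (not the paper's exact construction): to check a claimed product y = Ax for a fixed n-by-n transformation A, the checker draws a random vector r and precomputes z = r^T A before run time; each run-time check then verifies r^T y = z^T x in O(n) operations rather than recomputing Ax in O(n^2). Working over a prime field keeps the error bound clean (an incorrect y passes with probability at most 1/P):

```python
import random

P = 2_147_483_647  # prime field size; failure probability per check <= 1/P

def make_checker(A):
    """Offline phase: draw and store randomness once, before run time.

    Stores r and z = r^T A, so each later check is O(n) instead of O(n^2).
    """
    n = len(A)
    r = [random.randrange(P) for _ in range(n)]
    # z[j] = sum_i r[i] * A[i][j]  (mod P)
    z = [sum(r[i] * A[i][j] for i in range(n)) % P for j in range(n)]

    def check(x, y):
        """Accept claimed y = A x iff r^T y == (r^T A) x (mod P)."""
        lhs = sum(r[i] * y[i] for i in range(n)) % P
        rhs = sum(z[j] * x[j] for j in range(n)) % P
        return lhs == rhs

    return check

# toy use: a 3x3 integer transform
A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
check = make_checker(A)
x = [5, 7, 9]
y = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]  # correct output
print(check(x, y))            # correct result accepted
bad = list(y)
bad[0] += 1
print(check(x, bad))          # corrupted result rejected (w.h.p.)
```

For the real-number transforms of the paper's case study, the same one-sided test would use an approximate comparison with an error tolerance in place of exact field equality.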
Checking the Correctness of Memories
 Algorithmica
, 1995
Abstract

Cited by 101 (12 self)
We extend the notion of program checking to include programs which alter their environment. In particular, we consider programs which store and retrieve data from memory. The model we consider allows the checker a small amount of reliable memory. The checker is presented with a sequence of requests (online) to a data structure which must reside in a large but unreliable memory. We view the data structure as being controlled by an adversary. We want the checker to perform each operation in the input sequence using its reliable memory and the unreliable data structure so that any error in the operation of the structure will be detected by the checker with high probability. We present checkers for various data structures. We prove lower bounds of log n on the amount of reliable memory needed by these checkers where n is the size of the structure. The lower bounds are information theoretic and apply under various assumptions. We also show time-space tradeoffs for checking random access memories as a generalization of those for coherent functions.
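As a concrete illustration (a simplified sketch under our own assumptions, not the paper's exact construction), here is an offline checker in this spirit: the reliable memory holds only a secret key, a clock, and two multiset fingerprints, and tampering with the unreliable memory is caught, with high probability, during a final audit scan.

```python
import hashlib
import os

class OfflineMemoryChecker:
    """Sketch of an offline memory checker. Reliable state is O(1): a secret
    key, two multiset fingerprints, and a clock. Invariant: every
    (addr, value, timestamp) triple written to the unreliable memory is later
    read back exactly once, so after a final scan the fingerprint of writes
    must equal the fingerprint of reads."""
    MOD = 1 << 128

    def __init__(self, n):
        self.n = n
        self.key = os.urandom(16)     # reliable and secret
        self.w_fp = self.r_fp = 0     # fingerprints of writes / reads
        self.clock = 0
        self.mem = [None] * n         # UNRELIABLE: holds (value, timestamp)
        for a in range(n):
            self._log_write(a, 0)     # logged initialization

    def _h(self, addr, val, t):
        d = hashlib.blake2b(repr((addr, val, t)).encode(),
                            key=self.key, digest_size=16).digest()
        return int.from_bytes(d, "big")

    def _log_read(self, addr):
        v, t = self.mem[addr]                                   # unreliable read
        self.r_fp = (self.r_fp + self._h(addr, v, t)) % self.MOD
        return v

    def _log_write(self, addr, v):
        self.clock += 1
        self.mem[addr] = (v, self.clock)                        # unreliable write
        self.w_fp = (self.w_fp + self._h(addr, v, self.clock)) % self.MOD

    def write(self, addr, val):
        self._log_read(addr)          # consume the cell's previous write
        self._log_write(addr, val)

    def read(self, addr):
        v = self._log_read(addr)
        self._log_write(addr, v)      # re-issue so the cell stays "unread"
        return v

    def audit(self):
        """Final scan: read every cell once more, then compare fingerprints."""
        for a in range(self.n):
            self._log_read(a)
        return self.w_fp == self.r_fp

ok = OfflineMemoryChecker(8)
ok.write(3, 42)
assert ok.read(3) == 42
honest_ok = ok.audit()
print(honest_ok)                      # True: the memory behaved

bad = OfflineMemoryChecker(8)
bad.write(3, 42)
v, t = bad.mem[3]
bad.mem[3] = (7, t)                   # adversary silently corrupts the cell
bad.read(3)
tampered_ok = bad.audit()
print(tampered_ok)                    # False: tampering detected at audit time
```

Note the offline character: the bogus value is returned to the caller at read time, and the error surfaces only when audit() compares fingerprints. The keyed hash here stands in for the adversary-resistant fingerprints a real construction would analyze rigorously.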
The complexity of decision versus search
 SIAM Journal on Computing
, 1994
Abstract

Cited by 33 (1 self)
A basic question about NP is whether or not search reduces in polynomial time to decision. We indicate that the answer is negative: under a complexity assumption (that deterministic and nondeterministic double-exponential time are unequal) we construct a language in NP for which search does not reduce to decision. These ideas extend in a natural way to interactive proofs and program checking. Under similar assumptions we present languages in NP for which it is harder to prove membership interactively than it is to decide this membership, and languages in NP which are not checkable. Keywords: NP-completeness, self-reducibility, interactive proofs, program checking, sparse sets,
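For contrast, the direction that does hold: for NP-complete sets such as SAT, search reduces to decision by self-reducibility, which is exactly the property the paper's candidate languages lack. A sketch of that standard reduction, with a brute-force stand-in for the decision oracle:

```python
from itertools import product

def decide_sat(clauses, n):
    """Toy decision oracle: is the n-variable CNF satisfiable?
    clauses is a list of lists of nonzero ints; literal k means x_|k|,
    negated when k < 0."""
    return any(all(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
                   for c in clauses)
               for bits in product([False, True], repeat=n))

def search_via_decision(clauses, n, decide=decide_sat):
    """The standard self-reduction: fix variables one at a time, asking the
    decision oracle whether the restricted formula remains satisfiable."""
    if not decide(clauses, n):
        return None
    assignment = {}
    for var in range(1, n + 1):
        for val in (False, True):
            lit = var if val else -var
            restricted, feasible = [], True
            for c in clauses:
                if lit in c:              # clause satisfied: drop it
                    continue
                rc = [l for l in c if l != -lit]
                if not rc:                # clause falsified under this choice
                    feasible = False
                    break
                restricted.append(rc)
            if feasible and decide(restricted, n):
                assignment[var] = val
                clauses = restricted
                break
        else:
            return None                   # cannot happen with a correct oracle
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x3 or not x2)
phi = [[1, 2], [-1, 3], [-3, -2]]
print(search_via_decision(phi, 3))        # a satisfying assignment
```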
Witness-Based Cryptographic Program Checking and Robust Function Sharing
, 1996
Abstract

Cited by 26 (4 self)
We suggest a new methodology for "result checking" that enables us to extend the notion of Blum's program result checking to the online checking of cryptographic functions. In our model, the checker not only needs to be assured of the correctness of the result but the owner of the program needs to be sure not to give away anything but the requested result on the (authorized) input. The existing approaches for program result checking of numerical problems often ask the program a number of extra queries (different from the actual input). In the case of cryptographic functions, this may be in contradiction with the security requirement of the program owner. Additional queries, in fact, may be used to gain unauthorized advantage (for example, imagine the implications of the online checking of a decryption device that requires the decryption of extra ciphertexts). In [Blum88], the notion of a simple checker was introduced where, for the purpose of efficiency, extra queries are not allowed...
Open Questions, Talk Abstracts, and Summary of Discussions
, 1991
Abstract

Cited by 19 (0 self)
Joan Feigenbaum and Michael Merritt, AT&T Bell Laboratories, Murray Hill, NJ 07974. The DIMACS Workshop on Distributed Computing and Cryptography was held at the Nassau Inn in Princeton, New Jersey, on October 4, 5, and 6, 1989. Participants took a critical look at the results, choice of problems, guiding philosophies, research methodology, and engineering projects that currently absorb much of the effort of people working in "cryptography" and "computer system security." This report summarizes both the formal presentations and the informal discussions that took place. Section 1 contains our account of the group discussions and statements of open questions, both general and specific, that we think are important. This report on the workshop is based on our recollections, our notes, and notes taken by the graduate-student participants; we assume responsibility for any inaccuracies in our account. Section 2 contains abstracts of the talks presented at the worksh...
Tradeoffs Between Communication Throughput and Parallel Time
, 1994
Abstract

Cited by 17 (1 self)
We study the effect of limited communication throughput on parallel computation in a setting where the number of processors is much smaller than the length of the input. Our model has p processors that communicate through a shared memory of size m. The input has size n, and can be read directly by all the processors. We will be primarily interested in studying cases where n ≫ p ≫ m. As a test case we study the list reversal problem. For this problem we prove a time lower bound of Ω(n/√(mp)). (A similar lower bound holds also for the problems of sorting, finding all unique elements, convolution, and universal hashing.) This result shows that limiting the communication (i.e., small m) has a significant effect on parallel computation. We show an almost matching upper bound of O((n/√(mp)) log^O(1) n). The upper bound requires the development of a few interesting techniques which can alleviate the limited communication in some...
Rigorous Time/Space Tradeoffs for Inverting Functions
 SIAM Journal on Computing
, 2000
Abstract

Cited by 15 (1 self)
We provide rigorous time/space tradeoffs for inverting any function. Given a function f with domain of size N, we give a time/space tradeoff of TS² = N³ q(f), where q(f) is the probability that two random elements (taken with replacement) are mapped to the same image under f. We also give a more general tradeoff, TS³ = N³, that can invert any function at any point.
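The constructions behind such tradeoffs store chains of iterates of f and recompute them at inversion time, in the style of the heuristic Hellman tables that this work makes rigorous. The following toy single-table sketch (the function and parameters are ours, and this simple variant carries no rigorous guarantee) shows where the time and the space go: space is the stored endpoint table, time is the chain walking per query.

```python
import random

N = 1 << 14
def f(x):             # toy function to invert; many collisions => large q(f)
    return (x * x + 12345) % N

CHAIN_LEN = 128       # roughly the time spent per lookup
NUM_CHAINS = 512      # roughly the space: stored (endpoint -> start) pairs

random.seed(0)
table = {}
for _ in range(NUM_CHAINS):
    start = x = random.randrange(N)
    for _ in range(CHAIN_LEN):
        x = f(x)
    table.setdefault(x, start)    # store only the chain's endpoint and start

def invert(y):
    """Search for some x with f(x) == y by walking forward from y; on hitting
    a stored endpoint, replay that chain from its start looking for a preimage."""
    z = y
    for _ in range(CHAIN_LEN):
        if z in table:            # y may lie on this chain (or be a false alarm)
            x = table[z]
            for _ in range(CHAIN_LEN):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)
    return None

hits = 0
trials = 100
for _ in range(trials):
    y = f(random.randrange(N))
    x = invert(y)
    if x is not None:
        assert f(x) == y          # any answer returned is a true preimage
        hits += 1
print(f"inverted {hits}/{trials} random images")
```

A function with many collisions (large q(f)) makes chains merge and coverage drop, which is why the achievable tradeoff degrades as q(f) grows; rigorous constructions use many rerandomized tables rather than this single one.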
On Coherence, Random-Self-Reducibility, and Self-Correction
 In Proc. 11th Conference on Computational Complexity
, 1997
Abstract

Cited by 12 (6 self)
We study three types of self-reducibility that are motivated by the theory of program verification. A set A is random-self-reducible if one can determine whether an input x is in A by making random queries to an A-oracle. The distribution of each query may depend only on the length of x. A set B is self-correctable over a distribution D if one can convert a program that is correct on most of the probability mass of D to a probabilistic program that is correct everywhere. A set C is coherent if one can determine whether an input x is in C by asking questions to an oracle for C − {x}. We first show that adaptive coherence is more powerful than nonadaptive coherence, even if the nonadaptive querier is nonuniform. Blum et al. [Blum, Luby and Rubinfeld, Journal of Computer and System Sciences, 59:549-595, 1993] showed that every random-self-reducible function is self-correctable. It is unknown, however, whether self-correctability implies random-self-reducibility. We show, under ...
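Self-correction, the notion the paper relates to random-self-reducibility, has a standard concrete instance in the style of Blum, Luby and Rubinfeld: if a program agrees with some linear function over Z_N on most inputs, the identity f(x) = f(x + r) - f(r) with random r recovers f everywhere, because both queries are individually uniform. A toy sketch (the function, error rate, and parameters are our own illustration):

```python
import random
from collections import Counter

N = 1_000_003                  # work over Z_N; the hidden function is linear
def true_f(x):
    return (7 * x) % N

FAULTY = set(random.Random(1).sample(range(N), N // 20))  # ~5% bad inputs
def program(x):
    """A buggy program for true_f: wrong on the FAULTY inputs, right elsewhere."""
    return (true_f(x) + 1) % N if x in FAULTY else true_f(x)

def self_correct(P, x, trials=25):
    """If P agrees with some linear f on most of Z_N, recover f(x) for EVERY x:
    f(x) = f(x + r) - f(r), and for random r each query is uniform, so a
    single trial errs with probability at most twice P's error rate."""
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(N)
        votes[(P((x + r) % N) - P(r)) % N] += 1
    return votes.most_common(1)[0][0]   # majority vote over independent trials

x = min(FAULTY)                             # an input where the program errs
raw_ok = program(x) == true_f(x)
corrected_ok = self_correct(program, x) == true_f(x)
print(raw_ok, corrected_ok)                 # False True
```

The converted program is probabilistic, as in the abstract's definition: it is correct on every input with probability governed by the number of trials, not correct deterministically.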
A Short History of Computational Complexity
 IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2002
Abstract

Cited by 11 (1 self)
This article cannot mention all of the amazing research in computational complexity theory. We survey various areas in complexity, choosing papers more for their historical value than for the importance of the results. We hope that this gives an insight into the richness and depth of this still quite young field.
Separating Complexity Classes using Autoreducibility
, 1998
Abstract

Cited by 8 (1 self)
A set is autoreducible if it can be reduced to itself by a Turing machine that does not ask its own input to the oracle. We use autoreducibility to separate the polynomial-time hierarchy from exponential space by showing that all Turing-complete sets for certain levels of the exponential-time hierarchy are autoreducible but there exists some Turing-complete set for doubly exponential space that is not. Although we already knew how to separate these classes using diagonalization, our proofs separate classes solely by showing they have different structural properties, thus applying Post's Program to complexity theory. We feel such techniques may prove unknown separations in the future. In particular, if we could settle the question as to whether all Turing-complete sets for doubly exponential time are autoreducible, we would separate either polynomial time from polynomial space, and nondeterministic logarithmic space from nondeterministic polynomial time, or else the polynomial...
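To make autoreducibility concrete: SAT itself is autoreducible, since a formula's satisfiability is determined by that of two strictly different formulas, obtained by fixing one variable each way. A sketch with a brute-force oracle standing in for the set's own oracle:

```python
from itertools import product

def sat_oracle(clauses, n):
    """Brute-force oracle for SAT (clauses are lists of nonzero ints;
    literal k means variable x_|k|, negated when k < 0)."""
    return any(all(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
                   for c in clauses)
               for bits in product([False, True], repeat=n))

def restrict(clauses, lit):
    """Fix the underlying variable so that `lit` becomes true: drop satisfied
    clauses, delete the falsified literal from the rest."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def sat_autoreduced(clauses, n):
    """Decide membership without ever querying the oracle on the input itself:
    phi is satisfiable iff phi[x1 := True] or phi[x1 := False] is, and both
    restricted formulas differ from phi, which is what autoreducibility asks."""
    assert n >= 1
    return (sat_oracle(restrict(clauses, 1), n)
            or sat_oracle(restrict(clauses, -1), n))

print(sat_autoreduced([[1, 2], [-1, 3]], 3))        # True: satisfiable
print(sat_autoreduced([[1, 2], [-1, 2], [-2]], 2))  # False: unsatisfiable
```

The paper's question is whether the analogous property holds for every Turing-complete set of the relevant classes, where no such syntactic self-reduction is available.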