Results 11 - 16 of 16
Efficient authenticated data structures for graph connectivity and geometric search problems. Algorithmica, 2010
Abstract

Cited by 3 (1 self)
Authenticated data structures provide cryptographic proofs that their answers are as accurate as the author intended, even if the data structure is being controlled by a remote untrusted host. In this paper we present efficient techniques for authenticating data structures that represent graphs and collections of geometric objects. We use a data-querying model where a data structure maintained by a trusted source is mirrored at distributed untrusted servers, called responders, with the responders answering queries made by users: when a user queries a responder, along with the answer to the issued query, he receives a cryptographic proof that allows the verification of the answer trusting only a short statement (digest) signed by the source. We introduce the path hash accumulator, a new primitive based on cryptographic hashing for efficiently authenticating various properties of structured data represented as paths, including any decomposable query over sequences of elements. We show how to employ our primitive to authenticate queries about properties of paths in graphs and search queries on multicatalogs. This allows the design of new, efficient authenticated data structures for fundamental problems on networks, such as path and connectivity queries over graphs, and complex queries on two-dimensional geometric objects, such as intersection and containment queries.
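The three-party model the abstract describes (source signs a digest, responder returns an answer plus a proof, user verifies against the digest alone) can be illustrated with a minimal Merkle-tree-style sketch. This is not the paper's path hash accumulator; the function names and tree structure here are illustrative assumptions:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Cryptographic hash used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def digest(values):
    """Source side: compute the short digest the source would sign."""
    nodes = [h(v) for v in values]
    while len(nodes) > 1:
        if len(nodes) % 2:                    # duplicate last node on odd levels
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def prove(values, index):
    """Responder side: sibling hashes along the leaf-to-root path."""
    nodes = [h(v) for v in values]
    proof = []
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        proof.append((nodes[index ^ 1], index % 2))  # (sibling, node-is-right-child)
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        index //= 2
    return proof

def verify(answer: bytes, proof, root: bytes) -> bool:
    """User side: recompute the root from the answer, trusting only the digest."""
    acc = h(answer)
    for sibling, is_right in proof:
        acc = h(sibling + acc) if is_right else h(acc + sibling)
    return acc == root
```

The user never contacts the source during verification; tampering by the responder with either the answer or the proof changes the recomputed root and is rejected.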
Checking and Certifying Computational Results, 1994
Abstract

Cited by 2 (0 self)
For many years, there has been tremendous interest in methods to make computation more reliable. In this thesis, we explore various techniques that can be implemented in software to help ensure the correctness of the output of a program. The basic tool we use is a generalization of the notion of a program checker called a certifier. A certifier is given intermediate computations from a program computing an answer in an effort to simplify the checking process. The certifier is constructed in such a way that even if the intermediate computations it is given are incorrect, the certifier will never accept an incorrect output. We have constructed certifiers and program checkers for several common abstract data types including mergeable priority queues and splittable priority queues. We have also constructed a certifier for an abstract data type that allows approximate nearest neighbor queries to be performed efficiently. We have implemented and experimentally evaluated some of these algorithms. In the parallel domain, we have developed both general and problem-specific techniques for certifying parallel computation. Lastly, we have formally proven correct a certifier for sorting, and have analyzed the advantages of using certifiers in conjunction with formal program verification techniques. This work forms a thesis presented by Jonathan D. Bright to the faculty of the Department of Computer Science, at the Johns Hopkins University, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, under the supervision of Professor Gregory F. Sullivan. Acknowledgements: I would like to thank my advisor, Gregory Sullivan, for giving me an excellent research topic for my thesis, and for vastly improving my writing skills during my stay at Hopkins. Also, ...
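The sorting certifier the abstract mentions can be sketched in miniature: an untrusted sorter emits its answer together with a witness permutation, and a simple O(n) checker accepts only correct outputs. This sketch checks just the final output, not intermediate computations as the thesis's certifiers do, and the names are ours:

```python
def certifying_sort(xs):
    """Untrusted sorter: returns the sorted answer plus a witness
    permutation perm satisfying ys[i] == xs[perm[i]]."""
    perm = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in perm], perm

def check_sorted(xs, ys, perm):
    """Certifier: accept only if perm is a genuine permutation, perm maps
    xs onto ys, and ys is non-decreasing. Runs in O(n) time and never
    accepts an incorrect output, even from a buggy sorter."""
    n = len(xs)
    if len(ys) != n or len(perm) != n or set(perm) != set(range(n)):
        return False
    if any(ys[i] != xs[perm[i]] for i in range(n)):
        return False
    return all(ys[i] <= ys[i + 1] for i in range(n - 1))
```

The checker is much simpler than sorting itself, which is the point: its correctness is far easier to establish (or formally verify) than the sorter's.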
Black-Box Correctness Tests for Basic Parallel Data Structures ∗
Abstract
Abstract. Operations on basic data structures such as queues, priority queues, stacks, and counters can dominate the execution time of a parallel program due to both their frequency and their coordination and contention overheads. There are considerable performance payoffs in developing highly optimized, asynchronous, distributed, cache-conscious, parallel implementations of such data structures. Such implementations may employ a variety of tricks to reduce latencies and avoid serial bottlenecks, as long as the semantics of the data structure are preserved. The complexity of the implementation and the difficulty in reasoning about asynchronous systems increases concerns regarding possible bugs in the implementation. In this paper we consider post-mortem, black-box procedures for testing whether a parallel data structure behaved correctly. We present the first systematic study of algorithms and hardness results for such testing procedures, focusing on queues, priority queues, stacks, and counters, under various important scenarios. Our results demonstrate the importance of selecting test data such that distinct values are inserted into the data structure (as appropriate). In such cases we present an O(n) time algorithm for testing linearizable queues, an O(n log n) time algorithm for testing linearizable priority queues, and an O(np²) time algorithm for testing sequentially consistent queues, where n is the number of data structure operations and p is the ...
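The role distinct values play can be seen in a simplified sequential version of the queue test: when every inserted value is unique, each dequeue has exactly one candidate matching enqueue, so correctness reduces to an O(n) FIFO check over the observed history. The paper's O(n) algorithm handles full linearizability with concurrent operation intervals, which this sketch omits:

```python
from collections import deque

def check_fifo(history):
    """history: a list of ('enq', v) / ('deq', v) events in one observed
    sequential order, with all enqueued values distinct. Correctness then
    means every dequeue returns the oldest value enqueued and not yet
    dequeued. Runs in O(n) time for n operations."""
    pending = deque()
    for op, v in history:
        if op == 'enq':
            pending.append(v)
        else:  # 'deq'
            if not pending or pending.popleft() != v:
                return False
    return True
```

With duplicate values a dequeue could match several enqueues, and the tester would have to search over matchings, which is where the hardness results in the paper come from.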
Certifying Algorithms
Abstract
A certifying algorithm is an algorithm that produces, with each output, a certificate or witness (easy-to-verify proof) that the particular output has not been compromised by a bug. A user of a certifying algorithm inputs x, receives the output y and the certificate w, and then checks, either manually or by use of a program, that w proves that y is a correct output for input x. In this way, he/she can be sure of the correctness of the output without having to trust the algorithm. We put forward the thesis that certifying algorithms are much superior to non-certifying algorithms, and that for complex algorithmic tasks, only certifying algorithms are satisfactory. Acceptance of this thesis would lead to a change of how algorithms are taught and how algorithms are researched. The widespread use of certifying algorithms would greatly enhance the reliability of algorithmic software. We survey the state of the art in certifying algorithms and add to it. In particular, we start a ...
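The input/output/witness pattern (x, y, w) the abstract describes can be made concrete with a standard example of our own choosing, not taken from the survey: a certifying bipartiteness test returns a 2-coloring as its witness, and the user checks the witness against the graph without trusting the algorithm that produced it:

```python
def check_bipartite_witness(edges, coloring):
    """Check witness w (a 2-coloring) for the claim 'this graph is
    bipartite' on input x (the edge list). The check is trivial: every
    vertex gets color 0 or 1, and every edge crosses the two classes."""
    return all(
        coloring.get(u) in (0, 1)
        and coloring.get(v) in (0, 1)
        and coloring.get(u) != coloring.get(v)
        for u, v in edges
    )
```

The checker is a one-liner whose correctness is self-evident, whereas the bipartiteness algorithm itself (e.g. a BFS 2-coloring) is the part that might harbor a bug; that asymmetry is exactly what makes the certificate valuable.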