Improved Probabilistic Verification by Hash Compaction
In Advanced Research Working Conference on Correct Hardware Design and Verification Methods, 1995
"... . We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hashcompact" method of Wolper and Leroy. The hashcompact method maintains a hash table in which compressed values for states instead of full state descriptors are stored. This metho ..."
Cited by 35 (7 self)
Abstract
We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hash-compact" method of Wolper and Leroy. The hash-compact method maintains a hash table in which compressed values for states, instead of full state descriptors, are stored. This method saves space but allows a nonzero probability of omitting states during verification, which may cause verification to miss design errors (i.e. verification may produce "false positives"). Our method improves on Wolper and Leroy's by calculating the hash and compressed values independently, and by using a specific hashing scheme that requires a low number of probes in the hash table. The result is a large reduction in the probability of omitting a state. Hence, we can achieve a given upper bound on the probability of omitting a state using fewer bits per compressed state. For example, we can reduce the number of bytes stored for each state from the eight recommended by Wolper and Leroy to o...
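The core of hash compaction can be sketched in a few lines of Python. In this illustration, salted SHA-256 stands in for the paper's independent universal hash functions, and plain linear probing stands in for the paper's specific low-probe scheme; all names are illustrative assumptions:

```python
import hashlib

def _digest(state: bytes, salt: bytes) -> int:
    # Illustrative hash: salted SHA-256 approximates the independent
    # universal hash functions assumed by the paper's analysis.
    return int.from_bytes(hashlib.sha256(salt + state).digest()[:8], "big")

class HashCompactTable:
    """Open-addressing table storing b-bit compressed values, not states."""

    def __init__(self, slots: int, bits: int = 40):
        self.slots = slots
        self.mask = (1 << bits) - 1
        self.table = [None] * slots

    def insert(self, state: bytes) -> bool:
        """Return True if the state appears new; False if (probably) seen."""
        probe = _digest(state, b"probe") % self.slots   # slot position
        value = _digest(state, b"value") & self.mask    # independent compression
        for i in range(self.slots):
            j = (probe + i) % self.slots                # linear probing (sketch)
            if self.table[j] is None:
                self.table[j] = value
                return True
            if self.table[j] == value:
                return False  # assumed already visited (may be a collision)
        raise RuntimeError("table full")
```

The point of computing `probe` and `value` independently, as in the paper, is that a state is lost only if another state agrees with it on both, which is far less likely than agreeing on a single combined hash.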
Antipersistence: History independent data structures
In STOC '01: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, 2001
"... Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its "legitimate" interface. Word pro ..."
Cited by 29 (3 self)
Abstract
Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its "legitimate" interface. Word processors that quietly maintain old versions of a document are merely the most egregious example of a general problem. We deal with data structures whose current memory representation does not reveal their history. We focus on dictionaries, where this means revealing nothing about the order of insertions or deletions. Our first algorithm is a hash table based on open addressing, allowing O(1) insertion and search. We also present a history independent dynamic perfect hash table that uses space linear in the number of elements inserted and has expected amortized insertion and deletion time O(1). To solve the dynamic perfect hashing problem we devise a general scheme for history independent memory allocation. For fixed-size records this is quite efficient, with insertion and deletion both linear in the size of the record. Our variable-size record scheme is efficient enough for dynamic perfect hashing but not for general use. The main open problem we leave is whether it is possible to implement a variable-size record scheme with low overhead.
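The key property, that the memory layout is a function of the current set alone, can be illustrated with ordered linear probing, one standard way (not necessarily this paper's exact construction) to make an open-addressing table history independent: within each cluster, keys sit in a canonical priority order, so every insertion order produces the same array. A simplified sketch, ignoring deletion and table wraparound:

```python
class CanonicalTable:
    """Ordered linear probing: within a cluster, keys are kept in priority
    order, so the final layout depends only on the set, not on history.
    Sketch only: no deletion, no wraparound handling."""

    def __init__(self, slots: int):
        self.table = [None] * slots

    def _priority(self, key: int):
        # Home slot first, then the key itself as a total-order tie-break.
        return (key % len(self.table), key)

    def insert(self, key: int):
        i = key % len(self.table)
        while key is not None and i < len(self.table):
            if self.table[i] is None:
                self.table[i], key = key, None          # settle here
            elif self.table[i] == key:
                key = None                              # already present
            elif self._priority(self.table[i]) > self._priority(key):
                # Displace the lower-priority resident and keep inserting it.
                self.table[i], key = key, self.table[i]
                i += 1
            else:
                i += 1
        if key is not None:
            raise RuntimeError("ran off table end (sketch ignores wraparound)")
```

Inserting the same keys in any order leaves the array bit-for-bit identical, which is exactly the property that prevents an observer of memory from reconstructing the operation history.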
A New Scheme for Memory-Efficient Probabilistic Verification
In IFIP TC6/WG6.1 Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, and Protocol Specification, Testing, and Verification, 1996
"... In verification by explicit state enumeration, for each reachable state of the protocol being verified the full state descriptor is stored in a state table. Two probabilistic methods  bitstate hashing and hash compaction  have been proposed in the literature that store much fewer bits for each s ..."
Cited by 21 (6 self)
Abstract
In verification by explicit state enumeration, for each reachable state of the protocol being verified the full state descriptor is stored in a state table. Two probabilistic methods, bitstate hashing and hash compaction, have been proposed in the literature that store far fewer bits for each state, but they come at the price of some probability that not all reachable states will be explored during the search, and that the verifier may thus produce false positives. Holzmann introduced bitstate hashing and derived an approximation formula for the average probability that a particular state is not omitted during the search, but this formula does not give a bound on the probability of false positives. In contrast, the analysis for hash compaction, introduced by Wolper and Leroy and improved upon by Stern and Dill, yielded a bound on the probability that not even one state is omitted during the search, thus providing a bound on the probability of false positives. In this paper, we propose a...
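Bitstate hashing, the first of the two methods above, can be sketched as a Bloom-filter-style bit array: k hash functions each set one bit per state, and a state is assumed visited if all its bits are already set. This is a minimal illustration; SPIN's actual hash functions and defaults differ:

```python
import hashlib

class BitstateTable:
    """Holzmann-style bitstate hashing over a bit array of `bits` bits."""

    def __init__(self, bits: int, k: int = 2):
        self.bits = bits
        self.k = k
        self.array = bytearray(bits // 8)

    def _positions(self, state: bytes):
        # k illustrative hash positions derived from salted SHA-256.
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + state).digest()
            yield int.from_bytes(d[:8], "big") % self.bits

    def visit(self, state: bytes) -> bool:
        """Return True if the state is new; may wrongly return False
        when all k bits happen to be set by other states (an omission)."""
        new = False
        for p in self._positions(state):
            byte, bit = divmod(p, 8)
            if not self.array[byte] & (1 << bit):
                new = True
                self.array[byte] |= 1 << bit
        return new
```

Because bits set by different states can overlap, `visit` can misreport a new state as old, which is the source of the omission probability the entries above analyze.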
Strongly history-independent hashing with applications
In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, 2007
"... We present a strongly history independent (SHI) hash table that supports search in O(1) worstcase time, and insert and delete in O(1) expected time using O(n) data space. This matches the bounds for dynamic perfect hashing, and improves on the best previous results by Naor and Teague on history ind ..."
Cited by 12 (4 self)
Abstract
We present a strongly history independent (SHI) hash table that supports search in O(1) worst-case time, and insert and delete in O(1) expected time using O(n) data space. This matches the bounds for dynamic perfect hashing, and improves on the best previous results by Naor and Teague on history independent hashing, which were either weakly history independent, or only supported insertion and search (no delete), each in O(1) expected time. The results can be used to construct many other SHI data structures. We show straightforward constructions for SHI ordered dictionaries: for n keys from {1, ..., n^k}, searches take O(log log n) worst-case time and updates (insertions and deletions) O(log log n) expected time, and for keys in the comparison model searches take O(log n) worst-case time and updates O(log n) expected time. We also describe a SHI data structure for the order-maintenance problem. It supports comparisons in O(1) worst-case time, and updates in O(1) expected time. All structures use O(n) data space.
Bonsai: A Compact Representation of Trees
, 1993
"... This paper shows how trees can be stored in a very compact form, called `Bonsai', using hash tables. A method is described that is suitable for large trees that grow monotonically within a predefined maximum size limit. Using it, pointers in any tree can be represented within 6 +log 2 n bits per nod ..."
Cited by 10 (0 self)
Abstract
This paper shows how trees can be stored in a very compact form, called `Bonsai', using hash tables. A method is described that is suitable for large trees that grow monotonically within a predefined maximum size limit. Using it, pointers in any tree can be represented within 6 + log₂ n bits per node, where n is the maximum number of children a node can have. We first describe a general way of storing trees in hash tables, and then introduce the idea of compact hashing which underlies the Bonsai structure. These two techniques are combined to give a compact representation of trees, and a practical methodology is set out to permit the design of these structures. The new representation is compared with two conventional tree implementations in terms of the storage required per node. Examples of programs that must store large trees within a strict maximum size include those that operate on trie structures derived from natural language text. We describe how the Bonsai technique has been applied to the trees that arise in text compression and adaptive prediction, and include a discussion of the design parameters that work well in practice.
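The first idea above, storing trees in hash tables, can be sketched as follows: a child is located by hashing the pair (parent node id, edge symbol), so no child or sibling pointers are stored at all. This sketch uses plain open addressing and Python's built-in `hash`; the compact quotienting that gives Bonsai its small per-node bit counts is omitted, and all names are illustrative:

```python
class HashedTrie:
    """Trie stored in a hash table: a child's identity is the slot that the
    pair (parent slot, edge symbol) hashes to. Sketch only."""

    def __init__(self, slots: int):
        self.table = [None] * slots
        self.table[0] = "ROOT"   # sentinel: reserve slot 0 as the root id
        self.root = 0

    def _slot(self, parent: int, symbol: str) -> int:
        # Linear probing until we find this edge or an empty slot.
        h = hash((parent, symbol)) % len(self.table)
        for i in range(len(self.table)):
            j = (h + i) % len(self.table)
            if self.table[j] is None or self.table[j] == (parent, symbol):
                return j
        raise RuntimeError("table full")

    def add_child(self, parent: int, symbol: str) -> int:
        j = self._slot(parent, symbol)
        self.table[j] = (parent, symbol)
        return j  # the slot number itself serves as the child's node id

    def child(self, parent: int, symbol: str):
        j = self._slot(parent, symbol)
        return j if self.table[j] == (parent, symbol) else None

    def insert_word(self, word: str) -> int:
        node = self.root
        for ch in word:
            node = self.add_child(node, ch)
        return node
```

Looking up a child is a single hash probe sequence, and each stored entry records only its parent and edge symbol, which is what makes the per-node cost independent of fan-out.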
Algorithmic Techniques in Verification by Explicit State Enumeration
, 1997
"... Modern digital systems often employ sophisticated protocols. Unfortunately, designing correct protocols is a subtle art. Even when using great care, a designer typically cannot foresee all possible interactions among the components of the system; thus, bugs like subtle race conditions or deadlocks a ..."
Cited by 8 (4 self)
Abstract
Modern digital systems often employ sophisticated protocols. Unfortunately, designing correct protocols is a subtle art. Even when using great care, a designer typically cannot foresee all possible interactions among the components of the system; thus, bugs like subtle race conditions or deadlocks are easily overlooked. One way a computer can support the designer is by simulating random executions of the system. There is, however, a high probability of missing executions containing errors, especially in complex systems, when using this simulation approach. In contrast, an automatic verifier tries to examine all states reachable from a given set of start states. The biggest obstacle in this exhaustive approach is that often there is a very large number of reachable states. This thesis describes three techniques to increase the size of the reachable state spaces that can be handled in automatic verifiers. The techniques work in verifiers that are based on explicitly storing each reachable ...
Reliable Probabilistic Verification Using Hash Compaction
"... This paper describes and analyzes a probabilistic technique to reduce the memory requirement of the table of reached states maintained in verification by explicit state enumeration. The memory savings of the new scheme come at the price of a certain probability that the search becomes incomplete. Ho ..."
Cited by 4 (1 self)
Abstract
This paper describes and analyzes a probabilistic technique to reduce the memory requirement of the table of reached states maintained in verification by explicit state enumeration. The memory savings of the new scheme come at the price of a certain probability that the search becomes incomplete. However, this probability can be made negligibly small by using typically 40 bits of memory per state. From this point of view, this new scheme improves substantially on Holzmann's bitstate hashing, which has a high probability of producing an incomplete search even when using close to 1000 bits per state. The proposed scheme has been implemented in the contexts of the SPIN and Murϕ verification systems. Experiments on sample protocols nicely match the predictions of the analysis. For large protocols, memory savings of two orders of magnitude are obtained. We also show how to efficiently combine the new scheme with state space caching, and we analyze bitstate hashing in order to compare it wit...
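The figure of roughly 40 bits per state can be sanity-checked with a back-of-envelope estimate (not the paper's exact analysis): each probe during an insertion compares one b-bit compressed value, which falsely matches an unrelated state with probability 2^-b, so with n states and an assumed average of t probes per insertion the expected number of wrongly skipped states is about n·t / 2^b:

```python
def expected_false_matches(n: int, probes_per_insert: float, bits: int) -> float:
    # Rough expected count of states wrongly treated as already visited.
    # Assumptions: independent uniform b-bit values and an average of
    # `probes_per_insert` value comparisons per insertion.
    return n * probes_per_insert / 2**bits

# One million reachable states, an assumed 5 probes per insertion:
p40 = expected_false_matches(10**6, 5.0, 40)   # tiny: well under one in 10^5
p16 = expected_false_matches(10**6, 5.0, 16)   # far above 1: 16 bits are hopeless
```

Under these assumptions the expected omission count at 40 bits is a few in a million, consistent with the "negligibly small" claim above, while a much shorter compressed value makes omissions all but certain.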
On Worst-Case Robin Hood Hashing
SIAM J. Computing, 2004
"... We consider open addressing hashing and implement it by using the Robin Hood strategy; that is, in case of collision, the element that has traveled the farthest can stay in the slot. We hash ∼ αn elements into a table of size n where each probe is independent and uniformly distributed over the tab ..."
Abstract
We consider open addressing hashing and implement it by using the Robin Hood strategy; that is, in case of collision, the element that has traveled the farthest can stay in the slot. We hash ∼αn elements into a table of size n, where each probe is independent and uniformly distributed over the table, and α < 1 is a constant. Let Mn be the maximum search time for any of the elements in the table. We show that with probability tending to one, Mn ∈ [log₂ log n + σ, log₂ log n + τ] for some constants σ, τ depending upon α only. This is an exponential improvement over the maximum search time in case of the standard FCFS (first come first served) collision strategy, and virtually matches the performance of multiple-choice hash methods.
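The Robin Hood rule itself is simple: on a collision, the element that has probed farther from its home slot wins the contested slot, and the evicted "richer" element continues probing. A minimal Python sketch, with linear probing standing in for the paper's independent uniform probes:

```python
def robin_hood_insert(table, key):
    """Insert `key` into the open-addressing `table` (a list with None for
    empty slots) using the Robin Hood eviction rule. Sketch: linear probing
    replaces the random probing analyzed in the paper; no resizing."""
    n = len(table)
    i, dist = hash(key) % n, 0
    while table[i] is not None:
        resident = table[i]
        # How far the current resident is from its own home slot.
        resident_dist = (i - hash(resident) % n) % n
        if resident_dist < dist:
            # Resident is "richer" (closer to home): evict it, settle here,
            # and continue probing on the evicted element's behalf.
            table[i], key = key, resident
            dist = resident_dist
        i, dist = (i + 1) % n, dist + 1
    table[i] = key
```

Equalizing probe distances this way is what compresses the worst-case search time to the doubly logarithmic window described above, versus the first-come-first-served rule.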
Modular Enforcement of Information Flow Policies in Data Structures
"... Abstract—Standard implementations of common data structures such as hash tables can leak information, e.g. the operation history, to attackers with later access to a machine’s memory. This leakage is particularly damaging whenever the history of operations performed on a data structure must remain s ..."
Abstract
Standard implementations of common data structures such as hash tables can leak information, e.g. the operation history, to attackers with later access to a machine's memory. This leakage is particularly damaging whenever the history of operations performed on a data structure must remain secret, such as in voting machines. We show how unique representation, the requirement that a data structure have canonical machine representations, can be used to perform modular verification of information flow policies in programs that compose data structures with their clients. We present a compositional verification system based on Relational Hoare Type Theory (RHTT) that uses unique representation to enforce end-to-end security guarantees such as noninterference for such programs. We validate our system and technique with examples drawn from arrays, multisets, hash tables, and a medical database application. The system, theorems, and examples have all been verified in Coq.
A Hardware Algorithm For High Speed Morpheme Extraction And Its Implementation
"... This paper describes a new hardware algorithm for morpheme extraction and its implementation on a specific miae (MEX, the t step towed eving natur 1 ping  etom. It o shows the mie's peffo, 1001,000 tim [ter th pemon computer. This mine c extrt morphem [rom 10,000 charer Jspese tt by hing 80,000 m ..."
Abstract
This paper describes a new hardware algorithm for morpheme extraction and its implementation on a specific machine (MEX), a first step toward achieving natural language processing hardware. It also reports the machine's performance, 100 to 1,000 times faster than a personal computer: the machine can extract morphemes from a 10,000-character Japanese text by searching an 80,000-morpheme dictionary in one second. Multiple text streams are compared with dictionary candidates against one text stream, and the algorithm's running time on the machine is proportional to the number of candidates, while conventional sequential algorithms require combinatorial time.