Results 11–20 of 20
A unified approach to linear probing hashing with buckets
 In preparation
Cited by 1 (1 self)
Abstract. We give a unified analysis of linear probing hashing with a general bucket size. We use both a combinatorial approach, giving exact formulas for generating functions, and a probabilistic approach, giving simple derivations of asymptotic results. The two approaches complement each other nicely, and give good insight into the relation between linear probing and random walks. A key methodological contribution, at the core of Analytic Combinatorics, is the use of the symbolic method (based on q-calculus) to directly derive the generating functions to analyze.
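To make the scheme under analysis concrete, here is a minimal sketch (our own illustration, not the paper's code) of linear probing hashing with bucket size b: a key probes buckets cyclically from its hash position and lands in the first bucket with a free slot. All names (`BucketLinearProbingTable`, `insert`, `search`) are ours.

```python
# Hypothetical sketch of linear probing hashing with buckets of size b.
class BucketLinearProbingTable:
    def __init__(self, num_buckets, bucket_size):
        self.m = num_buckets
        self.b = bucket_size
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key):
        i = hash(key) % self.m
        for step in range(self.m):          # probe buckets cyclically
            slot = (i + step) % self.m
            if len(self.buckets[slot]) < self.b:
                self.buckets[slot].append(key)
                return step + 1             # number of buckets probed
        raise RuntimeError("table full")

    def search(self, key):
        i = hash(key) % self.m
        for step in range(self.m):
            slot = (i + step) % self.m
            if key in self.buckets[slot]:
                return True
            if len(self.buckets[slot]) < self.b:
                return False                # a non-full bucket ends the probe run
        return False
```

The search cost counted in the analysis corresponds to the number of buckets probed, which is what `insert` returns here.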
Modular Enforcement of Information Flow Policies in Data Structures
Abstract. Standard implementations of common data structures such as hash tables can leak information, e.g. the operation history, to attackers with later access to a machine's memory. This leakage is particularly damaging whenever the history of operations performed on a data structure must remain secret, such as in voting machines. We show how unique representation (the requirement that a data structure have a canonical machine representation) can be used to perform modular verification of information flow policies in programs that compose data structures with their clients. We present a compositional verification system based on Relational Hoare Type Theory (RHTT) that uses unique representation to enforce end-to-end security guarantees, such as noninterference, for such programs. We validate our system and technique with examples drawn from arrays, multisets, hash tables, and a medical database application. The system, theorems, and examples have all been verified in Coq.
A Hardware Algorithm For High Speed Morpheme Extraction And Its Implementation
This paper describes a new hardware algorithm for morpheme extraction and its implementation on a specific machine (MEX), a first step toward achieving natural language processing systems. It also reports the machine's performance: 100–1,000 times faster than a personal computer, extracting morphemes from 10,000 characters of Japanese text against an 80,000-morpheme dictionary in about one second. Multiple text streams are compared concurrently against dictionary candidates, rather than one text stream at a time. The algorithm's execution time on the machine grows with the number of candidates, while conventional sequential algorithms require combinatorial time.
Object-Oriented Languages Considered Harmful
Many theorists would agree that, had it not been for RAID, the refinement of the Ethernet might never have occurred. After years of confirmed research into wide-area networks, we validate the evaluation of extreme programming. We demonstrate that although RAID can be made metamorphic, symbiotic, and smart, expert systems can be made robust, random, and collaborative [18].
FILE ORGANIZATIONS WITH SHARED OVERFLOW BLOCKS FOR VARIABLE LENGTH OBJECTS
, 1991
Abstract. Traditional file organizations for records may also be appropriate for the storage and retrieval of objects. Since objects frequently involve diverse data types (such as text, compressed images, graphics, etc.) as well as composite structures, they may have a highly variable length. In this paper, we assume that in the case of composite objects their components are clustered together, and that object file organizations have overflows. The blocks of the main file are grouped so that they share a common number of overflow blocks. For this class of file organizations we present and analyze the performance of three different overflow searching algorithms. We show that the third algorithm gives very significant performance advantages under certain circumstances.
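The grouping idea can be sketched as follows. This is our own simplification, not one of the paper's three algorithms: each group of main-file blocks spills records into one shared overflow pool, and a search scans the home block first, then that pool. All names here are hypothetical.

```python
# Minimal sketch of a file organization in which groups of main-file
# blocks share a common pool of overflow blocks.
class SharedOverflowFile:
    def __init__(self, num_blocks, block_capacity, group_size):
        self.cap = block_capacity
        self.group_size = group_size
        self.main = [[] for _ in range(num_blocks)]
        n_groups = (num_blocks + group_size - 1) // group_size
        self.overflow = [[] for _ in range(n_groups)]  # one shared pool per group

    def _group(self, block):
        return block // self.group_size

    def insert(self, block, record):
        if len(self.main[block]) < self.cap:
            self.main[block].append(record)
        else:  # spill into the shared overflow pool of the block's group
            self.overflow[self._group(block)].append((block, record))

    def search(self, block, record):
        if record in self.main[block]:
            return True                # found in the home block
        # otherwise scan the group's shared overflow pool
        return (block, record) in self.overflow[self._group(block)]
```

Sharing one pool per group trades a longer overflow scan for better space utilization than per-block overflow chains, which is the trade-off the three searching algorithms in the paper address.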
Strongly History Independent Hashing with Deletion
, 2006
We present a strongly history independent (SHI) hash table that is fast, space efficient, and supports deletions. A hash table that supports deletions is SHI if it has a canonical memory representation up to randomness. That is, the string of random bits and the current hash table contents (the set of (key, object) pairs in the hash table) uniquely determine its layout in memory, independently of the sequence of operations from initialization to the current state. Thus, the memory representation of a SHI hash table reveals exactly the information available through the hash table interface, and nothing more. Our construction also reveals a subtle connection between history independent hashing and the Gale-Shapley stable marriage algorithm [7], which may be of independent interest. Additionally, we give a general technique for converting data structures with canonical representations in a pure pointer machine model into RAM data structures of comparable performance that are SHI with high probability. Thus we develop the last ingredient necessary to efficiently implement a host of SHI data structures on a RAM. This research is supported by NSF ITR grants CCR-0122581 (The Aladdin Center) and IIS-0121678.
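A canonical memory representation can be illustrated with priority-based linear probing, in the spirit of the stable-matching connection the abstract mentions (this is our own illustrative rule, not the paper's exact algorithm): each slot keeps the contending key of higher priority, the loser continues probing, and the final layout depends only on the key set, not on insertion order. The function name and priority rule are our assumptions.

```python
# Hedged sketch: insertion into a linear probing table where slots are
# awarded by priority (smaller key wins here). The resulting layout is
# canonical: it is determined by the key set alone.
def canonical_insert(table, key, priority=lambda k: k):
    m = len(table)
    i = hash(key) % m
    for _ in range(m):
        if table[i] is None:
            table[i] = key
            return
        if priority(key) < priority(table[i]):   # key "wins" the slot
            table[i], key = key, table[i]        # evict loser, keep probing
        i = (i + 1) % m
    raise RuntimeError("table full")
```

Inserting the same set of keys in two different orders produces an identical array, which is exactly the history independence property the abstract defines (up to the random bits fixing the hash function).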
Distributional analysis of Robin Hood linear probing hashing with buckets
This paper presents the first distributional analysis of a linear probing hashing scheme with buckets of size b. The exact distribution of the cost of successful searches for a bα-full table is obtained, and moments and asymptotic results are derived. With the use of the Poisson transform, distributional results are also obtained for tables of size m with n elements. A key element in the analysis is the use of a new family of numbers that satisfies a recurrence resembling that of the Bernoulli numbers. These numbers may prove helpful in studying recurrences involving truncated generating functions, as well as in other problems related to buckets.
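The Poisson transform referred to here is the standard device for moving between the fixed-size model (a table with exactly n elements) and the Poissonized model; for a sequence of quantities $P_n$ it takes the form:

```latex
% Standard definition of the Poisson transform of a sequence (P_n):
\[
  \widetilde{P}(\lambda) \;=\; e^{-\lambda} \sum_{n \ge 0} P_n \, \frac{\lambda^n}{n!}
\]
% Results in the Poisson model are thus averages of fixed-n results
% weighted by Poisson(\lambda) probabilities; fixed-n results are
% recovered by coefficient extraction (depoissonization).
```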
Efficient data structures for sparse network representation
Modern-day computers are characterized by a striking contrast between the processing power of the CPU and the latency of main memory accesses. If the data processed is both large compared to processor caches and sparse or high-dimensional in nature, as is commonly the case in complex network research, the main memory latency can become a performance bottleneck. In this article, we present a cache-efficient data structure, a variant of a linear probing hash table, for representing edge sets of such networks. Our performance benchmarks show that it is indeed quite superior to its commonly used counterparts in this application. In addition, its memory footprint only exceeds the absolute minimum by a small constant factor. The practical usability of our approach has been well demonstrated in the study of very large real-world networks.
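The cache-friendliness argument can be sketched as follows (our own construction, not the paper's code): storing edges in one flat array means a linear probe sequence walks adjacent memory instead of chasing pointers. The class and sentinel names are ours; node ids are assumed nonnegative, and capacity must exceed the number of edges.

```python
# Rough sketch of a flat, cache-friendly linear probing table for an
# edge set: slot i occupies cells 2i and 2i+1 of one contiguous array.
import array

EMPTY = -1  # sentinel; assumes nonnegative node ids

class EdgeSet:
    def __init__(self, capacity):
        self.m = capacity
        self.slots = array.array('q', [EMPTY] * (2 * capacity))

    def _index(self, u, v):
        return hash((u, v)) % self.m

    def add(self, u, v):
        i = self._index(u, v)
        while self.slots[2 * i] != EMPTY:
            if self.slots[2 * i] == u and self.slots[2 * i + 1] == v:
                return                     # edge already present
            i = (i + 1) % self.m           # linear probe: adjacent slot
        self.slots[2 * i] = u
        self.slots[2 * i + 1] = v

    def __contains__(self, edge):
        u, v = edge
        i = self._index(u, v)
        while self.slots[2 * i] != EMPTY:
            if self.slots[2 * i] == u and self.slots[2 * i + 1] == v:
                return True
            i = (i + 1) % self.m
        return False
```

Compared with a dict of adjacency sets, this layout stores only the raw endpoint pairs, which is why the memory footprint can stay within a small constant factor of the minimum.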
Dependent Types for Enforcement of Information Flow Policies in Data Structures
, 2012
Information flow policies specify how sensitive information should be contained in a system, while information erasure policies specify when such information should be removed from the system entirely. An insight of recent work is that erasure can be understood as an information flow concept: to erase is to place bounds on the information flowing from the erased data to the rest of the system. In this paper, we scale the state of the art in specification and enforcement of information flow and erasure policies to programs with procedures, shared local state, and stateful real-world data structures such as arrays, multisets, and hash tables. We formalize our work in Relational Hoare Type Theory (RHTT), an expressive, higher-order imperative language and program logic embedded in the Coq proof assistant. In the process, we come to what is perhaps a surprising conclusion: that data structures with canonical memory representations, i.e., those that are uniquely represented (UR), are essential for the modular verification of information flow policies in languages with procedures and shared local state. As a case study, we develop and formally verify in Coq a novel UR variant of filter hash tables, and show how our UR hash table enables concise formal proofs of erasure.
Strongly History-Independent Hashing with Applications
We present a strongly history independent (SHI) hash table that supports search in O(1) worst-case time, and insert and delete in O(1) expected time, using O(n) data space. This matches the bounds for dynamic perfect hashing, and improves on the best previous results by Naor and Teague on history independent hashing, which were either weakly history independent, or only supported insertion and search (no delete), each in O(1) expected time. The results can be used to construct many other SHI data structures. We show straightforward constructions for SHI ordered dictionaries: for n keys from {1, ..., n^k}, searches take O(log log n) worst-case time and updates (insertions and deletions) O(log log n) expected time, and for keys in the comparison model, searches take O(log n) worst-case time and updates O(log n) expected time. We also describe a SHI data structure for the order-maintenance problem. It supports comparisons in O(1) worst-case time, and updates in O(1) expected time. All structures use O(n) data space.