Results 1 - 6 of 6
Hash Tables With Finite Buckets Are Less Resistant To Deletions
"... Abstract — We show that when memory is bounded, i.e. buckets are finite, dynamic hash tables that allow insertions and deletions behave significantly worse than their static counterparts that only allow insertions. This behavior differs from previous results in which, when memory is unbounded, the t ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
Abstract — We show that when memory is bounded, i.e., buckets are finite, dynamic hash tables that allow insertions and deletions behave significantly worse than their static counterparts that only allow insertions. This behavior differs from previous results in which, when memory is unbounded, the two models behave similarly. We show the decrease in performance in dynamic hash tables using several hash-table schemes. We also provide tight upper and lower bounds on the achievable overflow fractions in these schemes. Finally, we propose an architecture with content-addressable memory (CAM), which mitigates this decrease in performance.
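The architecture the abstract describes — finite buckets whose overflow spills into a small CAM — can be illustrated with a minimal sketch. The class and parameter names below are hypothetical, not from the paper; the CAM is modeled as a plain list with bounded capacity:

```python
class BoundedBucketHashTable:
    """Illustrative sketch: a hash table with finite buckets where
    overflowing elements spill into a small CAM-like stash.
    Names and parameters are assumptions, not from the paper."""

    def __init__(self, num_buckets, bucket_size, cam_size):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.cam_size = cam_size
        self.buckets = [[] for _ in range(num_buckets)]
        self.cam = []          # overflow area (CAM)
        self.dropped = 0       # elements lost when the CAM is also full

    def _bucket(self, key):
        return hash(key) % self.num_buckets

    def insert(self, key):
        b = self.buckets[self._bucket(key)]
        if len(b) < self.bucket_size:
            b.append(key)
        elif len(self.cam) < self.cam_size:
            self.cam.append(key)        # bucket full: spill to CAM
        else:
            self.dropped += 1           # overflow: element cannot be stored

    def delete(self, key):
        b = self.buckets[self._bucket(key)]
        if key in b:
            b.remove(key)
        elif key in self.cam:
            self.cam.remove(key)
```

The overflow fraction the paper bounds corresponds here to `dropped` (or the CAM occupancy) as a function of bucket size and load, under a mix of insertions and deletions.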
Jittering Broadcast Transmissions in MANETs: Quantification and Implementation Strategies
"... Abstract—Delaying the transmission time of messages for a short random period, a.k.a. jittering, is a known approach for preventing concurrent transmissions, and consequently collisions, in multiple access networks (e.g., RFC 5148). It is particularly useful for increasing the reliability of unackno ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Abstract—Delaying the transmission time of messages for a short random period, a.k.a. jittering, is a known approach for preventing concurrent transmissions, and consequently collisions, in multiple access networks (e.g., RFC 5148). It is particularly useful for increasing the reliability of unacknowledged broadcast messages in the 802.11 protocol. Yet, transmission jittering comes with a price: it increases the average end-to-end latency and could potentially decrease the throughput of the network. This paper investigates the relation between the maximal jitter duration and the probability of successful transmissions. Specifically, we propose and compare three implementation strategies: inside the MAC protocol, in the IP layer, and a cross-layer implementation. The comparison establishes that our practical cross-layer implementation minimizes the suggested protocol changes to the 802.11 MAC protocol while incurring the smallest latency, thus being the most attractive one. In fact, under the IEEE 802.11e extension, our implementation does not require any MAC change. Our results show that by carefully designing the system, a very small jitter value of hundreds of microseconds suffices to obtain a high transmission success probability, a network utilization factor above 70%, and subsequently to ensure high flooding reliability. This is in contrast with prior works, which suggested using jitter values two orders of magnitude larger (i.e., tens of milliseconds). All our results are backed up by extensive simulations.
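The relation between the jitter window and the transmission success probability can be explored with a toy Monte Carlo model (this is an assumption-laden simplification, not the paper's analytical model): each node delays its broadcast by a uniform random jitter, and a transmission is counted as successful if no other transmission overlaps it in time.

```python
import random

def broadcast_success_prob(n_nodes, max_jitter_us, tx_us, trials=2000):
    """Toy Monte Carlo sketch (not the paper's model): each node picks a
    uniform jitter in [0, max_jitter_us] microseconds; a transmission of
    duration tx_us succeeds if no other transmission overlaps it."""
    successes = 0
    for _ in range(trials):
        starts = sorted(random.uniform(0, max_jitter_us) for _ in range(n_nodes))
        for i, s in enumerate(starts):
            # success requires a gap of at least tx_us to both neighbors
            prev_ok = i == 0 or s - starts[i - 1] >= tx_us
            next_ok = i == n_nodes - 1 or starts[i + 1] - s >= tx_us
            if prev_ok and next_ok:
                successes += 1
    return successes / (trials * n_nodes)
```

Even this crude model shows the qualitative effect the paper quantifies: widening the jitter window from a few microseconds to hundreds of microseconds sharply raises the success probability for a handful of contending nodes.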
Optimal Dynamic Hash Tables
"... Abstract—Hashbased data structures, which use randomization in order to represent efficiently a list of elements, are one of the mostused data structures in networking applications, where both time and fast memory are scarce resources. This paper investigates the realistic scenario in which elemen ..."
Abstract
 Add to MetaCart
Abstract—Hash-based data structures, which use randomization in order to represent a list of elements efficiently, are among the most-used data structures in networking applications, where both time and fast memory are scarce resources. This paper investigates the realistic scenario in which elements are not only added to the data structure but also deleted. We show that when the memory is bounded, dynamic hash tables with deletions behave significantly worse than their static counterparts. This is in contrast with previous results showing that when the memory is not bounded, the two models behave practically the same. We provide tight upper and lower bounds on the achievable overflow fraction of the scheme under various models and system parameters. Then, we propose two architectures using CAMs and TCAMs that allow us to mitigate this decrease in performance. Our analytical results are confirmed using simulations with real-life traces and real hash functions.
Energy-Constrained Balancing
"... Abstract—This paper defines and analyzes a fundamental energyconstrained balancing problem, in which elements need to be balanced across resources in order to minimize the increasing convex cost function associated with the load at each resource. However, the balancing operation needs to satisfy av ..."
Abstract
 Add to MetaCart
Abstract—This paper defines and analyzes a fundamental energy-constrained balancing problem, in which elements need to be balanced across resources in order to minimize the increasing convex cost function associated with the load at each resource. However, the balancing operation needs to satisfy average and instantaneous constraints on the energy associated with checking the current load of the many resources. In the paper, we first show tight lower and upper bounds on the solution of the problem, depending on the specific system parameters. Then, we explain how these solutions can be applied to construct hash tables with optimal variance of the bin size, as well as energy-efficient Bloom filters.
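A simple way to picture the energy/balance trade-off is the classic "power of d choices" heuristic, where the number of bins an element is allowed to probe plays the role of the energy budget. This sketch is only an illustration of that trade-off, not the paper's algorithm:

```python
import random

def balanced_allocation(n_balls, n_bins, probes, rng):
    """Illustrative sketch: each element checks `probes` random bins
    (its 'energy budget') and joins the least loaded one. More probes
    cost more energy but yield a more balanced load."""
    loads = [0] * n_bins
    for _ in range(n_balls):
        candidates = [rng.randrange(n_bins) for _ in range(probes)]
        best = min(candidates, key=lambda b: loads[b])
        loads[best] += 1
    return loads
```

With a single probe the maximum load behaves like plain random placement; already with two probes per element it drops markedly, which is the kind of energy-versus-variance trade-off the paper's bounds characterize.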
Maximum Bipartite Matching Size And Application to Cuckoo Hashing ∗
"... Cuckoo hashing with a stash is a robust highperformance hashing scheme that can be used in many reallife applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stas ..."
Abstract
 Add to MetaCart
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the trade-off between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph with a constant left-side vertex degree d = 2. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the number of vertices goes to infinity. We also extend our analysis to cases where only part of the left-side vertices have a degree of 2, as well as to the case where the set of right-side vertices is partitioned into two subsets, and each