Access-Efficient Balanced Bloom Filters
Abstract

Cited by 1 (1 self)
Bloom Filters should particularly suit network devices, because of their low theoretical memory-access rates. However, in practice, since memory is often divided into blocks and Bloom Filters hash elements into several arbitrary memory blocks, Bloom Filters actually need high memory-access rates. On the other hand, hashing all Bloom Filter elements into a single memory block to solve this problem also yields high false positive rates. In this paper, we propose to implement load-balancing schemes for the choice of the memory block, along with an optional overflow list, resulting in improved false positive rates while keeping a high memory-access efficiency. To study this problem, we define, analyze and solve a fundamental access-constrained balancing problem, where incoming elements need to be optimally balanced across resources while satisfying average and instantaneous constraints on the number of memory accesses associated with checking the current load of the resources. We then build on this problem to suggest a new access-efficient Bloom Filter scheme, called the Balanced Bloom Filter. Finally, we show that this scheme can reduce the false positive rate by up to two orders of magnitude, with a worst-case cost of up to 3 memory accesses for each element and an overflow list size of 0.5% of the elements.
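The core idea in the abstract — confining each element's hash positions to one memory block, and balancing load across candidate blocks — can be illustrated with a minimal sketch. This is not the paper's exact scheme (it omits the access-constrained balancing analysis and the overflow list); it simply shows a blocked Bloom filter where insertion picks the less-loaded of two candidate blocks, so a query touches at most 2 blocks:

```python
import hashlib

class BalancedBlockedBloomFilter:
    """Illustrative sketch only: each element's k bits live in a single
    block, chosen as the less-loaded of two hashed candidates
    (power-of-two-choices balancing). Queries must probe both candidates."""

    def __init__(self, num_blocks=1024, block_bits=512, k=4):
        self.num_blocks = num_blocks
        self.block_bits = block_bits      # e.g., one cache line = 512 bits
        self.k = k
        self.blocks = [0] * num_blocks    # each block is one int bitmap
        self.load = [0] * num_blocks      # elements inserted per block

    def _hashes(self, key, n, mod):
        # n independent hash values in [0, mod), derived from blake2b
        out = []
        for i in range(n):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            out.append(int.from_bytes(h, "big") % mod)
        return out

    def add(self, item):
        b1, b2 = self._hashes("blk" + str(item), 2, self.num_blocks)
        blk = b1 if self.load[b1] <= self.load[b2] else b2  # less-loaded block
        for pos in self._hashes(item, self.k, self.block_bits):
            self.blocks[blk] |= 1 << pos
        self.load[blk] += 1

    def __contains__(self, item):
        b1, b2 = self._hashes("blk" + str(item), 2, self.num_blocks)
        positions = self._hashes(item, self.k, self.block_bits)
        # the element may sit in either candidate block, so check both
        return any(all(self.blocks[b] >> p & 1 for p in positions)
                   for b in (b1, b2))
```

Because an element is always inserted into one of its two candidate blocks, a lookup that probes both blocks can never produce a false negative; the balancing keeps any single block from filling up and inflating the false positive rate.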
Transactions on Parallel and Distributed Systems
Abstract
Priority queues are essential building blocks for implementing advanced per-flow service disciplines and hierarchical quality-of-service at high-speed network links. Scalable priority queue implementation requires solutions to two fundamental problems. The first is to sort queue elements in real time at ever-increasing line speeds (e.g., at OC-768 rates). The second is to store a huge number of packets (e.g., millions of packets). In this paper, we propose novel solutions by decomposing the problem into two parts: a succinct priority index in SRAM that can efficiently maintain a real-time sorting of priorities, coupled with a DRAM-based implementation of large packet buffers. In particular, we propose three related novel succinct priority index data structures for implementing high-speed priority indexes: a Priority Index (PI), a Counting Priority Index (CPI), and a pipelined Counting Priority Index (pCPI). We show that all three structures can be implemented very compactly in SRAM using only Θ(U) space, where U is the size of the universe required to implement the priority keys (timestamps). We also show that our proposed priority index structures can be implemented very efficiently by leveraging hardware-optimized instructions that are readily available in modern 64-bit processors. The operations on the PI and CPI structures take Θ(log_W U) time, where W is the processor word length (i.e., W = 64). Alternatively, operations on the pCPI structure take amortized constant time with only Θ(log_W U) pipeline stages (e.g., only 4 pipeline stages for U = 16 million). Finally, we show the application of our proposed priority index structures to the scalable management of large packet buffers at line speeds. The pCPI structure can be implemented efficiently in high-performance network processing applications such as advanced per-flow scheduling with quality-of-service guarantees.
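A word-parallel bitmap index of this kind can be sketched in a few lines. The following is a hypothetical two-level simplification (universe U = W², not the paper's actual PI/CPI/pCPI designs): a summary word records which leaf words are non-empty, so finding or updating the minimum priority costs Θ(log_W U) = 2 word operations, each reducible to a hardware find-first-set instruction:

```python
W = 64  # processor word length assumed in the abstract

class PriorityIndex:
    """Hypothetical two-level bitmap priority index over U = W*W
    priorities. summary bit h is set iff leaf word h is non-empty,
    so min() needs only two find-first-set operations."""

    def __init__(self):
        self.summary = 0
        self.leaves = [0] * W

    @staticmethod
    def _ffs(x):
        # index of the lowest set bit (maps to a single hardware
        # instruction such as TZCNT on modern 64-bit processors)
        return (x & -x).bit_length() - 1

    def insert(self, p):
        hi, lo = divmod(p, W)
        self.leaves[hi] |= 1 << lo
        self.summary |= 1 << hi

    def delete(self, p):
        hi, lo = divmod(p, W)
        self.leaves[hi] &= ~(1 << lo)
        if self.leaves[hi] == 0:        # leaf emptied: clear summary bit
            self.summary &= ~(1 << hi)

    def min(self):
        if self.summary == 0:
            return None                 # index is empty
        hi = self._ffs(self.summary)    # first non-empty leaf word
        return hi * W + self._ffs(self.leaves[hi])
```

Extending the hierarchy to more levels gives the Θ(log_W U) cost quoted in the abstract, e.g. 4 levels of 64-bit words cover U = 64⁴ ≈ 16 million timestamps.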