Results 1 – 10 of 16
Less hashing, same performance: Building a better bloom filter
In Proc. of the 14th Annual European Symposium on Algorithms (ESA 2006), 2006
Cited by 60 (7 self)
Abstract: A standard technique from the hashing literature is to use two hash functions h1(x) and h2(x) to simulate additional hash functions of the form gi(x) = h1(x) + i·h2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for …
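The gi(x) = h1(x) + i·h2(x) construction above can be illustrated with a short Python sketch (class names, parameters, and the choice of SHA-256 as the base hash are ours, not the authors'):

```python
import hashlib

class DoubleHashBloomFilter:
    """Bloom filter deriving all k probe positions from two base hashes,
    g_i(x) = h1(x) + i*h2(x) mod m, in the spirit of the paper above."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _base_hashes(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big") % self.m
        h2 = int.from_bytes(d[8:16], "big") % self.m or 1  # keep h2 nonzero
        return h1, h2

    def add(self, item):
        h1, h2 = self._base_hashes(item)
        for i in range(self.k):
            pos = (h1 + i * h2) % self.m
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        h1, h2 = self._base_hashes(item)
        for i in range(self.k):
            pos = (h1 + i * h2) % self.m
            if not (self.bits[pos // 8] >> (pos % 8) & 1):
                return False
        return True
```

Each insert or query computes one base hash and derives all k positions arithmetically, which is the computational saving the abstract refers to.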
An incremental heap canonicalization algorithm
In SPIN, 2005
Cited by 18 (0 self)
Abstract: The most expensive operation in explicit-state model checking is the hash computation required to store the explored states in a hash table. One way to reduce this computation is to compute the hash incrementally, processing only those portions of the state that are modified in a transition. This paper presents an incremental heap canonicalization algorithm that aids in such an incremental hash computation. Like existing heap canonicalization algorithms, the incremental algorithm reduces the state space explored by detecting heap symmetries. In addition, the algorithm ensures that for small changes in the heap the resulting canonical representations differ only by relatively small amounts. This reduces the amount of hash computation a model checker has to perform after every transition, resulting in a significant speedup of state space exploration. This paper describes the algorithm and its implementation in two explicit-state model checkers, CMC and Zing.
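The general idea of incremental hash computation (though not the paper's canonicalization algorithm itself) can be sketched with an XOR-combined per-field hash, where mutating one field recomputes only that field's contribution; this is a hypothetical illustration:

```python
import hashlib

def field_hash(index, value):
    # Per-field contribution; mixing in the index keeps positions distinct.
    data = f"{index}:{value}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

class IncrementalStateHash:
    """Maintains a running hash of a fixed-size state vector.  Mutating one
    field costs one field hash, not a rehash of the whole state."""

    def __init__(self, fields):
        self.fields = list(fields)
        self.h = 0
        for i, v in enumerate(self.fields):
            self.h ^= field_hash(i, v)

    def set(self, i, value):
        self.h ^= field_hash(i, self.fields[i])  # remove old contribution
        self.h ^= field_hash(i, value)           # add new contribution
        self.fields[i] = value
```

Because XOR is self-inverse, the incremental hash always equals the hash computed from scratch over the current field values.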
Fighting state space explosion: Review and evaluation
In Proc. of Formal Methods for Industrial Critical Systems (FMICS'08), 2008
Cited by 10 (3 self)
Abstract: In order to apply formal methods in practice, the practitioner has to comprehend a vast amount of research literature and realistically evaluate the practical merits of different approaches. In this paper we focus on explicit finite-state model checking and study this area from a practitioner's point of view. We provide a systematic overview of techniques for fighting state space explosion and analyse trends in the research. We also report on our own experience with the practical performance of these techniques. Our main conclusion and recommendation for practitioners is the following: be critical of claims of dramatic improvement brought by a single sophisticated technique; instead, use many different simple techniques and combine them.
Cache-, Hash- and Space-Efficient Bloom Filters
Cited by 7 (0 self)
A Bloom filter is a very compact data structure that supports approximate membership queries on a set, allowing false positives. We propose several new variants of Bloom filters and replacements with similar functionality. All of them have better cache-efficiency and need fewer hash bits than regular Bloom filters. Some use SIMD functionality, while the others provide an even better space-efficiency. As a consequence, we get a more flexible trade-off between false positive rate, space-efficiency, cache-efficiency, hash-efficiency, and computational effort. We analyze the efficiency of Bloom filters and the proposed replacements in detail, in terms of the false positive rate, the number of expected cache misses, and the number of required hash bits. We also describe and experimentally evaluate the performance of highly tuned implementations. For many settings, our alternatives perform better than the methods proposed so far.
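One common way to obtain cache-efficiency in this line of work is a blocked Bloom filter, in which every key maps to a single cache-line-sized block. The Python sketch below is illustrative only; the block size, hashing, and names are our assumptions, not the paper's exact construction:

```python
import hashlib

BLOCK_BITS = 512  # one 64-byte cache line

class BlockedBloomFilter:
    """Blocked Bloom filter sketch: each key touches a single
    cache-line-sized block, so a query costs at most one cache miss."""

    def __init__(self, num_blocks, k):
        self.num_blocks, self.k = num_blocks, k
        self.blocks = [0] * num_blocks  # each int models BLOCK_BITS bits

    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        block = int.from_bytes(d[:4], "big") % self.num_blocks
        # Slice further hash bits into k in-block bit positions.
        bits = [int.from_bytes(d[4 + 2 * i: 6 + 2 * i], "big") % BLOCK_BITS
                for i in range(self.k)]
        return block, bits

    def add(self, item):
        block, bits = self._positions(item)
        for b in bits:
            self.blocks[block] |= 1 << b

    def __contains__(self, item):
        block, bits = self._positions(item)
        return all(self.blocks[block] >> b & 1 for b in bits)
```

Confining all probes to one block trades a slightly higher false positive rate for one memory access per query, which is the cache/space trade-off the abstract describes.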
Per-Flow Packet Sampling for High-Speed Network Monitoring
Cited by 6 (1 self)
Abstract: We present a per-flow packet sampling method that enables the real-time classification of high-speed network traffic. Our method, based upon the partial sampling of each flow (i.e., performing sampling only at early stages in each flow's lifetime), provides a sufficient reduction in total traffic (e.g., a factor of five in packets, a factor of ten in bytes) to allow practical implementations at one Gigabit/s and, using limited hardware assistance, ten Gigabit/s.
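The early-stage sampling idea can be sketched as follows; the cutoff of N packets per flow and the flow-key representation are illustrative assumptions, not the paper's tuned parameters:

```python
from collections import defaultdict

SAMPLE_FIRST_N = 10  # assumed cutoff: packets kept from the start of each flow

class PerFlowSampler:
    """Forwards only the first N packets of every flow to the classifier;
    later packets of already-seen flows are dropped from the sample stream."""

    def __init__(self, n=SAMPLE_FIRST_N):
        self.n = n
        self.seen = defaultdict(int)  # flow 5-tuple -> packets observed

    def sample(self, flow_key, packet):
        self.seen[flow_key] += 1
        return packet if self.seen[flow_key] <= self.n else None
```

Since most bytes arrive after a flow's first few packets, dropping the tail of each flow yields the large traffic reduction quoted in the abstract while preserving the early packets that classifiers rely on.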
AyAlsoPlan: Bitstate Pruning for State-Based Planning on Massively Parallel Compute Clusters
In IPC 2011 Deterministic Track, 2011
Cited by 2 (0 self)
Many planning systems operate by performing a heuristic forward search in the problem state space. In large problems that approach fails, exhausting a computer's memory due to the burden of storing problem states. Moreover, it is an open question exactly how that approach should be parallelized to take advantage of modern multiple-processor computers and the proliferation of massively parallel compute clusters. This extended abstract proposes an answer to this second question, while also going some way to addressing the memory problems. We present AYALSOPLAN, our entry in the Multi-Core Track of the 2011 International Planning Competition (IPC 2011). Our approach is to run many independent and incomplete state-based searches in parallel. Our approach deliberately exploits hashing collisions to limit the set of states an individual search can encounter. Also, none of the parallel searches store all expanded states, each corresponding to a memory-efficient state-based reachability procedure, albeit an incomplete one. As soon as a search determines reachability, the parallel processing ceases, and a single-core computer can efficiently construct the plan. Because the 2011 IPC evaluation environment of the Sequential Multi-Core Track is not a massively parallel computer, and moreover because it imposes a very limited timeout, we have limited expectations regarding how AYALSOPLAN might be ranked in that evaluation. Therefore, this extended abstract commits some space to presenting empirical data we collected when evaluating our approach on our local cluster, without any runtime restrictions, i.e., searches can only fail when memory is exhausted. It is in that setting that we demonstrate the positive characteristics of our approach.
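Bitstate hashing, the pruning mechanism this entry relies on, can be sketched as follows (an illustrative Python version; the table size, hash, and search order are our choices):

```python
import hashlib

class BitstateSearch:
    """Bitstate (supertrace-style) reachability sketch: visited states are
    recorded only as single bits in a fixed-size table.  A hash collision
    makes a genuinely new state look visited, silently pruning the search,
    which is exactly the incompleteness the abstract exploits."""

    def __init__(self, num_bits, successors):
        self.num_bits = num_bits
        self.table = bytearray((num_bits + 7) // 8)
        self.successors = successors  # state -> iterable of next states

    def _mark(self, state):
        h = int.from_bytes(
            hashlib.sha256(repr(state).encode()).digest()[:8],
            "big") % self.num_bits
        seen = self.table[h // 8] >> (h % 8) & 1
        self.table[h // 8] |= 1 << (h % 8)
        return seen

    def explore(self, start):
        reached, stack = [], [start]
        while stack:
            s = stack.pop()
            if self._mark(s):
                continue
            reached.append(s)
            stack.extend(self.successors(s))
        return reached
```

Running many such searches with independent hash seeds gives each one a different pruning pattern, which is how independent incomplete searches can jointly cover more of the space.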
Peeling Arguments and Double Hashing
Cited by 2 (1 self)
Abstract: The analysis of several algorithms and data structures can be reduced to the analysis of the following greedy "peeling" process: start with a random hypergraph; find a vertex of degree at most k, and remove it and all of its adjacent hyperedges from the graph; repeat until there is no suitable vertex. This specific process finds the k-core of a hypergraph, and variations on this theme have proven useful in analyzing, for example, decoding of low-density parity-check codes, several hash-based data structures such as cuckoo hashing, and algorithms for satisfiability of random formulae. This approach can be analyzed in several ways, two common approaches being via a corresponding branching process or a fluid-limit family of differential equations. In this paper, we note an interesting aspect of these types of processes: the results are generally the same when the randomness is structured in the manner of double hashing. This phenomenon allows us to use less randomness and simplify the implementation of several hash-based data structures and algorithms. We explore this approach from both an empirical and a theoretical perspective, examining theoretical justifications as well as simulation results for specific problems.
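The peeling process described above can be written down directly; the following Python sketch (our own, quadratic-time for clarity) removes vertices of degree at most k together with their incident hyperedges until none remain:

```python
def peel(num_vertices, hyperedges, k):
    """Greedy peeling: repeatedly delete a vertex of degree <= k together
    with its incident hyperedges; returns the hyperedges that survive."""
    edges = [set(e) for e in hyperedges]
    degree = [0] * num_vertices
    for e in edges:
        for v in e:
            degree[v] += 1
    removed = [False] * num_vertices
    changed = True
    while changed:
        changed = False
        for v in range(num_vertices):
            if not removed[v] and degree[v] <= k:
                removed[v] = True
                changed = True
                for e in edges:
                    if v in e:
                        for u in e:   # drop the edge's degree contributions
                            degree[u] -= 1
                        e.clear()
    return [e for e in edges if e]
```

An empty result means the process peeled the whole hypergraph; a nonempty result is the surviving core, the case that corresponds to failure in, e.g., cuckoo hashing insertion analyses.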
Privacy-preserving spatiotemporal matching
In INFOCOM, 2013
Cited by 1 (0 self)
Abstract: The explosive growth of mobile-connected and location-aware devices makes it possible to establish trust relationships in a new way, which we coin spatiotemporal matching. In particular, a mobile user could very easily maintain a spatiotemporal profile recording his continuous whereabouts in time, and the level to which his spatiotemporal profile matches that of another user can be translated into the level of trust the two can have in each other. Since spatiotemporal profiles contain very sensitive personal information, privacy-preserving spatiotemporal matching is needed to ensure that as little information as possible about the spatiotemporal profile of either matching participant is disclosed beyond the matching result. We propose a cryptographic solution based on Private Set Intersection Cardinality and a more efficient non-cryptographic solution involving a novel use of the Bloom filter. We thoroughly analyze both solutions and compare their efficacy and efficiency via detailed simulation studies.
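The Bloom-filter side of such a scheme can be sketched by estimating set-intersection cardinality from the filters' bitmaps alone. The AND-then-invert estimate below is a rough overestimate and an assumption of this sketch, not necessarily the paper's protocol:

```python
import math
import hashlib

def make_filter(items, m, k):
    """Build an m-bit Bloom filter (as a Python int) with double hashing."""
    bits = 0
    for item in items:
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big") % m
        h2 = int.from_bytes(d[8:16], "big") % m or 1
        for i in range(k):
            bits |= 1 << ((h1 + i * h2) % m)
    return bits

def estimate_cardinality(bits, m, k):
    """Standard inversion of the fill ratio: n ~= -(m/k) * ln(1 - t/m),
    where t is the number of set bits."""
    t = bin(bits).count("1")
    if t == m:
        return float("inf")
    return -(m / k) * math.log(1 - t / m)

def estimate_intersection(f_a, f_b, m, k):
    # Sketch assumption: AND the bitmaps and invert as if the result were
    # a filter of the intersection alone.  Random bit overlap between the
    # two filters makes this a mild overestimate.
    return estimate_cardinality(f_a & f_b, m, k)
```

Exchanging only the bitmaps lets two parties approximate how much their profiles overlap without revealing the profile entries themselves, at the cost of the filter's false positive noise.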
Enhanced Probabilistic Verification with 3Spin and 3Murphi
Cited by 1 (1 self)
Abstract: 3Spin and 3Murphi are modified versions of the Spin model checker and the Murϕ verifier. Our modifications enhance the probabilistic algorithms and data structures for storing visited states, making them more effective and more usable for verifying huge transition systems. The tools also support a verification methodology designed to minimize the time to finding errors, or to reaching a desired certainty of error-freedom. This methodology calls for bitstate hashing, hash compaction, and integrated analyses of both to provide feedback and advice to the user. 3Spin and 3Murphi are the only tools to offer this support, and do so with the most powerful and flexible currently available implementations of the underlying algorithms and data structures.
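Hash compaction, one of the two techniques named above, can be sketched as follows (signature width and hash choice are illustrative, not the tools' implementation):

```python
import hashlib

class HashCompactionTable:
    """Hash compaction sketch: the visited-state set stores only a short
    signature of each state instead of the full state vector.  Two distinct
    states that share a signature are conflated (a small probability of
    omitting states), in exchange for a large memory saving."""

    def __init__(self, signature_bits=32):
        self.signature_bits = signature_bits
        self.signatures = set()

    def _signature(self, state):
        d = hashlib.sha256(repr(state).encode()).digest()
        return int.from_bytes(d[:8], "big") & ((1 << self.signature_bits) - 1)

    def insert(self, state):
        """Returns True if the state was (apparently) not seen before."""
        sig = self._signature(state)
        if sig in self.signatures:
            return False
        self.signatures.add(sig)
        return True
```

Compared with bitstate hashing, the stored signatures allow a much lower omission probability for the same memory, which is why the methodology above combines and analyzes both.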
A Multihop Advertising Discovery and Delivering Protocol for Multi Administrative Domain MANET
A Mobile Ad-hoc NETwork (MANET) is Multi Administrative Domain (MAD) if each network node belongs to an independent authority, that is, each node owns its resources and there is no central authority owning all network nodes. One of the main obstructions to designing a Service Advertising, Discovery and Delivery (SADD) protocol for MAD MANETs is the fact that, in an attempt to increase their own visibility, network nodes tend to flood the network with their advertisements. In this paper, we present a SADD protocol for MAD MANETs, based on Bloom filters, that effectively prevents advertising floods due to such misbehaving nodes. Our results with the ns-2 simulator show that our SADD protocol is effective in counteracting advertising floods; it keeps the collision rate as well as the energy consumption low while ensuring that each peer receives all messages broadcast by other peers.
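A hypothetical illustration of Bloom-filter-based flood suppression (the paper's actual SADD protocol is more involved and is not reproduced here): a node remembers advertisement identifiers it has already relayed and declines to rebroadcast repeats:

```python
import hashlib

class AdvertisementFilter:
    """Illustrative duplicate-suppression sketch: a relay node records
    forwarded advertisement IDs in a Bloom filter, so a misbehaving node
    re-flooding the same advertisement gets at most one rebroadcast."""

    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, ad_id):
        d = hashlib.sha256(ad_id.encode()).digest()
        h1 = int.from_bytes(d[:8], "big") % self.m
        h2 = int.from_bytes(d[8:16], "big") % self.m or 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def should_rebroadcast(self, ad_id):
        positions = self._positions(ad_id)
        seen = all(self.bits[p // 8] >> (p % 8) & 1 for p in positions)
        for p in positions:
            self.bits[p // 8] |= 1 << (p % 8)
        return not seen
```

The filter's fixed memory footprint suits resource-constrained MANET nodes; its false positives occasionally suppress a genuinely new advertisement, which any real protocol would have to bound or tolerate.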