Results 1-10 of 21
Sudden Emergence of a Giant k-Core in a Random Graph
J. Combinatorial Theory, Series B, 1996
Cited by 105 (8 self)

Abstract
The k-core of a graph is the largest subgraph with minimum degree at least k. For the Erdős-Rényi random graph G(n, m) on n vertices with m edges, it is known that a giant 2-core grows simultaneously with a giant component, that is, when m is close to n/2. We show that for k ≥ 3, with high probability, a giant k-core appears suddenly when m reaches c_k n/2; here c_k = min_{λ>0} λ/π_k(λ) and π_k(λ) = P{Poisson(λ) ≥ k−1}. In particular, c_3 ≈ 3.35. We also demonstrate that, unlike the 2-core, when a k-core appears for the first time it is very likely to be giant, of size p_k(λ_k)n, where λ_k is the minimum point of λ/π_k(λ) and p_k(λ_k) = P{Poisson(λ_k) ≥ k}. For k = 3, for instance, the newborn 3-core contains about 0.27n vertices. Our proofs are based on the probabilistic analysis of an edge deletion algorithm that always finds a k-core if the graph has one. 1991 Mathematics Subject Classification: Primary 05C80, 05C85, 60C05; Secondary 60F10, 60G42, 60J10.
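The deletion algorithm the abstract refers to is, in essence, the standard peeling procedure: repeatedly delete any vertex of degree below k until none remains. A minimal sketch (the function and variable names are my own, not the paper's):

```python
from collections import deque

def k_core(adj, k):
    """Vertex set of the k-core of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly deletes vertices of degree < k; what survives
    (possibly nothing) is the unique maximal subgraph with
    minimum degree >= k.
    """
    deg = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    queue = deque(v for v, d in deg.items() if d < k)
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed
```

Since a vertex is deleted only when it cannot belong to any subgraph of minimum degree k, the result is independent of the deletion order.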
State Compression in SPIN: Recursive Indexing and Compression Training Runs
In Proceedings of the Third International SPIN Workshop, 1997
Cited by 40 (1 self)

Abstract
The verification algorithm of SPIN is based on an explicit enumeration of a subset of the reachable state space of a system that is obtained through the formalization of a correctness requirement as an automaton. This automaton restricts the state space to precisely the subset that may contain the counterexamples to the original correctness requirement, if they exist. This method of verification conforms to the method for automata-theoretic verification outlined in [VW86]. SPIN derives ...
Randomized language models via perfect hash functions
In Proc. of ACL-08: HLT, 2008
Cited by 27 (0 self)

Abstract
We propose a succinct randomized language model which employs a perfect hash function to encode fingerprints of n-grams and their associated probabilities, backoff weights, or other parameters. The scheme can represent any standard n-gram model and is easily combined with existing model reduction techniques such as entropy pruning. We demonstrate the space savings of the scheme via machine translation experiments within a distributed language modeling framework.
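The core idea can be sketched under simplifying assumptions: a minimal perfect hash function places each stored n-gram at a unique array slot, and only a small fingerprint and the parameter are kept there, never the n-gram itself. The `phf` argument and all names below are hypothetical stand-ins, not the paper's API; a lookup of an unseen key returns the default except with probability about 2^-bits:

```python
import hashlib

def fingerprint(key, bits=12):
    # a bits-wide fingerprint of the key; an unseen key matches a
    # stored fingerprint with probability about 2**-bits
    h = hashlib.blake2b(key.encode(), digest_size=8).digest()
    return int.from_bytes(h, "big") % (1 << bits)

class FingerprintStore:
    """Sketch of the fingerprint/value arrays behind a randomized LM.

    phf: assumed minimal perfect hash function mapping each stored key
    to a unique index in [0, n) (hypothetical parameter; any MPHF
    construction would do). The keys themselves are never stored.
    """

    def __init__(self, items, phf, bits=12):
        n = len(items)
        self.phf, self.bits = phf, bits
        self.fp = [0] * n
        self.val = [0] * n
        for key, value in items.items():
            i = phf(key)
            self.fp[i] = fingerprint(key, bits)
            self.val[i] = value

    def get(self, key, default=None):
        i = self.phf(key)
        if self.fp[i] == fingerprint(key, self.bits):
            return self.val[i]  # correct for stored keys; rare false positives otherwise
        return default
```

For illustration one can fake the MPHF with a rank table over the known keys; a real deployment uses a compact MPHF precisely so the keys need not be kept.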
Cores in random hypergraphs and Boolean formulas
2003
Cited by 26 (3 self)

Abstract
We describe a technique for determining the thresholds for the appearance of cores in random structures. We use it to determine (i) the threshold for the appearance of a k-core in a random r-uniform hypergraph for all r, k ≥ 2 with r + k > 4, and (ii) the threshold for the pure literal rule to find a satisfying assignment for a random instance of r-SAT, r ≥ 3.
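The pure literal rule in (ii) is simple to state: a literal is pure if its negation occurs in no clause, and every clause containing a pure literal can be satisfied by setting that literal true and deleted. A self-contained sketch (the clause representation and names are illustrative):

```python
def pure_literal_simplify(clauses):
    """Apply the pure literal rule to a CNF formula until it stalls.

    clauses: iterable of sets of nonzero ints (positive int = variable,
    negative int = its negation). Returns (remaining clauses, partial
    assignment). If no clauses remain, the rule alone has found a
    satisfying assignment.
    """
    clauses = [frozenset(c) for c in clauses]
    assignment = {}
    changed = True
    while changed:
        changed = False
        literals = {lit for c in clauses for lit in c}
        pure = {lit for lit in literals if -lit not in literals}
        if pure:
            assignment.update({abs(lit): lit > 0 for lit in pure})
            kept = [c for c in clauses if not (c & pure)]
            changed = len(kept) != len(clauses)
            clauses = kept
    return clauses, assignment
```

This repeated deletion is itself a peeling process, which is how the threshold for (ii) connects to the core thresholds in (i).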
Monotone Minimal Perfect Hashing: Searching a Sorted Table with O(1) Accesses
Cited by 20 (8 self)

Abstract
A minimal perfect hash function maps a set S of n keys into the set {0, 1, ..., n − 1} bijectively. Classical results state that minimal perfect hashing is possible in constant time using a structure occupying space close to the lower bound of log e bits per element. Here we consider the problem of monotone minimal perfect hashing, in which the bijection is required to preserve the lexicographical ordering of the keys. A monotone minimal perfect hash function can be seen as a very weak form of index that provides ranking just on the set S (and answers randomly outside of S). Our goal is to minimise the description size of the hash function: we show that, for a set S of n elements out of a universe of 2^w elements, O(n log log w) bits are sufficient to hash monotonically with evaluation time O(log w). Alternatively, we can get space O(n log w) bits with O(1) query time. Both of these data structures improve a straightforward construction with O(n log w) space and O(log w) query time. As a consequence, it is possible to search a sorted table with O(1) accesses to the table (using additional O(n log log w) bits). Our results are based on a structure (of independent interest) that represents a trie in a very compact way, but admits errors. As a further application of the same structure, we show how to compute the predecessor (in the sorted order of S) of an arbitrary element, using O(1) accesses in expectation and an index of O(n log w) bits, improving the trivial result of O(nw) bits. This implies an efficient index for searching a blocked memory.
Encores on cores
Electronic Journal of Combinatorics 13 (2006), R81, 2005
Cited by 18 (6 self)

Abstract
We give a new derivation of the threshold of appearance of the k-core of a random graph. Our method uses a hybrid model obtained from a simple model of random graphs based on random functions, and the pairing or configuration model for random graphs with given degree sequence. Our approach also gives a simple derivation of properties of the degree sequence of the k-core of a random graph, in particular its relation to multinomial and hence independent Poisson variables. The method is also applied to d-uniform hypergraphs.
Simple and space-efficient minimal perfect hash functions
In Proc. of the 10th Intl. Workshop on Data Structures and Algorithms, 2007
Cited by 14 (7 self)

Abstract
Abstract. A perfect hash function (PHF) h: U → [0, m − 1] for a key set S is a function that maps the keys of S to unique values. The minimum amount of space to represent a PHF for a given set S is known to be approximately 1.44 n²/m bits, where n = |S|. In this paper we present new algorithms for construction and evaluation of PHFs of a given set (for m = n and m = 1.23n), with the following properties:
1. Evaluation of a PHF requires constant time.
2. The algorithms are simple to describe and implement, and run in linear time.
3. The amount of space needed to represent the PHFs is within a factor of 2 of the information-theoretical minimum.
No previously known algorithm has these properties. To our knowledge, any algorithm in the literature with the third property either requires exponential time for construction and evaluation, or uses near-optimal space only asymptotically, for extremely large n.
Succinct Data Structures for Retrieval and Approximate Membership
Cited by 13 (6 self)

Abstract
Abstract. The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U → {0, 1}^r that has specified values on the elements of a given set S ⊆ U, |S| = n, but may have any value on elements outside S. All known methods (e.g. those based on perfect hash functions) induce a space overhead of Θ(n) bits over the optimum, regardless of the evaluation time. We show that for any k, query time O(k) can be achieved using space that is within a factor 1 + e^{−k} of optimal, asymptotically for large n. The time to construct the data structure is O(n), expected. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits w.h.p. A general reduction transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Thus we obtain space bounds arbitrarily close to the lower bound for this problem as well. The evaluation procedures of our data structures are extremely simple. For the results stated above we assume free access to fully random hash functions. This assumption can be justified using space o(n) to simulate full randomness on a RAM.
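For intuition, a classic member of this family (not the paper's own construction, which achieves the 1 + e^{−k} overhead factor) is the 3-hash XOR retrieval table built by hypergraph peeling: it stores f in about 1.23·n·r bits with O(1) queries and returns arbitrary values outside S. A hedged Python sketch, with the hash functions simulated by seeded pseudorandom draws:

```python
import random

def _slots(key, m, seed):
    # stand-in for three fully random hash functions with distinct values
    return random.Random(hash((key, seed))).sample(range(m), 3)

def build_retrieval(items, m=None, seed=0):
    """Build a table T with T[s0] ^ T[s1] ^ T[s2] == items[key] for each key.

    Construction peels the random 3-uniform hypergraph whose edges are
    the slot triples; this succeeds w.h.p. once m is about 1.23x the
    number of keys. Returns (T, seed_used, m) for use by query().
    """
    keys = list(items)
    m = m or max(3, int(1.23 * len(keys)) + 2)
    for s in range(seed, seed + 100):  # retry with a fresh seed if peeling stalls
        slot_of = {k: _slots(k, m, s) for k in keys}
        keys_at = [set() for _ in range(m)]
        for k in keys:
            for p in slot_of[k]:
                keys_at[p].add(k)
        order = []
        stack = [p for p in range(m) if len(keys_at[p]) == 1]
        while stack:
            p = stack.pop()
            if len(keys_at[p]) != 1:
                continue
            (k,) = keys_at[p]  # the unique remaining key incident to slot p
            order.append((k, p))
            for q in slot_of[k]:
                keys_at[q].discard(k)
                if len(keys_at[q]) == 1:
                    stack.append(q)
        if len(order) == len(keys):  # every key peeled: the system is solvable
            T = [0] * m
            for k, p in reversed(order):  # assign in reverse peel order
                q, r = (x for x in slot_of[k] if x != p)
                T[p] = items[k] ^ T[q] ^ T[r]
            return T, s, m
    raise RuntimeError("peeling failed; try a larger table")

def query(T, s, m, key):
    a, b, c = _slots(key, m, s)
    return T[a] ^ T[b] ^ T[c]
```

The Θ(n)-bit overhead the abstract mentions shows up here as the roughly 23% of extra table slots; the paper's contribution is driving that overhead down to a vanishing fraction of the optimum.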