Results 1 - 10 of 33
Counting Distinct Elements in a Data Stream
, 2002
"... We present three algorithms to count the number of distinct elements in a data stream to within a factor of 1 ± epsilon. Our algorithms improve upon known algorithms for this problem, and offer a spectrum of time/space tradeoffs. ..."
Abstract

Cited by 194 (4 self)
We present three algorithms to count the number of distinct elements in a data stream to within a factor of 1 ± epsilon. Our algorithms improve upon known algorithms for this problem, and offer a spectrum of time/space tradeoffs.
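The flavor of such small-space estimators can be illustrated with the classic k-minimum-values sketch, a standard technique in this literature (not necessarily one of the paper's three algorithms; the name `distinct_estimate` and the parameter k are illustrative):

```python
import hashlib
import heapq

def distinct_estimate(stream, k=64):
    """Estimate the number of distinct elements in `stream` with the
    k-minimum-values sketch: keep the k smallest hash values seen and
    extrapolate from how densely they cover the hash range."""
    M = 2 ** 64
    heap = []        # max-heap via negation: the k smallest hashes so far
    in_heap = set()  # hash values currently held, to skip duplicates
    for x in stream:
        h = int.from_bytes(
            hashlib.blake2b(repr(x).encode(), digest_size=8).digest(), "big")
        if h in in_heap:
            continue
        if len(heap) < k:
            heapq.heappush(heap, -h)
            in_heap.add(h)
        elif h < -heap[0]:
            evicted = -heapq.heappushpop(heap, -h)
            in_heap.discard(evicted)
            in_heap.add(h)
    if len(heap) < k:
        return len(heap)  # fewer than k distinct hashes: count is exact
    return int((k - 1) * M / -heap[0])
```

Larger k tightens the estimate (relative error roughly 1/sqrt(k)) at the cost of more space, the same kind of time/space tradeoff the abstract describes.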
Tractability of Parameterized Completion Problems on Chordal, Strongly Chordal and Proper Interval Graphs
, 1994
"... We study the parameterized complexity of three NPhard graph completion problems. The MINIMUM FILLIN problem is to decide if a graph can be triangulated by adding at most k edges. We develop O(c m) and O(k mn + f(k)) algorithms for this problem on a graph with n vertices and m edges. Here f(k ..."
Abstract

Cited by 58 (5 self)
We study the parameterized complexity of three NP-hard graph completion problems. The MINIMUM FILL-IN problem is to decide if a graph can be triangulated by adding at most k edges. We develop O(c^k m) and O(k^2 mn + f(k)) algorithms for this problem on a graph with n vertices and m edges. Here f(k) is exponential in k and the constants hidden by the big-O notation are small and do not depend on k. In particular, this implies that the problem is fixed-parameter tractable (FPT). The PROPER …
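The underlying decision problem can be stated in a few lines of code. The sketch below uses naive branching over all missing edges, exponential in n and purely to make the problem concrete (it is not the paper's parameterized algorithm), relying on the fact that a graph is chordal iff it admits a perfect elimination ordering:

```python
from itertools import combinations

def is_chordal(adj):
    """Chordality test by repeatedly eliminating a simplicial vertex
    (one whose neighborhood is a clique); a graph is chordal iff this
    elimination succeeds until no vertices remain."""
    adj = {v: set(ns) for v, ns in adj.items()}
    while adj:
        v = next((v for v, ns in adj.items()
                  if all(u in adj[w] for u, w in combinations(ns, 2))), None)
        if v is None:
            return False
        for u in adj.pop(v):
            adj[u].discard(v)
    return True

def fill_in_at_most(adj, k):
    """MINIMUM FILL-IN as a decision problem: can `adj` be triangulated
    by adding at most k edges?  Naive branching over all non-edges."""
    if is_chordal(adj):
        return True
    if k == 0:
        return False
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            adj[u].add(v); adj[v].add(u)
            ok = fill_in_at_most(adj, k - 1)
            adj[u].discard(v); adj[v].discard(u)
            if ok:
                return True
    return False
```

For example, a chordless 4-cycle is not chordal, needs one fill edge, and either diagonal triangulates it.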
An experimental analysis of self-adjusting computation
 In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)
, 2006
"... Selfadjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a r ..."
Abstract

Cited by 51 (25 self)
Self-adjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a reasonably broad range of applications. In this article, we describe algorithms and implementation techniques to realize self-adjusting computation and present an experimental evaluation of the proposed approach on a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms. The results of the experiments show that the approach is effective in practice, often offering orders of magnitude speedup over recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.
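A drastically simplified picture of the core idea, with a hypothetical `Cell` class (the real system tracks dependences and memoizes at the granularity of whole computations; this sketch only shows reads being recorded and writes propagating to dependents):

```python
class Cell:
    """A toy 'modifiable reference': reads are recorded as dependences,
    and writing a new value reruns only the computations that read it.
    A sketch of the idea, not the article's implementation."""

    def __init__(self, value=None):
        self.value = value
        self.readers = set()

    def read(self, reader):
        self.readers.add(reader)  # record the dependence
        return self.value

    def write(self, value):
        if value != self.value:
            self.value = value
            for thunk in list(self.readers):
                thunk()  # change propagation: rerun dependents only


a, b, out = Cell(1), Cell(2), Cell()

def add():
    out.write(a.read(add) + b.read(add))

add()        # initial run computes out.value == 3
a.write(10)  # incremental change reruns only `add`, giving out.value == 12
```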
Self-Adjusting Computation
 In ACM SIGPLAN Workshop on ML
, 2005
"... From the algorithmic perspective, we describe novel data structures for tracking the dependences ina computation and a changepropagation algorithm for adjusting computations to changes. We show that the overhead of our dependence tracking techniques is O(1). To determine the effectiveness of change ..."
Abstract

Cited by 49 (19 self)
From the algorithmic perspective, we describe novel data structures for tracking the dependences in a computation and a change-propagation algorithm for adjusting computations to changes. We show that the overhead of our dependence tracking techniques is O(1). To determine the effectiveness of change-propagation, we present an analysis technique, called trace stability, and apply it to a number of applications.
Cryptography in the Bounded Quantum-Storage Model
 IN 46TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS)
, 2005
"... We initiate the study of twoparty cryptographic primitives with unconditional security, assuming that the adversary’s quantum memory is of bounded size. We show that oblivious transfer and bit commitment can be implemented in this model using protocols where honest parties need no quantum memory, w ..."
Abstract

Cited by 36 (8 self)
We initiate the study of two-party cryptographic primitives with unconditional security, assuming that the adversary’s quantum memory is of bounded size. We show that oblivious transfer and bit commitment can be implemented in this model using protocols where honest parties need no quantum memory, whereas an adversarial player needs quantum memory of size at least n/2 in order to break the protocol, where n is the number of qubits transmitted. This is in sharp contrast to the classical bounded-memory model, where we can only tolerate adversaries with memory of size quadratic in honest players’ memory size. Our protocols are efficient, non-interactive and can be implemented using today’s technology. On the technical side, a new entropic uncertainty relation involving min-entropy is established.
A tight high-order entropic quantum uncertainty relation with applications
, 2007
"... We derive a new entropic quantum uncertainty relation involving minentropy. The relation is tight and can be applied in various quantumcryptographic settings. Protocols for quantum 1outof2 Oblivious Transfer and quantum Bit Commitment are presented and the uncertainty relation is used to prove ..."
Abstract

Cited by 27 (9 self)
We derive a new entropic quantum uncertainty relation involving min-entropy. The relation is tight and can be applied in various quantum-cryptographic settings. Protocols for quantum 1-out-of-2 Oblivious Transfer and quantum Bit Commitment are presented, and the uncertainty relation is used to prove the security of these protocols in the bounded-quantum-storage model according to new strong security definitions. As another application, we consider the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers. The uncertainty relation allows one to prove the security of QKD protocols in this setting while tolerating considerably higher error rates compared to the standard model with unbounded adversaries. For instance, for the six-state protocol with one-way communication, a bit-flip error rate of up to 17% can be tolerated (compared to 13% in the standard model). Our uncertainty relation also yields a lower bound on the min-entropy key uncertainty against known-plaintext attacks when quantum ciphers are composed. Previously, the key uncertainty of these ciphers was only known with respect to Shannon entropy.
Improved security analyses for CBC MACs
 In Advances in Cryptology - CRYPTO 2005, LNCS 3621
, 2005
"... Abstract We present an improved bound on the advantage of any qquery adversary at distinguishingbetween the CBC MAC over a random nbit permutation and a random function outputting nbits. The result assumes that no message queried is a prefix of any other, as is the case when all messages to be MAC ..."
Abstract

Cited by 25 (6 self)
We present an improved bound on the advantage of any q-query adversary at distinguishing between the CBC MAC over a random n-bit permutation and a random function outputting n bits. The result assumes that no message queried is a prefix of any other, as is the case when all messages to be MACed have the same length. We go on to give an improved analysis of the encrypted CBC MAC, where there is no restriction on queried messages. Letting …
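For reference, the basic (unencrypted) CBC MAC chains the blocks of a message through the cipher. The sketch below uses a SHA-256-based stand-in for the block cipher (hypothetical, not an actual permutation), so it only illustrates the chaining structure being analyzed:

```python
import hashlib

BLOCK = 16  # bytes; stands in for the cipher's n-bit block width

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher E_K: SHA-256 truncated to one
    # block. It is not a permutation and not secure; illustration only.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """CBC MAC: C_i = E_K(C_{i-1} XOR M_i) with a zero IV; the tag is
    the last chaining value. Assumes full blocks and, as in the
    analysis above, prefix-free (e.g. fixed-length) messages."""
    assert len(msg) % BLOCK == 0
    state = bytes(BLOCK)  # zero IV
    for i in range(0, len(msg), BLOCK):
        xored = bytes(a ^ b for a, b in zip(state, msg[i:i + BLOCK]))
        state = toy_cipher(key, xored)
    return state
```

The prefix-free restriction matters: without it, a tag on a message reveals an internal chaining value for every extension of that message, which is what the encrypted CBC MAC variant addresses.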
Succinct Data Structures for Retrieval and Approximate Membership
"... Abstract. The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U → {0, 1} r that has specified values on the elements of a given set S ⊆ U, S  = n, but may have any value on elements outside S. All known methods (e. g. ..."
Abstract

Cited by 19 (6 self)
The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U → {0, 1}^r that has specified values on the elements of a given set S ⊆ U, |S| = n, but may have any value on elements outside S. All known methods (e.g., those based on perfect hash functions) induce a space overhead of Θ(n) bits over the optimum, regardless of the evaluation time. We show that for any k, query time O(k) can be achieved using space that is within a factor 1 + e^{-k} of optimal, asymptotically for large n. The time to construct the data structure is O(n), expected. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits w.h.p. A general reduction transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Thus we obtain space bounds arbitrarily close to the lower bound for this problem as well. The evaluation procedures of our data structures are extremely simple. For the results stated above we assume free access to fully random hash functions. This assumption can be justified using space o(n) to simulate full randomness on a RAM.
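For contrast with the paper's near-optimal bounds, here is the traditional Bloom-filter approach to approximate membership mentioned above (a standard textbook construction; the parameters m and d are illustrative):

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: m bits, d hash functions. Membership
    queries have no false negatives; false positives occur at a rate
    of roughly (1 - e^{-dn/m})^d after n insertions."""

    def __init__(self, m=1024, d=4):
        self.m, self.d = m, d
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _indexes(self, item):
        for i in range(self.d):
            h = hashlib.blake2b(repr(item).encode(),
                                digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = 1

    def __contains__(self, item):
        return all(self.bits[i] for i in self._indexes(item))
```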
Bounds on the OBDD-Size of Integer Multiplication via Universal Hashing
, 2005
"... Bryant [5] has shown that any OBDD for the function MULn−1,n, i.e. the middle bit of the nbit multiplication, requires at least 2 n/8 nodes. In this paper a stronger lower bound of essentially 2 n/2 /61 is proven by a new technique, using a universal family of hash functions. As a consequence, one ..."
Abstract

Cited by 13 (1 self)
Bryant [5] has shown that any OBDD for the function MUL_{n−1,n}, i.e. the middle bit of the n-bit multiplication, requires at least 2^{n/8} nodes. In this paper a stronger lower bound of essentially 2^{n/2}/61 is proven by a new technique, using a universal family of hash functions. As a consequence, one cannot hope anymore to verify e.g. 128-bit multiplication circuits using OBDD techniques, because the representation of the middle bit of such a multiplier requires more than 3 · 10^{17} OBDD nodes. Further, a first nontrivial upper bound of (7/3) · 2^{4n/3} for the OBDD size of MUL_{n−1,n} is provided.
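The notion of a universal hash family can be sketched with the classic Carter-Wegman construction for integer keys (this illustrates the definition, not the paper's specific family): for any fixed x ≠ y, a randomly drawn h collides on them with probability about 1/m.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, assumed larger than any key

def make_hash(m, rng=random):
    """Draw h(x) = ((a*x + b) mod P) mod m from the 2-universal
    Carter-Wegman family: for fixed x != y, Pr[h(x) == h(y)] is
    roughly 1/m over the random choice of (a, b)."""
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m
```

The universality guarantee holds over the draw of (a, b), not over the keys, which is what makes such families usable inside worst-case lower-bound arguments.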