Results 1–10 of 41
On Memory-Bound Functions for Fighting Spam
In Crypto, 2002
Cited by 103 (2 self)
In 1992, Dwork and Naor proposed that email messages be accompanied by easy-to-check proofs of computational effort in order to discourage junk email, now known as spam. They proposed specific CPU-bound functions for this purpose. Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [8].
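The pricing-via-processing idea can be illustrated with a hashcash-style CPU-bound puzzle: the sender searches for a nonce whose hash of the message falls below a threshold, while the receiver verifies with a single hash. This is a simplified sketch of the general concept, not Dwork and Naor's original functions; the function names and difficulty parameter are illustrative.

```python
import hashlib

def find_proof(message: str, difficulty: int = 10) -> int:
    """Sender's side: search for a nonce whose SHA-256 hash of
    message:nonce has `difficulty` leading zero bits (CPU-bound work)."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_proof(message: str, nonce: int, difficulty: int = 10) -> bool:
    """Receiver's side: one hash evaluation suffices to verify."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))
```

Finding a proof takes about 2^difficulty hash evaluations on average, while checking takes one; the asymmetry is what makes bulk spam expensive but legitimate mail cheap.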
Quantum Algorithms for Element Distinctness
SIAM Journal on Computing, 2001
Cited by 75 (9 self)
We present several applications of quantum amplitude amplification to finding claws and collisions in ordered or unordered functions. Our algorithms generalize those of Brassard, Høyer, and Tapp, and imply an O(N^{3/4} log N) quantum upper bound for the element distinctness problem in the comparison complexity model. This contrasts with the Θ(N log N) classical complexity. We also prove a lower bound of Ω(√N) comparisons for this problem and derive bounds for a number of related problems.
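For context, the classical Θ(N log N) comparison bound is achieved by the standard sort-and-scan algorithm; a minimal sketch:

```python
def all_distinct(xs):
    """Classical element distinctness: sort, then compare adjacent
    elements. Uses Θ(N log N) comparisons, matching the classical
    bound that the quantum algorithm above improves upon."""
    ys = sorted(xs)
    return all(a != b for a, b in zip(ys, ys[1:]))
```

Any repeated value must end up adjacent after sorting, so a single linear scan over neighbors suffices.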
A non-linear time lower bound for Boolean branching programs
In Proc. of 40th FOCS, 1999
Cited by 57 (0 self)
Abstract: We give an exponential lower bound for the size of any linear-time Boolean branching program computing an explicitly given function. More precisely, we prove that for all positive integers k and for all sufficiently small ε > 0, if n is sufficiently large then there is no Boolean (or 2-way) branching program of size less than 2^{εn} which, for all inputs X ⊆ {0, 1, ..., n − 1}, computes in time kn the parity of the number of elements of the set of all pairs ⟨x, y⟩ with the property x ∈ X, y ∈ X, x < y, x + y ∈ X. For the proof of this fact we show that if A = (a_{i,j})_{i,j=0}^{n} is a random n by n matrix over the field with 2 elements with the condition that "A is constant on each minor diagonal," then with high probability the rank of each δn by δn submatrix of A is at least cδ|log δ|^{−2} n, where c > 0 is an absolute constant and n is sufficiently large with respect to δ.
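The explicit hard function is simple to state operationally; a brute-force sketch (purely illustrative of what the function computes, with none of the branching-program time or size constraints):

```python
def ajtai_parity(X: set) -> int:
    """Parity of the number of pairs (x, y) with x in X, y in X,
    x < y, and x + y in X -- the explicit function shown hard for
    linear-time Boolean branching programs of subexponential size."""
    count = sum(1 for x in X for y in X if x < y and (x + y) in X)
    return count % 2
```

For example, X = {1, 2, 3} has exactly one qualifying pair, (1, 2) with 1 + 2 = 3 ∈ X, so the parity is 1.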
Lower bounds for high dimensional nearest neighbor search and related problems
1999
Cited by 55 (2 self)
In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [38, 37, 40] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [43] to study data structure problems. We derive non-trivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search.
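The trivial baseline that exact nearest neighbor data structures try to beat is the exhaustive linear scan; a minimal sketch (Euclidean distance assumed for concreteness):

```python
def nearest_neighbor(points, q):
    """Exact nearest neighbor by brute-force linear scan: O(n * d)
    per query, with no preprocessing. The lower bounds above suggest
    that in high dimensions, exact search cannot do dramatically
    better without enormous space."""
    return min(points,
               key=lambda p: sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```

Squared distances are compared directly, since squaring preserves the ordering of nonnegative distances.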
Compactly Encoding Unstructured Inputs with Differential Compression
Journal of the ACM, 2002
Cited by 52 (11 self)
The subject of this article is differential compression, the algorithmic task of finding common strings between versions of data and using them to encode one version compactly by describing it as a set of changes from its companion. A main goal of this work is to present new differencing algorithms that (i) operate at a fine granularity (the atomic unit of change), (ii) make no assumptions about the format or alignment of input data, and (iii) in practice use linear time, use constant space, and give good compression. We present new algorithms, which do not always compress optimally but use considerably less time or space than existing algorithms. One new algorithm runs in O(n) time and O(1) space in the worst case (where each unit of space contains log n bits), as compared to...
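The copy/insert encoding at the heart of differential compression can be sketched as follows. This is a toy quadratic-time greedy differencer for clarity, not the paper's hash-based linear-time, constant-space algorithms; all names are illustrative.

```python
def make_delta(reference: bytes, version: bytes, min_match: int = 4):
    """Toy greedy differencing: encode `version` as ('copy', off, len)
    ops referring to `reference` plus ('insert', byte) ops for novel
    data. Quadratic scan, for illustration only."""
    delta, i = [], 0
    while i < len(version):
        best_off, best_len = -1, 0
        for off in range(len(reference)):  # find longest match at i
            l = 0
            while (off + l < len(reference) and i + l < len(version)
                   and reference[off + l] == version[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        if best_len >= min_match:
            delta.append(("copy", best_off, best_len))
            i += best_len
        else:
            delta.append(("insert", version[i:i + 1]))
            i += 1
    return delta

def apply_delta(reference: bytes, delta) -> bytes:
    """Reconstruct the new version from the reference and the delta."""
    out = b""
    for op in delta:
        if op[0] == "copy":
            _, off, length = op
            out += reference[off:off + length]
        else:
            out += op[1]
    return out
```

Decoding is the cheap direction: it replays copies and inserts in order, which is why delta encodings suit version-distribution scenarios where many receivers already hold the reference.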
Time-Space Tradeoffs for Branching Programs
1999
Cited by 47 (4 self)
We obtain the first non-trivial time-space tradeoff lower bound for functions f : {0, 1}^n → {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1 + ε)n, for some constant ε > 0. We also give the first separation result between the syntactic and semantic read-k models [BRS93] for k > 1 by showing that polynomial-size semantic read-twice branching programs can compute functions that require exponential size on any syntactic read-k branching program. We also show...
Time-Space Tradeoff Lower Bounds for Randomized Computation of Decision Problems
In Proc. of 41st FOCS, 2000
Cited by 38 (5 self)
We prove the first time-space lower bound tradeoffs for randomized computation of decision problems.
Super-Linear Time-Space Tradeoff Lower Bounds for Randomized Computation
2000
Cited by 33 (2 self)
We prove the first time-space lower bound tradeoffs for randomized computation of decision problems. The bounds hold even in the case that the computation is allowed to have arbitrary probability of error on a small fraction of inputs. Our techniques are an extension of those used by Ajtai [Ajt99a, Ajt99b] in his time-space tradeoffs for deterministic RAM algorithms computing element distinctness and for Boolean branching programs computing a natural quadratic form. Ajtai's bounds were of the following form...
Time-Space Tradeoffs, Multiparty Communication Complexity, and Nearest-Neighbor Problems
In 34th Symp. on Theory of Computing (STOC'02), 2002
Cited by 24 (2 self)
We extend recent techniques for time-space tradeoff lower bounds using multiparty communication complexity ideas. Using these arguments, for inputs from large domains we prove larger tradeoff lower bounds than previously known for general branching programs, yielding time lower bounds of the form T = Ω(n log² n) when space S = n^{1−ε}, up from T = Ω(n log n) for the best previous results. We also prove the first unrestricted separation of the power of general and oblivious branching programs by proving that 1GAP, which is trivial on general branching programs, has a time-space tradeoff of the form T = Ω(n log(n/S)) on oblivious branching programs. Finally, using time-space tradeoffs for branching programs, we improve the lower bounds on query time of data structures for nearest neighbor problems in d dimensions from Ω(d/log n), proved in the cell-probe model [8, 5], to Ω(d), Ω(d log d/log log d), or even Ω(d log d) (depending on the metric space involved) in slightly less general but more reasonable data structure models.
The Minimum Distance of Turbo-Like Codes
Cited by 23 (0 self)
Worst-case upper bounds are derived on the minimum distance of parallel concatenated Turbo codes, serially concatenated convolutional codes, repeat-accumulate codes, repeat-convolute codes, and generalizations of these codes obtained by allowing nonlinear and large-memory constituent codes. It is shown that parallel-concatenated Turbo codes and repeat-convolute codes with sublinear memory are asymptotically bad. It is also shown that depth-two serially concatenated codes with constant-memory outer codes and sublinear-memory inner codes are asymptotically bad. Most of these upper bounds hold even when the convolutional encoders are replaced by general finite-state automata encoders. In contrast, it is proven that depth-three serially concatenated codes obtained by concatenating a repetition code with two accumulator codes through random permutations can be asymptotically good.
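One repeat-interleave-accumulate stage, the basic building block of the repeat-accumulate constructions analyzed above, can be sketched as follows (an illustrative encoder only; the depth-three construction in the abstract chains a repetition code through two such accumulator stages via random permutations):

```python
import random

def repeat_accumulate_encode(bits, r=3, perm=None):
    """Toy repeat-accumulate encoder: repeat each information bit r
    times, interleave with a permutation (random if none given), then
    accumulate -- i.e., take the running XOR (mod-2 prefix sum)."""
    repeated = [b for b in bits for _ in range(r)]
    if perm is None:
        perm = list(range(len(repeated)))
        random.shuffle(perm)
    interleaved = [repeated[p] for p in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b          # the accumulator: out[i] = XOR of first i+1 bits
        out.append(acc)
    return out
```

With the identity permutation and r = 3, the input [1, 0] repeats to [1, 1, 1, 0, 0, 0] and accumulates to [1, 0, 1, 1, 1, 1]; the random permutation is what the asymptotic-goodness result in the abstract relies on.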