Results 11 – 19 of 19
Element Distinctness, Frequency Moments, and Sliding Windows
Abstract

Cited by 4 (0 self)
Abstract — We derive new time-space tradeoff lower bounds and algorithms for exactly computing statistics of input data, including frequency moments, element distinctness, and order statistics, that are simple to calculate for sorted data. In particular, we develop a randomized algorithm for the element distinctness problem whose time T and space S satisfy T ∈ Õ(n^{3/2}/S^{1/2}), smaller than previous lower bounds for comparison-based algorithms, showing that element distinctness is strictly easier than sorting for randomized branching programs. This algorithm is based on a new time- and space-efficient algorithm for finding all collisions of a function f from a finite set to itself that are reachable by iterating f from a given set of starting points. We further show that our element distinctness algorithm can be extended at only a polylogarithmic factor cost to solve the element distinctness problem over sliding windows [18], where the task is to take an input of length 2n − 1 and produce an output for each window of length n, giving n outputs in total. In contrast, we show a time-space tradeoff lower bound of T ∈ Ω(n²/S) for randomized multiway branching programs, and hence standard RAM and word-RAM models, to compute the number of distinct elements, F_0, over sliding windows. The same lower bound holds for computing the low-order bit of F_0 and computing any frequency moment F_k for k ≠ 1. This shows that frequency moments F_k, k ≠ 1, and even the decision problem F_0 mod 2 are strictly harder than element distinctness. We provide even stronger separations on average for inputs from [n]. We complement this lower bound with a T ∈ Õ(n²/S) comparison-based deterministic RAM algorithm for exactly computing F_k over sliding windows, nearly matching both our general lower bound for the sliding-window version and the comparison-based lower bounds for a single instance of the problem. We also consider the computation of order statistics over sliding windows.
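The collision-finding idea the abstract describes — iterating f from a starting point until the iteration repeats — can be illustrated with the classic Floyd tortoise-and-hare cycle finder. This is only a constant-space, single-start sketch of the general technique, not the paper's multi-start, time-space-optimized algorithm:

```python
def find_collision(f, x0):
    """Floyd cycle detection on the rho-shaped path obtained by
    iterating f from x0. Returns a pair (u, v) with u != v and
    f(u) == f(v), or None if x0 already lies on the cycle (the
    path then contains no collision)."""
    # Phase 1: tortoise moves one step, hare two, until they meet.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: advance x0 and the meeting point in lockstep; they
    # become the two distinct predecessors of the cycle entry.
    u, v = x0, tortoise
    while f(u) != f(v):
        u, v = f(u), f(v)
    if u == v:
        return None  # x0 is on the cycle: no collision on this path
    return u, v
```

For example, iterating f(x) = (x² + 1) mod 31 from 0 walks 0 → 1 → 2 → 5 → 26 and then stays at the fixed point 26, so 5 and 26 collide.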
Time-Space Trade-Offs For Undirected ST-Connectivity on a JAG
Abstract
The following is a second proof of (basically) the same undirected st-connectivity result using recursive flyswatters as given in my thesis and in STOC 93 [Ed93a, EdPHD]. The input graph and the reduction techniques in the two proofs are similar. The main difference is that the JAG result is reduced to a different game. In this paper, the game consists of a pebble walking on a line. The movements of the pebble are directed by a player and a random input. The conjecture is that the player cannot get the pebble across the line much faster than a random walk does. This, however, is likely hard to prove. What can be proven is that this game becomes equivalent to the game in the original paper if the player directing the pebble always knows where on the line the pebble is. Therefore, the lower bound for the original game applies to this new game. Hence, the JAG lower bound proved in this paper is the same as that proven before. Two advantages of this new proof are that it is a litt...
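The baseline behind the conjecture — a random walk needs on the order of n² steps to get across a line of length n — can be checked with a small simulation. This is an illustration only: the reflecting boundary and parameters below are assumptions for the sketch, not the rules of the paper's pebble game:

```python
import random

def crossing_time(n, rng):
    """Number of steps for a symmetric +/-1 random walk, started at 0
    and reflected at the left end, to first reach position n.
    The expected value grows as Theta(n^2)."""
    pos = steps = 0
    while pos < n:
        pos = max(pos + rng.choice((-1, 1)), 0)  # reflect at 0
        steps += 1
    return steps

# Averaging over many trials for n = 20 gives a mean near n^2 steps.
rng = random.Random(0)
mean = sum(crossing_time(20, rng) for _ in range(1000)) / 1000
```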
Time-Space Tradeoffs for Set Operations
, 1994
Abstract
This paper considers time-space tradeoffs for various set operations. Denoting the time requirement of an algorithm by T and its space requirement by S, it is shown that TS = Ω(n²) for set complementation and TS = Ω(n^{3/2}) for set intersection, in the R-way branching program model. In the more restricted model of comparison branching programs, the paper provides two additional types of results. A tradeoff of TS = Ω(n^{2−ε(n)}), derived from Yao's lower bound for element distinctness, is shown for set disjointness, set union and set intersection (where ε(n) = O((log n)^{−1/2})). A bound of TS = Ω(n^{3/2}) is shown for deciding set equality and set inclusion. Finally, a classification of set operations is presented, and it is shown that all problems of a large naturally arising class are as hard as the problems bounded in this paper.
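For contrast with these lower bounds, a two-pointer sketch shows one extreme point of the tradeoff when the inputs happen to be sorted: linear time with constant extra space beyond the output. This example is not from the paper; it is only a reference point for what T and S mean here:

```python
def intersect_sorted(a, b):
    """Comparison-based intersection of two sorted lists:
    T = O(|a| + |b|) comparisons, O(1) extra working space."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1  # a[i] cannot appear in b beyond position j
        else:
            j += 1
    return out
```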
CHOICE-MEMORY TRADEOFF IN ALLOCATIONS
Abstract
In the classical balls-and-bins paradigm, where n balls are placed independently and uniformly in n bins, typically the number of bins with at least two balls in them is Θ(n) and the maximum number of balls in a bin is Θ(log n / log log n). It is well known that when each round offers k independent uniform options for bins, it is possible to typically achieve a constant maximal load if and only if k = Ω(log n). Moreover, it is possible whp to avoid any collisions between n/2 balls if k > log₂ n. In this work, we extend this into the setting where only m bits of memory are available. We establish a tradeoff between the number of choices k and the memory m, dictated by the quantity km/n. Roughly put, we show that for km ≫ n one can achieve a constant maximal load, while for km ≪ n no substantial improvement can be gained over the case k = 1 (i.e., a random allocation). For any k = Ω(log n) and m = Ω(log² n), one can achieve a constant load whp if km = Ω(n), yet the load is unbounded if km = o(n). Similarly, if km > Cn then n/2 balls can be allocated without any collisions whp, whereas for km < εn there are typically Ω(n) collisions. Furthermore, we show that the load is whp at least log(n/m) / (log k + log log(n/m)). In particular, for k ≤ polylog(n), if m = n^{1−δ} the optimal maximal load is Θ(log n / log log n) (the same as in the case k = 1), while m = 2n suffices to ensure a constant load. Finally, we analyze nonadaptive allocation algorithms and give tight upper and lower bounds for their performance.
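The full-memory baseline this tradeoff is measured against — greedy "best of k choices" allocation — can be simulated directly. The sketch below assumes unlimited memory (each ball reads the current loads of its k candidate bins), which is exactly what the m-bit constraint in the paper restricts:

```python
import random

def max_load(n, k, rng):
    """Throw n balls into n bins; each ball draws k uniform random
    bins and is placed in the currently least-loaded of them.
    Returns the maximum load over all bins."""
    bins = [0] * n
    for _ in range(n):
        best = min((rng.randrange(n) for _ in range(k)),
                   key=lambda b: bins[b])
        bins[best] += 1
    return max(bins)
```

With k = 2 the typical maximum load drops from the k = 1 value of Θ(log n / log log n) to Θ(log log n), the classic power-of-two-choices effect.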
on Time–Space Tradeoffs for Branching
, 1999
Abstract
We obtain the first nontrivial time–space tradeoff lower bound for functions f: {0, 1}^n → {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1+ε)n, for some constant ε > 0. We also give the first separation result between the syntactic and semantic read-k models (A. Borodin et al., Comput. Complexity 3 (1993), 1–18) for k > 1 by showing that polynomial-size semantic read-twice branching programs can compute functions that require exponential size on any semantic read-k branching program. We also show a time–space tradeoff result on the more general R-way branching program model (Borodin et al., 1993): for any k, we give a function that requires exponential size to be computed by length-kn q-way branching programs, for some q = q(k). This result gives a similar tradeoff for RAMs, and thus provides the first nontrivial time–space tradeoff for decision problems in this model. © 2001 Elsevier Science (USA)
Abstract
, 2001
Abstract
We prove new lower bounds for bounded-error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds, developed by Raz [30], to the quantum case. Applying this method we give an exponential separation between bounded-error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other Fourier-based lower bound methods, notably showing that s̄(f)/log n, for the average sensitivity s̄(f) of a function f, yields a lower bound on the bounded-error quantum communication complexity of f(x ∧ y ⊕ z), where x is a Boolean word held by Alice and y, z are Boolean words held by Bob. We then prove the first large lower bounds on the bounded-error quantum communication complexity of functions for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method, based on discrepancy, yields bounds that are O(log n).