Results 11 – 17 of 17
Communication-Space Tradeoffs for Unrestricted Protocols
SIAM Journal on Computing, 1994
"... This paper introduces communicating branching programs, and develops a general technique for demonstrating communicationspace tradeoffs for pairs of communicating branching programs. This technique is then used to prove communicationspace tradeoffs for any pair of communicating branching programs ..."
Abstract

Cited by 8 (0 self)
This paper introduces communicating branching programs, and develops a general technique for demonstrating communication-space tradeoffs for pairs of communicating branching programs. This technique is then used to prove communication-space tradeoffs for any pair of communicating branching programs that hashes according to a universal family of hash functions. Other tradeoffs follow from this result. As an example, any pair of communicating Boolean branching programs that computes matrix-vector products over GF(2) requires communication-space product Ω(n^2), provided the space used is o(n / log n). These are the first examples of communication-space tradeoffs on a completely general model of communicating processes.
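To make the hashing example concrete, here is a minimal sketch of the matrix-vector product over GF(2); the family {h_A : x ↦ A·x mod 2} over random 0/1 matrices A is a standard universal hash family (function and variable names are illustrative, not from the paper):

```python
import random

def gf2_matvec(A, x):
    """Matrix-vector product over GF(2): each output bit is the parity
    of the inner product of a matrix row with the input vector."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

# A random m x n 0/1 matrix A defines the hash h_A(x) = A.x over GF(2);
# for distinct x, y, Pr[h_A(x) = h_A(y)] = 2^-m, so the family is universal.
n, m = 8, 4
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
x = [1, 0, 1, 1, 0, 0, 1, 0]
print(gf2_matvec(A, x))
```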
A Time-Space Tradeoff for Boolean Matrix Multiplication
"... A timespace tradeoff is established in the branching program model for the problem of computing the product of two n x n matrices over the semiring ((0, l}, V, A). It is a.ssumed that ea.ch element of each nxn input matrix is chosen independently to be 1 with probability nll2 and to be 0 with prob ..."
Abstract

Cited by 7 (0 self)
A time-space tradeoff is established in the branching program model for the problem of computing the product of two n × n matrices over the semiring ({0, 1}, ∨, ∧). It is assumed that each element of each n × n input matrix is chosen independently to be 1 with probability n^(-1/2) and to be 0 with probability 1 − n^(-1/2). Letting S and T denote the expected space and time of a deterministic algorithm, the tradeoff is ST = Ω(n^3.5) for T < c1·n^2.5 and ST = Ω(n^3) for T > c2·n^2.5, where c1, c2 > 0. The lower bounds are matched to within a logarithmic factor by upper bounds in the branching program model. Thus, the tradeoff possesses a sharp break at T = Θ(n^2.5). These expected-case lower bounds are also the best known lower bounds for the worst case.
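The problem instance here is matrix product over the Boolean semiring ({0, 1}, ∨, ∧); a minimal sketch of the computation itself (names are illustrative, and this says nothing about the space-bounded model of the paper):

```python
def bool_matmul(A, B):
    """Product over the semiring ({0,1}, OR, AND): entry (i, j) is 1
    iff some k has A[i][k] = B[k][j] = 1."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

# Multiplying by the identity leaves a matrix unchanged, as over any semiring.
I = [[1, 0], [0, 1]]
B = [[0, 1], [1, 1]]
print(bool_matmul(I, B))  # [[0, 1], [1, 1]]
```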
Time-Space Lower Bounds for Undirected and Directed ST-Connectivity on a JAG
1993
"... Directed and undirected stconnectivity are important problems in computing. There are algorithms for the undirected case that use O (n) time and algorithms that use O (log n) space. The first result of this thesis proves that, in a very natural structured model, the JAG (Jumping Automata for Graph ..."
Abstract

Cited by 5 (2 self)
Directed and undirected st-connectivity are important problems in computing. There are algorithms for the undirected case that use O(n) time and algorithms that use O(log n) space. The first result of this thesis proves that, in a very natural structured model, the JAG (Jumping Automaton for Graphs), these upper bounds are not simultaneously achievable. This uses new entropy techniques to prove tight bounds on a game involving a helper and a player that models a computation having precomputed information about the input stored in its bounded space. The second result proves that a JAG requires a time-space tradeoff of T · S^(1/2) ∈ Ω(m · n^(1/2)) to compute directed st-connectivity. The third result proves a time-space tradeoff of T · S^(1/3) ∈ Ω(m^(2/3) · n^(2/3)) on a version of the...
Comparison-Based Time–Space Lower Bounds for Selection
"... We establish the first nontrivial lower bounds on timespace tradeoffs for the selection problem. We prove that any comparisonbased randomized algorithm for finding the median requires Ω(n log logS n) expected time in the RAM model (or more generally in the comparison branching program model), if we ..."
Abstract

Cited by 5 (1 self)
We establish the first nontrivial lower bounds on time-space tradeoffs for the selection problem. We prove that any comparison-based randomized algorithm for finding the median requires Ω(n log log_S n) expected time in the RAM model (or more generally in the comparison branching program model), if we have S bits of extra space besides the read-only input array. This bound is tight for all S ≫ log n, and remains true even if the array is given in a random order. Our result thus answers a 16-year-old question of Munro and Raman, and also complements recent lower bounds that are restricted to sequential access, as in the multi-pass streaming model [Chakrabarti et al., SODA 2008]. We also prove that any comparison-based, deterministic, multi-pass streaming algorithm for finding the median requires Ω(n log*(n/s) + n log_s n) worst-case time (in scanning plus comparisons), if we have s cells of space. This bound is also tight for all s ≫ log^2 n. We get deterministic lower bounds for I/O-efficient algorithms as well. All proofs in this paper involve “elementary” techniques only.
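For intuition about the regime the bound addresses, a hedged sketch of comparison-based selection from a read-only array with O(1) extra words, trading many passes for the saved space (an illustration of the time-space tension, not the paper's algorithm; all names are made up):

```python
def select_small_space(arr, k):
    """k-th smallest (0-indexed) of a read-only sequence using O(1)
    extra words: repeatedly scan for the smallest value above the
    current one, accumulating its rank, until rank k is covered."""
    cur = min(arr)
    rank = arr.count(cur) - 1  # highest rank occupied by value cur
    while rank < k:
        cur = min(x for x in arr if x > cur)
        rank += arr.count(cur)
    return cur

data = [9, 4, 7, 4, 1, 8, 2]
median = select_small_space(data, (len(data) - 1) // 2)
print(median)  # 4
```

Each pass costs a full scan, so with distinct keys this takes O(kn) time, the extreme small-space end of the tradeoff.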
A theory of clock synchronization (extended abstract)
In Proceedings of the ACM Symposium on Theory of Computing, 1994
"... 1 ..."
Parallel String Matching Algorithms
1990
"... The string matching problem is one of the most studied problems in computer science. While it is very easily stated and many of the simple algorithms perform very well in practice, numerous works have been published on the subject and research is still very active. In this paper we survey recent ..."
Abstract

Cited by 1 (0 self)
The string matching problem is one of the most studied problems in computer science. While it is easily stated and many of the simple algorithms perform very well in practice, numerous works have been published on the subject and research is still very active. In this paper we survey recent results on parallel algorithms for the string matching problem.
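For reference, the baseline that both linear-time sequential and parallel algorithms improve on; a minimal naive matcher (names illustrative):

```python
def naive_match(text, pattern):
    """All positions where pattern occurs in text, by direct comparison
    at each alignment: O(n * m) worst-case work."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]

print(naive_match("abababa", "aba"))  # [0, 2, 4]
```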
Time-Space Trade-Offs for Undirected ST-Connectivity on a JAG
"... The following is a second proof of (basically) the same undirected stconnectivity result using recursive flyswatters as given in my thesis and in STOC93 [Ed93a, EdPHD]. The input graph and the reduction techniques in the two proofs are similar. The main difference is that JAG result is reduced to ..."
Abstract
The following is a second proof of (basically) the same undirected st-connectivity result using recursive flyswatters as given in my thesis and in STOC '93 [Ed93a, EdPHD]. The input graph and the reduction techniques in the two proofs are similar. The main difference is that the JAG result is reduced to a different game. In this paper, the game consists of a pebble walking on a line. The movements of the pebble are directed by a player and a random input. The conjecture is that the player cannot get the pebble across the line much faster than a random walk would. This, however, is likely hard to prove. What can be proven is that this game becomes equivalent to the game in the original paper if the player directing the pebble always knows where in the line the pebble is. Therefore, the lower bound for the original game applies to this new game. Hence, the JAG lower bound proved in this paper is the same as that proven before. Two advantages of this new proof are that it is a litt...
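The pebble game can be simulated directly; a minimal sketch of the pebble as an unbiased random walk on a line of length n, with a reflecting barrier at the start (the barrier and all names are assumptions for illustration, not the paper's game), whose expected crossing time grows quadratically:

```python
import random

def crossing_steps(n, seed=0):
    """Steps for an unbiased random walk, reflected at position 0,
    to first reach position n; the expectation grows like n^2."""
    rng = random.Random(seed)
    pos = steps = 0
    while pos < n:
        pos = max(0, pos + rng.choice((-1, 1)))
        steps += 1
    return steps

trials = [crossing_steps(10, seed=s) for s in range(50)]
print(sum(trials) / len(trials))  # roughly n^2 = 100 on average
```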