Results 1 - 7 of 7
A non-linear time lower bound for Boolean branching programs
In Proc. of 40th FOCS, 1999
Cited by 55 (0 self)
Abstract: We give an exponential lower bound for the size of any linear-time Boolean branching program computing an explicitly given function. More precisely, we prove that for all positive integers k and for all sufficiently small ε > 0, if n is sufficiently large then there is no Boolean (or 2-way) branching program of size less than 2^{εn} which, for all inputs X ⊆ {0,1,...,n − 1}, computes in time kn the parity of the number of elements of the set of all pairs 〈x,y〉 with the property x ∈ X, y ∈ X, x < y, x + y ∈ X. For the proof of this fact we show that if A = (a_{i,j})_{i,j=0}^{n} is a random n by n matrix over the field with 2 elements with the condition that “A is constant on each minor diagonal,” then with high probability the rank of each δn by δn submatrix of A is at least cδ(log δ)^{−2}n, where c > 0 is an absolute constant and n is sufficiently large with respect to δ.
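The explicitly given function in this abstract has a concrete elementary definition; a minimal Python sketch (the function name is hypothetical) that computes it directly in quadratic time:

```python
def pair_sum_parity(X):
    """Parity of |{(x, y) : x in X, y in X, x < y, x + y in X}|,
    for X a subset of {0, ..., n-1}. The abstract's lower bound says no
    linear-time, subexponential-size branching program computes this."""
    X = set(X)
    count = sum(1 for x in X for y in X if x < y and (x + y) in X)
    return count % 2
```

This reference implementation is quadratic; the paper's point is not this algorithm but that no branching program can trade it down to linear time without exponential size.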
Lower bounds for high dimensional nearest neighbor search and related problems
1999
Cited by 47 (2 self)
In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [38, 37, 40] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [43] to study data structure problems. We derive nontrivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search.
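For context, the two problems whose data-structure complexity the paper bounds are easy to state as brute-force baselines; a hedged Python sketch (names hypothetical, no preprocessing) of exact nearest neighbor and exact partial match:

```python
import math

def exact_nn(points, q):
    """Brute-force exact nearest neighbor: O(n*d) per query with no
    preprocessing. The paper's lower bounds constrain data structures
    that try to answer such queries faster after preprocessing."""
    best, best_dist = None, math.inf
    for p in points:
        dist = sum((pi - qi) ** 2 for pi, qi in zip(p, q))
        if dist < best_dist:
            best, best_dist = p, dist
    return best

def partial_match(database, pattern):
    """Exact partial match: is there a stored word agreeing with
    `pattern` on every non-wildcard ('*') position?"""
    return any(all(c == '*' or c == w for c, w in zip(pattern, word))
               for word in database)
```

The asymmetric communication model abstracts the query-versus-memory interaction of any such data structure, which is how the lower bounds apply regardless of implementation.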
Pseudorandom Generators in Propositional Proof Complexity
Electronic Colloquium on Computational Complexity, Report No. 23, 2000
Cited by 39 (7 self)
We call a pseudorandom generator Gn : {0, 1}^n → {0, 1}^m hard for a propositional proof system P if P cannot efficiently prove the (properly encoded) statement Gn(x1, ..., xn) ≠ b for any string b ∈ {0, 1}^m. We consider a variety of "combinatorial" pseudorandom generators inspired by the Nisan-Wigderson generator on the one hand, and by the construction of Tseitin tautologies on the other. We prove that under certain circumstances these generators are hard for such proof systems as Resolution, Polynomial Calculus and Polynomial Calculus with Resolution (PCR).
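As a toy illustration only (not the paper's actual construction), here is a combinatorial generator in the Nisan-Wigderson style, where each output bit is a fixed Boolean function of a small subset of input bits; since m > n forces the map to miss some b, the statement G(x) ≠ b for such a b is a tautology, and hardness asks that P has no short proof of it. All names are hypothetical:

```python
from itertools import product

def nw_style_generator(x, subsets, base_fn):
    """Output bit i is base_fn applied to the bits of x indexed by
    subsets[i], mimicking the Nisan-Wigderson design pattern."""
    return tuple(base_fn([x[j] for j in S]) for S in subsets)

def find_unreachable(n, subsets, base_fn):
    """Exhaustively find some b outside the image of G (tiny n only);
    'G(x) != b' for this b is the tautology handed to the proof system."""
    image = {nw_style_generator(x, subsets, base_fn)
             for x in product((0, 1), repeat=n)}
    for b in product((0, 1), repeat=len(subsets)):
        if b not in image:
            return b
    return None
```

With parity as the base function and subsets {0}, {1}, {0,1}, the third output bit is determined by the first two, so strings violating that relation are unreachable.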
Determinism versus Non-Determinism for Linear Time RAMs with Memory Restrictions
In Proc. of 31st STOC, 1998
Cited by 37 (2 self)
Our computational model is a random access machine with n read-only input registers, each containing c log n bits of information, and a read-write memory. We measure time by the number of accesses to the input registers. We show that for all k there is an ε > 0 so that if n is sufficiently large then the element distinctness problem cannot be solved in time kn with εn bits of read-write memory; that is, there is no machine with these values of the parameters which decides whether there are two different input registers whose contents are identical. We also show that there is a simple decision problem that can be solved in constant time (actually in two steps) using nondeterministic computation, while there is no deterministic linear-time algorithm with εn log n bits of read-write memory which solves the problem. More precisely, if we allow kn time for some fixed constant k, then there is an ε > 0 so that the problem cannot be solved with εn log n bits of read-write memory if n is sufficiently large. The decision problem is the following: "Find two different input registers such that the Hamming distance of their contents is at most c log n".
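The two decision problems in this abstract are simple to state as unrestricted-memory baselines; a minimal Python sketch (function names hypothetical), with the point being that the paper rules out matching them under the stated time and memory budgets:

```python
def element_distinctness(registers):
    """True iff all input registers hold distinct contents. A hash-set
    pass is linear time but uses Theta(n) words of working memory; the
    abstract shows time kn is impossible with only epsilon*n bits."""
    return len(set(registers)) == len(registers)

def close_pair_exists(registers, threshold):
    """The nondeterministically easy problem from the abstract: are
    there two different registers whose contents (equal-length bit
    strings) have Hamming distance at most threshold (c log n there)?
    A nondeterministic machine guesses the pair and verifies in O(1)
    register accesses; deterministically this brute force is O(n^2)."""
    n = len(registers)
    for i in range(n):
        for j in range(i + 1, n):
            dist = sum(a != b for a, b in zip(registers[i], registers[j]))
            if dist <= threshold:
                return True
    return False
```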
Super-Linear Time-Space Tradeoff Lower Bounds for Randomized Computation
2000
Cited by 33 (0 self)
We prove the first time-space lower bound tradeoffs for randomized computation of decision problems. The bounds hold even in the case that the computation is allowed to have arbitrary probability of error on a small fraction of inputs. Our techniques are an extension of those used by Ajtai [Ajt99a, Ajt99b] in his time-space tradeoffs for deterministic RAM algorithms computing element distinctness and for Boolean branching programs computing a natural quadratic form. Ajtai's bounds were of the following form...
Matching Colored Points in the Plane: Some New Results
Comput. Geom., 2001
Cited by 8 (3 self)
Let S be a set of n = w + b points in general position in the plane, w of them white and b of them black. We consider the problem of computing G(S), a largest non-crossing matching of pairs of points of the same color, using straight line segments. We present two new algorithms which compute a large matching, with an improved guarantee on the number of matched points. The first one runs in O(n^2) time and finds a matching of at least 85.71% of the points. The second algorithm runs in O(n log n) time and achieves a performance guarantee as close as we want to that of the first algorithm. On the other hand, we show that there exist configurations of points such that any matching with the above properties matches fewer than 98.95% of the points. We further extend these results to point sets with a prescribed ratio of the sizes of the two color classes. In the end, we discuss the more general problem where the points are colored with any fixed number of colors.
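The two constraints on a valid matching (monochromatic pairs, no crossing segments) are easy to check directly; a minimal Python verifier (names hypothetical, and assuming points in general position, so the proper-crossing test suffices):

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def _segments_cross(a, b, c, d):
    """Proper crossing of segments ab and cd; no collinear cases arise
    for points in general position."""
    return (_orient(a, b, c) != _orient(a, b, d) and
            _orient(c, d, a) != _orient(c, d, b))

def is_valid_matching(pairs, color):
    """Check the two properties from the abstract: every matched pair
    has one color, and no two straight segments cross. `color` maps a
    point to 'w' or 'b'."""
    if any(color[p] != color[q] for p, q in pairs):
        return False
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            if _segments_cross(*pairs[i], *pairs[j]):
                return False
    return True
```

A verifier like this is the natural test harness for the percentage guarantees: count matched points over |S| for the matching an algorithm returns.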
Binary Decision Diagrams
Cited by 2 (0 self)
Decision diagrams are a natural representation of finite functions. The obvious complexity measures are length and size, which correspond to the time and space of computations. Decision diagrams are the right model for considering space lower bounds and time-space tradeoffs. Due to the lack of powerful lower bound techniques, various types of restricted decision diagrams are investigated. They lead to new lower bound techniques, and some of them allow efficient algorithms for a list of operations on Boolean functions. Indeed, restricted decision diagrams like ordered binary decision diagrams (OBDDs) are the most common data structure for Boolean functions, with many applications in verification, model checking, CAD tools, and graph problems. From a complexity-theoretic point of view, randomized and nondeterministic decision diagrams are also of interest.
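The correspondence between the survey's complexity measures and computation is concrete: evaluating a decision diagram follows one root-to-leaf path (length ~ time), and the diagram's node count is its size (~ space). A minimal Python sketch (class and variable names hypothetical) with a small OBDD for XOR under the fixed variable order x0 < x1:

```python
class Node:
    """Internal node of a binary decision diagram: test variable `var`,
    follow `lo` on 0 and `hi` on 1. Leaves are the constants 0 and 1."""
    def __init__(self, var, lo, hi):
        self.var, self.lo, self.hi = var, lo, hi

def evaluate(node, assignment):
    """Follow one root-to-leaf path; its length corresponds to time,
    while the total number of nodes corresponds to space."""
    while isinstance(node, Node):
        node = node.hi if assignment[node.var] else node.lo
    return node

# OBDD for x0 XOR x1: "ordered" means every path tests variables in the
# same fixed order (here x0 before x1).
x1_pos = Node(1, 0, 1)   # reached when x0 = 0
x1_neg = Node(1, 1, 0)   # reached when x0 = 1
xor_root = Node(0, x1_pos, x1_neg)
```

The ordering restriction is what enables the efficient operations (equivalence checking, Boolean combination) that make OBDDs practical in verification and CAD, at the cost of exponential size for some functions.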