Results 1 – 10 of 351
Time-Space Trade-Offs for Longest Common Extensions
In Proc. 23rd CPM, LNCS, 2012
"... We revisit the longest common extension (LCE) problem, that is, preprocess a string T into a compact data structure that supports fast LCE queries. An LCE query takes a pair (i, j) of indices in T and returns the length of the longest common prefix of the suffixes of T starting at positions i and j. ..."
Abstract

Cited by 4 (3 self)
. We study the time-space trade-offs for the problem, that is, the space used for the data structure vs. the worst-case time for answering an LCE query. Let n be the length of T. Given a parameter τ, 1 ≤ τ ≤ n, we show how to achieve either O(n/√τ) space and O(τ) query time, or O(n/τ) space and O(τ log
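As a point of reference for the trade-offs above, an LCE query with no preprocessing at all (O(1) extra space, time proportional to the answer) can be sketched as follows; the function name and example string are illustrative, not from the paper:

```python
def lce(t: str, i: int, j: int) -> int:
    """Length of the longest common prefix of the suffixes t[i:] and t[j:]."""
    k = 0
    while i + k < len(t) and j + k < len(t) and t[i + k] == t[j + k]:
        k += 1
    return k

# In "banana", the suffixes at positions 1 ("anana") and 3 ("ana")
# share the prefix "ana", so lce("banana", 1, 3) returns 3.
```

The data structures in the paper sit between this zero-space extreme and a full suffix-tree-based solution with O(n) space and O(1) queries.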
Locality-sensitive hashing scheme based on p-stable distributions
In SCG ’04: Proceedings of the twentieth annual symposium on Computational geometry, 2004
"... We present a novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under the ℓp norm, based on p-stable distributions. Our scheme improves the running time of the earlier algorithm for the case of the ℓ2 norm. It also yields the first known provably efficient approximate ..."
Abstract

Cited by 521 (8 self)
NN algorithm for the case p &lt; 1. We also show that the algorithm finds the exact near neighbor for data satisfying a certain “bounded growth” condition. Unlike earlier schemes, our LSH scheme works directly on points in the Euclidean space without embeddings. Consequently, the resulting query time
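The core of a p-stable LSH scheme is a hash of the form h(v) = ⌊(a·v + b)/w⌋, where a is drawn coordinate-wise from a p-stable distribution (Gaussian for the 2-stable case) and b is a uniform offset. A minimal sketch; the helper name and bucket width w are chosen here for illustration:

```python
import math
import random

def make_p_stable_hash(d: int, w: float, seed: int = 0):
    """One LSH function h(v) = floor((a.v + b) / w) for the l2 norm.

    a is a d-dimensional Gaussian vector (Gaussian is 2-stable);
    b is a uniform offset in [0, w), which randomizes bucket boundaries.
    """
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(d)]
    b = rng.uniform(0.0, w)

    def h(v):
        return math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / w)

    return h

h = make_p_stable_hash(d=3, w=4.0)
# Identical points always land in the same bucket; nearby points
# collide with probability that decreases with their l2 distance.
```

In practice several such functions are concatenated into one bucket key, and multiple independent tables are queried to boost the collision probability for near points.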
Time-Space Trade-Offs for Predecessor Search (Extended Abstract), 2006
"... We develop a new technique for proving cell-probe lower bounds for static data structures. Previous lower bounds used a reduction to communication games, which was known not to be tight by counting arguments. We give the first lower bound for an explicit problem which breaks this communication compl ..."
Abstract

Cited by 3 (1 self)
complexity barrier. In addition, our bounds give the first separation between polynomial and near-linear space. Such a separation is inherently impossible by communication complexity. Using our lower bound technique and new upper bound constructions, we obtain tight bounds for searching predecessors among a
Optimal Time-Space Trade-Offs for Non-Comparison-Based Sorting
"... We study the problem of sorting n integers of w bits on a unit-cost RAM with word size w, and in particular consider the time-space trade-off (product of time and space in bits) for this problem. For comparison-based algorithms, the time-space complexity is known to be Θ(n²). A result of Beame show ..."
Abstract
Lower bounds on near neighbor search via metric expansion
 CoRR
"... In this paper we show how the complexity of performing nearest neighbor (NNS) search on a metric space is related to the expansion of the metric space. Given a metric space we look at the graph obtained by connecting every pair of points within a certain distance r. We then look at various notions o ..."
Abstract

Cited by 14 (1 self)
, the bound drops exponentially in t. We show a much stronger (tight) time-space trade-off for the class of dynamic low-contention data structures. These are data structures that support updates to the data set and that do not look up any single cell too often.
A geometric approach to lower bounds for approximate near-neighbor search and partial match
In Proc. 49th IEEE Symposium on Foundations of Computer Science (FOCS), 2008
"... This work investigates a geometric approach to proving cell probe lower bounds for data structure problems. We consider the approximate nearest neighbor search problem on the Boolean hypercube ({0, 1}^d, ‖·‖1) with d = Θ(log n). We show that any (randomized) data structure for the problem that a ..."
Abstract

Cited by 16 (2 self)
bound holds for the near neighbor problem, where the algorithm knows in advance a good approximation to the distance to the nearest neighbor. Additionally, it is an average-case lower bound for the natural distribution for the problem. Our approach also gives the same bound for (2 − ε)-approximation
Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions
"... We present an algorithm for the c-approximate nearest neighbor problem in a d-dimensional Euclidean space, achieving query time of O(dn^(1/c² + o(1))) and space O(dn + n^(1 + 1/c² + o(1))). This almost matches the lower bound for hashing-based algorithms recently obtained in [27]. We also obtain a spac ..."
Abstract
Higher lower bounds for near-neighbor and further rich problems
In Proc. 47th IEEE Symposium on Foundations of Computer Science (FOCS)
"... We convert cell-probe lower bounds for polynomial space into stronger lower bounds for near-linear space. Our technique applies to any lower bound proved through the richness method. For example, it applies to partial match, and to near-neighbor problems, either for randomized exact search, or for d ..."
Abstract

Cited by 23 (2 self)
A Lower Bound on the Complexity of Approximate Nearest-Neighbor Searching on the Hamming Cube
In Proc. 31st Annual ACM Symposium on Theory of Computing (STOC ’99), 1999
"... We consider the nearest-neighbor problem over the d-cube: given a collection of points in {0, 1}^d, find the one nearest to a query point (in the L1 sense). We establish a lower bound of Ω(log log d / log log log d) on the worst-case query time. This result holds in the cell probe model with ( ..."
Abstract

Cited by 19 (3 self)
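The baseline such cell-probe lower bounds are measured against is the trivial linear scan, which computes the L1 (Hamming) distance to every stored point. A minimal sketch, with points of the d-cube packed into integers (the function names are illustrative):

```python
def hamming(x: int, y: int) -> int:
    """L1 distance between two points of {0,1}^d packed into integers."""
    return bin(x ^ y).count("1")

def nearest(points, q):
    """Linear-scan nearest neighbor: one distance computation per point."""
    return min(points, key=lambda p: hamming(p, q))

# Query 0b1011 is at distance 3 from 0b0000 and distance 1 from 0b1010,
# so the scan returns 0b1010.
```

The lower bound above says that even with polynomial preprocessing space, no data structure in the cell probe model can make the per-query cost trivially small.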
Nearest Neighbor Search in Lower Dimensional Flats
"... In order to improve efficiency in Approximate Nearest Neighbor (ANN) search, we exploit the structure of the input data by considering points that are distributed almost uniformly on flats of varying, fixed dimension. These flats are distributed uniformly within a bounding sphere of radius 1. We ..."
Abstract
employ an existing mapping that transforms Nearest Flat search to Nearest Point search. Our algorithm, using space linear in the number of points, returns a (1 + ε)-approximate neighbor in expected time O(c log m), where m is the number of flats, c = O∗((λ2)−d 2), d is the ambient space dimension, λ