Results 1–10 of 53
Exponential lower bound for 2-query locally decodable codes via a quantum argument
Journal of Computer and System Sciences, 2003
Cited by 139 (15 self)
A locally decodable code encodes n-bit strings x in m-bit codewords C(x) in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries require exponential length: m = 2^{Ω(n)}. Previously this was known only for linear codes (Goldreich et al., 2002).
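The exponential lower bound above is matched by the classic 2-query LDC, the Hadamard code, whose codewords have length 2^n. The sketch below is the standard textbook construction (not taken from the paper): bit x_i is recovered from two queried positions a and a ⊕ e_i.

```python
# Toy 2-query locally decodable code: the Hadamard code.
# Encodes an n-bit string x as all 2^n inner products <a, x> mod 2,
# illustrating that the m = 2^{Omega(n)} lower bound is tight for 2 queries.
from itertools import product

def hadamard_encode(x):
    """Codeword of length 2^n: the bit at position a is <a, x> mod 2."""
    n = len(x)
    return [sum(ai * xi for ai, xi in zip(a, x)) % 2
            for a in product([0, 1], repeat=n)]

def decode_bit(codeword, n, i, a):
    """Recover x_i with 2 queries, at positions a and a XOR e_i.
    Correct as long as neither of the two queried bits is corrupted."""
    e_i = tuple(1 if j == i else 0 for j in range(n))
    b = tuple(aj ^ ej for aj, ej in zip(a, e_i))
    idx = lambda v: int("".join(map(str, v)), 2)
    # <a, x> XOR <a XOR e_i, x> = <e_i, x> = x_i
    return codeword[idx(a)] ^ codeword[idx(b)]

x = (1, 0, 1)
cw = hadamard_encode(x)
assert all(decode_bit(cw, 3, i, (0, 1, 1)) == x[i] for i in range(3))
```

Since the choice of a is uniform and only two positions are read, the decoder survives a constant fraction of corrupted bits, at the cost of the exponential length the paper shows is unavoidable.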
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, 2007
Cited by 127 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC '01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05). Our expanders can be interpreted as near-optimal “randomness condensers,” which reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC '03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
Lossless condensers, unbalanced expanders, and extractors
In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, 2001
Cited by 104 (21 self)
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bits that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced expander.
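The "almost injective" requirement can be illustrated with a toy experiment (an illustration, not the paper's construction): by a birthday bound, a truly random function into roughly 2·log2(K) output bits already maps a support-K source nearly injectively; the point of a lossless condenser is to achieve this explicitly with only a short seed.

```python
# Toy illustration (not the paper's construction): a uniformly random
# function into about 2*log2(K) + 4 output bits maps K distinct source
# points almost injectively, by a birthday bound. Lossless condensers
# achieve this explicitly, using only O(log n) seed bits.
import math
import random

def random_condense(points, out_bits, seed):
    """A random function into out_bits-bit strings, memoized so equal
    inputs always map to equal outputs."""
    rng = random.Random(seed)
    table = {}
    return [table.setdefault(p, rng.getrandbits(out_bits)) for p in points]

K = 1000
points = random.Random(0).sample(range(10**9), K)  # a source of support size K
out_bits = 2 * math.ceil(math.log2(K)) + 4         # ~2 log K output bits
images = random_condense(points, out_bits, seed=1)
collisions = K - len(set(images))                  # almost always 0 here
print(f"{collisions} collisions among {K} points")
```

Here the expected number of collisions is about K^2 / 2^(out_bits+1) ≈ 0.03, so the 10^9-element domain is condensed to 24 bits essentially without losing any of the source's entropy.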
The Bloomier filter: An efficient data structure for static support lookup tables
In Proc. Symposium on Discrete Algorithms, 2004
“Oh boy, here is another David Nelson” …
Extractor Codes
2001
Cited by 51 (7 self)
We define new error-correcting codes based on extractors. We show that for certain choices of parameters these codes have better list decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hard-core bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.
The Cell Probe Complexity of Succinct Data Structures
In Automata, Languages and Programming, 30th International Colloquium (ICALP 2003), 2003
Cited by 37 (0 self)
We show lower bounds in the cell probe model for the redundancy/query time tradeoff of solutions to static data structure problems.
Cell probe complexity - a survey
In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), Advances in Data Structures Workshop, 1999
Cited by 33 (0 self)
The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an emphasis on techniques for proving lower bounds.
A Linear Lower Bound on Index Size for Text Retrieval
Journal of Algorithms, 2001
Cited by 21 (1 self)
Most information-retrieval systems preprocess the data to produce an auxiliary index structure. Empirically, it has been observed that there is a tradeoff between query response time and the size of the index. When indexing a large corpus, such as the web, the size of the index is an important consideration. In this case it would be ideal to produce an index that is substantially smaller than the text. In this work we prove a linear lower bound on the size of any index that reports the location (if any) of a substring in the text in time proportional to the length of the pattern. In other words, an index supporting linear-time substring searches requires about as much space as the original text. Here "time" is measured in the number of bit probes to the text; an arbitrary amount of computation may be done on an arbitrary amount of the index. Our lower bound applies to inverted word indices as well.

Text retrieval is crucial in such contexts as searching the web, news, and medical databases. The most basic problem, used as a subroutine in most search engines, is to search for a given substring (keyword or phrase) in a corpus of text. Because the text database changes infrequently relative to the frequency and abundance of queries, fundamental to any search technique is a preprocessing step to prepare an index for fast searches.
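The space/time tension in the abstract can be made concrete with a toy measurement (illustrative only, unrelated to the paper's bit-probe proof): a plain suffix array, the standard index for fast substring search, stores n positions of ceil(log2 n) bits each, which for a small alphabet already exceeds the roughly n·log2(sigma) bits of the text itself.

```python
# Toy measurement (illustrative, not the paper's proof): a plain suffix
# array supporting fast substring search stores n*ceil(log2 n) bits of
# positions, more than the text's own information content over a small
# alphabet -- consistent with a linear lower bound on index size.
import math

def suffix_array(text):
    """Naive O(n^2 log n) construction; fine for a toy example."""
    return sorted(range(len(text)), key=lambda i: text[i:])

text = "abracadabra" * 100            # n = 1100 characters, 5-letter alphabet
n, sigma = len(text), len(set(text))
sa = suffix_array(text)
text_bits = math.ceil(n * math.log2(sigma))   # information content of the text
index_bits = n * math.ceil(math.log2(n))      # positions stored by the index
print(index_bits, ">", text_bits)             # the index outweighs the text
```

A binary search over `sa` answers substring queries in roughly O(|pattern| · log n) comparisons, so the example sits exactly in the regime the lower bound addresses: fast search, but an index no smaller than the text.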