Lossless condensers, unbalanced expanders, and extractors
In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, 2001
Cited by 89 (20 self)
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map, using only O(log n) additional random bits, that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced expander.
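For context, a standard formal definition of the object this abstract describes (notation follows the extractor literature's convention, not this paper's text):

```latex
% Standard definition of a condenser (notation mine, not quoted from the paper).
% U_d denotes the uniform distribution on d-bit seeds.
C \colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\text{ is a } k \to_{\varepsilon} k' \text{ condenser if, for every source } X
\text{ with } H_\infty(X) \ge k,\;
C(X, U_d) \text{ is } \varepsilon\text{-close to some distribution with min-entropy} \ge k'.
% "Lossless" means k' = k + d: all the entropy, including the seed's, survives.
```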
Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes
In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, 2007
Cited by 77 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC '01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05). Our expanders can be interpreted as near-optimal "randomness condensers" that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC '03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
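The abstract's notion of "expansion arbitrarily close to the degree" has a standard formalization (again, notation from the literature rather than this paper's text):

```latex
% Vertex expansion for an unbalanced bipartite graph G = (L, R, E)
% with left-degree D (standard definition; notation mine).
G \text{ is a } (K, A) \text{ expander if }
\forall S \subseteq L:\; |S| \le K \;\Longrightarrow\; |\Gamma(S)| \ge A\,|S|.
% "Expansion close to the degree" means A = (1-\varepsilon)D. Such lossless
% expanders, read as functions L \times [D] \to R, act as randomness condensers.
```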
The Bloomier filter: An efficient data structure for static support lookup tables
In Proc. Symposium on Discrete Algorithms, 2004
"... “Oh boy, here is another David Nelson” ..."
Pseudorandom Generators Hard for k-DNF Resolution and Polynomial Calculus Resolution, 2003
Cited by 41 (4 self)
A pseudorandom generator G_n : {0,1}^n → {0,1}^m is hard for a propositional proof system P if (roughly speaking) P cannot efficiently prove the statement G_n(x_1, ..., x_n) ≠ b for any string b ∈ {0,1}^m. We present a function (m²) generator which is hard for Res(ε log n); here Res(k) is the propositional proof system that extends Resolution by allowing k-DNFs instead of clauses.
Low-complexity approaches to Slepian–Wolf near-lossless distributed data compression
IEEE Trans. Inform. Theory, 2006
Cited by 23 (6 self)
This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple "source-splitting" strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the "min-sum" iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible "expander"-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
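The syndrome-former idea in the final sentences is easy to sketch. The toy example below is my own illustration, not the paper's source-splitting scheme or its LDPC codes: it uses the [7,4] Hamming code to compress a 7-bit source block to its 3-bit syndrome, which the decoder combines with correlated side information to recover the block exactly whenever the two differ in at most one bit.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i is the binary
# expansion of i+1, so the syndrome of a single-bit error names its position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(v):
    return H @ v % 2

def sw_encode(x):
    # Slepian-Wolf encoder: send only the 3-bit syndrome of the 7-bit block.
    return syndrome(x)

def sw_decode(s, y):
    # Decoder has side information y, assumed to differ from x in <= 1 bit.
    d = (s + syndrome(y)) % 2         # = syndrome of the error pattern x XOR y
    e = np.zeros(7, dtype=int)
    pos = d[0] * 4 + d[1] * 2 + d[2]  # read the syndrome as a bit position
    if pos:
        e[pos - 1] = 1
    return (y + e) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7)
y = x.copy()
y[3] ^= 1                             # side information: x with one bit flipped
assert np.array_equal(sw_decode(sw_encode(x), y), x)
```

Sending 3 bits instead of 7 is the point: the rate approaches H(X|Y) rather than H(X). At scale, long LDPC codes play the Hamming code's role, which is where the paper's expander-style constructions enter.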
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays, 2008
Cited by 22 (1 self)
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover signals with less sparsity.
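As a generic illustration of the compressive-sampling recovery step (this is textbook Orthogonal Matching Pursuit, not the authors' lower-complexity algorithm; the dimensions and names are mine):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Pick the column most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit all chosen coefficients jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Demo: a 3-sparse signal of length 50 observed through 20 random measurements.
rng = np.random.default_rng(1)
n, m, k = 50, 20, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x0, k)  # recovery is typically exact at these dimensions
```

Each OMP iteration costs one matrix-vector product plus a small least-squares solve, versus a full linear program for basis-pursuit recovery; with a sparse A (as in the paper's setting), the matrix-vector products get cheaper still.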
Expander Graphs for Digital Stream Authentication and Robust Overlay Networks
In Proceedings of the 2002 IEEE Symposium on Security and Privacy
Cited by 19 (0 self)
We use expander graphs to provide efficient new constructions for two security applications: authentication of long digital streams over lossy networks and building scalable, robust overlay networks. Here is a summary of our contributions: (1) To authenticate long digital streams over lossy networks, we provide a construction with a provable lower bound on the ability to authenticate a packet, and that lower bound is independent of the size of the graph. To achieve this, we present an authentication expander graph with constant degree. (Previous work, such as [MS01], used authentication graphs but required graphs with degree linear in the number of vertices.) (2) To build efficient, robust, and scalable overlay networks, we provide a construction using undirected expander graphs with a provable lower bound on the ability of a broadcast message to successfully reach any receiver. This also gives us a new, more efficient solution to the decentralized certificate revocation problem [WLM00].
Quantum expanders and the quantum entropy difference problem, 2007
Cited by 6 (2 self)
Classical expanders and extractors have numerous applications in computer science. However, it seems these classical objects have no meaningful quantum generalization. This is because it is easy to generate entropy in quantum computation simply by tracing out registers. In this paper we define quantum expanders and extractors in a natural way. We show that this definition is exactly what is needed for showing that QED, the quantum analogue of ED (the entropy difference problem), is QSZK-complete. We also show that quantum expanders exist with very good parameters in the high min-entropy regime. The first construction is derived from the work of Ambainis and Smith and is based on expander graphs that are based on Cayley graphs of Abelian groups. The drawback of this construction is that it uses logarithmic seed length (yet, this already suffices for showing that QED is QSZK-complete). We also show a quantum analogue of the Lubotzky, Phillips and Sarnak construction of Ramanujan expanders from Cayley graphs of PGL(2, q). Our construction is a sequence of two steps on the Cayley graph with a basis change in between steps. We believe this quantum analogue of classical Ramanujan expanders is of independent interest.
On Delsarte’s linear programming bounds for binary codes
In Proceedings of FOCS 46
Cited by 4 (3 self)
We prove two results about the value of Delsarte's linear program for binary codes. Our main result is a new lower bound on the value of the program, which, in particular, is nearly tight for low-rate codes. We also give an easy proof of an upper bound, which coincides with the best known bound for a wide range of parameters.
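For reference, the linear program in question, in its standard textbook form for binary codes of length n and minimum distance d (it upper-bounds the maximum code size A(n, d); the formulation is not quoted from the paper):

```latex
% Delsarte's LP: A(n,d) is at most the optimum of
\max \sum_{i=0}^{n} a_i
\quad \text{s.t.} \quad a_0 = 1, \quad a_i \ge 0, \quad a_i = 0 \ (1 \le i < d),
\quad \sum_{i=0}^{n} a_i K_j(i) \ge 0 \ (j = 1, \dots, n),
% where K_j is the binary Krawtchouk polynomial
K_j(x) = \sum_{s=0}^{j} (-1)^s \binom{x}{s} \binom{n-x}{j-s}.
```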