Results 1–10 of 34
Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes
In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, 2007
"... We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of righthand vertices are polynomially close to optimal, whereas the previous ..."
Abstract

Cited by 122 (7 self)
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC '01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05). Our expanders can be interpreted as near-optimal "randomness condensers," which reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC '03) and improving upon it when the error parameter is small (e.g. 1/poly(n)).
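The condenser view in this abstract can be made concrete. The sketch below is loosely modeled on the Parvaresh-Vardy-style map Γ(f, y) = (y, f_0(y), ..., f_{m-1}(y)) with f_i = f^(h^i) mod E over a prime field; every parameter here (p = 13, h = 2, the modulus E, m = 3) is an illustrative choice for a toy example, not the paper's actual setting.

```python
# Toy Parvaresh-Vardy-style condenser over F_p. Polynomials are coefficient
# lists, lowest degree first. All parameters are illustrative.
p = 13                # field size; prime, so arithmetic is plain "mod p"
h = 2                 # power parameter of the construction
E = [2, 0, 0, 1]      # E(Y) = Y^3 + 2; -2 is not a cube mod 13, so E is irreducible

def poly_mod_E(f, E, p):
    """Reduce polynomial f modulo the monic polynomial E over F_p."""
    f = [c % p for c in f]
    while len(f) >= len(E):
        c, shift = f[-1], len(f) - len(E)
        for i, e in enumerate(E):
            f[shift + i] = (f[shift + i] - c * e) % p
        while f and f[-1] == 0:
            f.pop()
    return f

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow_mod(f, e, E, p):
    """Compute f^e mod E by square-and-multiply."""
    result = [1]
    base = poly_mod_E(f, E, p)
    while e:
        if e & 1:
            result = poly_mod_E(poly_mul(result, base, p), E, p)
        base = poly_mod_E(poly_mul(base, base, p), E, p)
        e >>= 1
    return result

def poly_eval(f, y, p):
    acc = 0
    for c in reversed(f):
        acc = (acc * y + c) % p
    return acc

def condenser(f, y, m=3):
    """Gamma(f, y) = (y, f_0(y), ..., f_{m-1}(y)) with f_i = f^(h^i) mod E."""
    out = [y]
    for i in range(m):
        fi = poly_pow_mod(f, h ** i, E, p)
        out.append(poly_eval(fi, y, p))
    return tuple(out)
```

For example, with f(Y) = 3 + Y (the source block, read as a low-degree polynomial) and seed y = 5, the condenser emits the seed together with evaluations of f, f^2, and f^4 reduced mod E.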
Lossless condensers, unbalanced expanders, and extractors
In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, 2001
"... Abstract Trevisan showed that many pseudorandom generator constructions give rise to constructionsof explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bitsthat maps n bi ..."
Abstract

Cited by 104 (21 self)
Trevisan showed that many pseudorandom generator constructions give rise to constructions of explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bits that maps n-bit strings to poly(log K)-bit strings, such that any source with support size K is mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date. By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)-bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date, and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced …
The Bloomier filter: An efficient data structure for static support lookup tables
In Proc. Symposium on Discrete Algorithms, 2004
"... “Oh boy, here is another David Nelson” ..."
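As a companion to the Bloomier filter entry above, here is a minimal Bloomier-style static lookup table built by the standard greedy peeling over three hash positions. It omits the per-key masking the real Bloomier filter uses to reject non-keys (so queries on unknown keys return arbitrary values), and the table size, retry limit, and hashing scheme are illustrative choices.

```python
import hashlib

def _slots(key, m, seed):
    # three table positions derived from a salted SHA-256 digest
    d = hashlib.sha256(f"{seed}:{key}".encode()).digest()
    return [int.from_bytes(d[4 * i:4 * i + 4], "big") % m for i in range(3)]

def build(pairs, m=None, max_seed=100):
    """Greedy peeling: repeatedly remove keys that own a slot no other
    remaining key touches, then fill table entries in reverse peel order."""
    keys = list(pairs)
    m = m or 3 * len(keys)             # generous table size for a toy example
    for seed in range(max_seed):
        locs = {k: _slots(k, m, seed) for k in keys}
        order, remaining = [], set(keys)
        while remaining:
            owners = {}
            for k in remaining:
                for s in locs[k]:
                    owners.setdefault(s, []).append(k)
            singles = [(ks[0], s) for s, ks in owners.items() if len(ks) == 1]
            if not singles:
                break                  # peeling stuck: retry with a new seed
            for k, s in singles:
                if k in remaining:
                    order.append((k, s))
                    remaining.remove(k)
        if remaining:
            continue
        table = [0] * m
        for k, s in reversed(order):   # reverse order keeps earlier equations intact
            acc = pairs[k]
            for t in locs[k]:
                if t != s:
                    acc ^= table[t]
            table[s] = acc
        return table, m, seed
    raise RuntimeError("peeling failed for all seeds; enlarge the table")

def query(table, m, seed, key):
    # XOR of the three slots: correct for stored keys, arbitrary for others
    v = 0
    for s in _slots(key, m, seed):
        v ^= table[s]
    return v

pairs = {"apple": 5, "banana": 7, "cherry": 9}
table, m, seed = build(pairs)
```

Lookups touch exactly three table cells, which is what makes the structure attractive for static support tables.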
Pseudorandom Generators Hard for k-DNF Resolution and Polynomial Calculus Resolution
2003
"... A pseudorandom generator G n : f0; 1g is hard for a propositional proof system P if (roughly speaking) P can not ef ciently prove the statement G n (x 1 ; : : : ; x n ) 6= b for any string b 2 . We present a function (m 2 ) generator which is hard for Res( log n); here Res(k) is the ..."
Abstract

Cited by 52 (4 self)
A pseudorandom generator G_n : {0,1}^n → {0,1}^m is hard for a propositional proof system P if (roughly speaking) P cannot efficiently prove the statement G_n(x_1, …, x_n) ≠ b for any string b ∈ {0,1}^m. We present a function (m^2) generator which is hard for Res(ε log n); here Res(k) is the propositional proof system that extends Resolution by allowing k-DNFs instead of clauses.
Efficient and Robust Compressed Sensing using Optimized Expander Graphs
"... Abstract—Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any ndimensional vector that is ksparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this pape ..."
Abstract

Cited by 46 (5 self)
Expander graphs have recently been proposed for constructing efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue, with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
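The flavor of such fast iterative recovery can be sketched with a toy "gap voting" loop. This is a simplified stand-in for illustration, not the paper's exact algorithm: a variable commits a value whenever a strict majority of its measurements report the same nonzero residual gap. The tiny instance is hand-built (disjoint-enough neighborhoods) so the loop demonstrably terminates with exact recovery.

```python
# Toy gap-voting recovery for expander-style compressed sensing.
# Measurements: y = A x, with A a sparse 0/1 matrix; neighbors[j] lists the
# measurement rows in which variable j participates. Instance is illustrative.
from collections import Counter

def recover(neighbors, y, n, max_iters=50):
    xhat = [0] * n
    resid = list(y)                       # residual r = y - A @ xhat
    for _ in range(max_iters):
        updated = False
        for j in range(n):
            gaps = Counter(resid[i] for i in neighbors[j])
            g, cnt = gaps.most_common(1)[0]
            # commit g only on a strict majority of nonzero identical gaps
            if g != 0 and cnt * 2 > len(neighbors[j]):
                xhat[j] += g
                for i in neighbors[j]:
                    resid[i] -= g
                updated = True
        if not updated:
            break
    return xhat

# Hand-built instance: 4 variables, 8 measurements, degree-3 neighborhoods.
neighbors = [[0, 1, 2], [3, 4, 5], [6, 7, 0], [5, 6, 3]]
x = [5, 0, 2, 0]                          # the 2-sparse signal to recover
y = [0] * 8
for j, v in enumerate(x):
    for i in neighbors[j]:
        y[i] += v

xhat = recover(neighbors, y, 4)
```

On this instance the loop resolves variable 0 first (two of its three measurements agree on gap 5), then variable 2, and stops.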
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays
2008
"... Microarrays (DNA, protein, etc.) are massively parallel affinitybased biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test multitude ..."
Abstract

Cited by 41 (2 self)
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and hence collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots translate directly to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and that can also recover signals that are less sparse.
Low-complexity approaches to Slepian-Wolf near-lossless distributed data compression
In IEEE Transactions on Information Theory, 2006
"... This paper discusses the Slepian–Wolf problem of distributed nearlossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “sourcesplitting” strategy that does not require common sources of ra ..."
Abstract

Cited by 30 (6 self)
This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple "source-splitting" strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding, so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach on synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the "min-sum" iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible "expander"-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
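The syndrome-former idea is easy to demonstrate at toy scale. The sketch below uses a (7,4) Hamming code rather than the paper's LDPC codes or source-splitting: the encoder transmits only the 3-bit syndrome Hx of its 7-bit block, and a decoder holding correlated side information y that differs from x in at most one position recovers x exactly, so 7 source bits cost only 3 transmitted bits.

```python
# Minimal syndrome-based Slepian-Wolf demo with a (7,4) Hamming code.
# Column i of H is the binary expansion of i+1, so a single-bit difference
# between x and y is located directly by the syndrome of the difference.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(v):
    """s = H v over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def decode(s, y):
    """Recover x from its syndrome s and side information y (≤1 bit flipped):
    H(x + y) = s + Hy pinpoints the differing position, if any."""
    sy = syndrome(y)
    diff = [(a + b) % 2 for a, b in zip(s, sy)]
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]   # columns encode the index
    x = list(y)
    if pos:
        x[pos - 1] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]   # encoder's block: sends only syndrome(x)
y = [1, 0, 1, 0, 0, 0, 1]   # decoder's side information: one bit flipped
```

The same pattern scales to long LDPC codes, where the coset decoding is done iteratively instead of by table lookup.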
Expander Graphs for Digital Stream Authentication and Robust Overlay Networks
In Proceedings of the 2002 IEEE Symposium on Security and Privacy, 2002
"... We use expander graphs to provide efficient new constructions for two security applications: authentication of long digital streams over lossy networks and building scalable, robust overlay networks. Here is a summary of our contributions: (1) To authenticate long digital streams over lossy networks ..."
Abstract

Cited by 26 (0 self)
We use expander graphs to provide efficient new constructions for two security applications: authentication of long digital streams over lossy networks, and building scalable, robust overlay networks. Here is a summary of our contributions: (1) To authenticate long digital streams over lossy networks, we provide a construction with a provable lower bound on the ability to authenticate a packet, and that lower bound is independent of the size of the graph. To achieve this, we present an authentication expander graph with constant degree. (Previous work, such as [MS01], used authentication graphs but required graphs with degree linear in the number of vertices.) (2) To build efficient, robust, and scalable overlay networks, we provide a construction using undirected expander graphs with a provable lower bound on the ability of a broadcast message to successfully reach any receiver. This also gives us a new, more efficient solution to the decentralized certificate revocation problem [WLM00].
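A stripped-down sketch of hash-graph stream authentication (a chain with skip links standing in for the paper's expander edges; the packet format and hashing are illustrative): each packet carries the hashes of two earlier packets, the signer authenticates the last two, and a receiver accepts a packet iff a chain of received, hash-consistent packets connects it to the signature.

```python
# Toy hash-graph stream authentication: loss-tolerant because each packet is
# reachable from the signature along several hash links. Illustrative only.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def encode(packet):
    pl, links = packet
    return pl + b"|" + b",".join(f"{j}:".encode() + hx
                                 for j, hx in sorted(links.items()))

def make_stream(payloads):
    """Packet = (payload, links); links maps earlier packet indices to their
    hashes. Every packet links back 1 and 2 steps (the 'graph' edges)."""
    packets = []
    for i, pl in enumerate(payloads):
        links = {}
        for back in (1, 2):
            j = i - back
            if j >= 0:
                links[j] = h(encode(packets[j]))
        packets.append((pl, links))
    sig = {len(payloads) - 1: h(encode(packets[-1])),
           len(payloads) - 2: h(encode(packets[-2]))}
    return packets, sig            # `sig` models the digitally signed packet

def authenticate(received, sig):
    """BFS from the signature: a packet is authentic iff some chain of
    received packets with matching hashes connects it to the signature."""
    ok = set()
    frontier = list(sig.items())
    while frontier:
        j, hx = frontier.pop()
        pkt = received.get(j)
        if pkt is None or j in ok or h(encode(pkt)) != hx:
            continue
        ok.add(j)
        frontier.extend(pkt[1].items())
    return ok

payloads = [b"packet-%d" % i for i in range(6)]
packets, sig = make_stream(payloads)
received = {i: packets[i] for i in range(6) if i != 3}   # packet 3 was lost
```

Losing packet 3 does not break the chain (its neighbors are still linked through the skip edges), while tampering with a packet disconnects everything that is only reachable through it.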
Graph-Constrained Group Testing
2010
"... Nonadaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in net ..."
Abstract

Cited by 17 (2 self)
Nonadaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in network tomography, sensor networks, and infection propagation, we formulate group testing problems on graphs. Unlike conventional group testing problems, each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices, and each pool is any set of nodes that must be path-connected. In this paper we associate a test with a random walk. In this context, conventional group testing corresponds to the special case of a complete graph on n vertices. For interesting classes of graphs we arrive at a rather surprising result, namely, that the number of tests required to identify d defective items is substantially similar to that required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) denotes the mixing time of the graph G, we show that with m = O(d^2 T^2(n) log(n/d)) nonadaptive tests one can identify the defective items. Consequently, for the Erdős-Rényi random graph G(n, p), as well as for expander graphs with constant spectral gap, it follows that m = O(d^2 log^3 n) nonadaptive tests suffice.
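A toy version of this setup, using random walks on a cycle as the path-connected pools and the conservative "not in any negative pool" decoder (a simplification of the paper's analysis; graph choice and all parameters are illustrative):

```python
# Toy graph-constrained group testing: pools are random walks on a cycle, so
# every pool is path-connected; a pool tests positive iff it hits a defective.
import random

def random_walk_pool(n, length, rng):
    v = rng.randrange(n)
    pool = {v}
    for _ in range(length):
        v = (v + rng.choice((-1, 1))) % n   # one step along the cycle graph
        pool.add(v)
    return pool

def decode(n, pools, outcomes):
    """Conservative decoder: anything that appears in a negative pool is
    clean, so the survivors are always a superset of the true defectives."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool
    return candidates

rng = random.Random(0)
n, defectives = 60, {7, 41}
pools = [random_walk_pool(n, 12, rng) for _ in range(80)]
outcomes = [bool(pool & defectives) for pool in pools]
candidates = decode(n, pools, outcomes)
```

With enough walks relative to the mixing time, the candidate set shrinks to exactly the defective set; the decoder is guaranteed never to miss a defective, since a defective item can never sit in a negative pool.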