Results 1–10 of 41
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
Cited by 77 (12 self)
Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
Sparse recovery using sparse random matrices
, 2008
Cited by 43 (4 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x∗ such that Ax = Ax∗ and ‖x∗‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
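The ℓ1-minimization decoder described above can be sketched as a small linear program. The following is a minimal illustration, not the paper's implementation: the problem sizes, the column degree d, and the use of scipy's `linprog` are assumptions made for the demo.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Standard reformulation: introduce u with -u <= x <= u and
    minimize sum(u) over the stacked variable z = [x, u].
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])     # objective: sum(u)
    A_eq = np.hstack([A, np.zeros((m, n))])           # A x = y
    A_ub = np.vstack([
        np.hstack([np.eye(n), -np.eye(n)]),           #  x - u <= 0
        np.hstack([-np.eye(n), -np.eye(n)]),          # -x - u <= 0
    ])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
n, m, k, d = 40, 20, 2, 8           # illustrative sizes, not from the paper
A = np.zeros((m, n))                # sparse binary matrix, d ones per column
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = l1_recover(A, A @ x)        # typically exact for sufficiently sparse x
```

The returned x_hat always satisfies Ax_hat = Ax with ‖x_hat‖1 ≤ ‖x‖1; for sparse enough x and a suitable A it typically coincides with x.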
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays
, 2008
Cited by 21 (1 self)
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover signals with less sparsity.
Efficient and Robust Compressed Sensing using Optimized Expander Graphs
Cited by 19 (3 self)
Abstract—Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
Calderbank R., Efficient and Robust Compressive Sensing using High-Quality Expander Graphs. Submitted to the IEEE Transactions on Information Theory
, 2008
Cited by 12 (1 self)
Abstract—Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse (with k ≪ n) can be fully recovered using O(k log(n/k)) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be made arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple binary search tree. We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity. Finally we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds in sublinear time a k-sparse signal which approximates the original signal with very high precision.
Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property
Cited by 11 (2 self)
Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard Compressed Sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (Restricted Isometry Property or RIP). If Φ satisfies the RIP, then Basis Pursuit or Matching Pursuit recovery algorithms can be used to recover any k-sparse vector α from the N measurements Φα. Although it is known that certain probabilistic processes generate N × C matrices that satisfy the RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, which is crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sublinear in C, and only quadratic in N, as compared to the superlinear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst-case analysis that prevails in standard Compressed Sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
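The near-isometry behavior on random k-sparse signals is easy to probe numerically. The sketch below uses a random Gaussian matrix with roughly unit-norm columns as an illustrative stand-in for the deterministic constructions discussed above (it is not one of them), with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, k = 64, 256, 4          # illustrative dimensions, not from the paper

# Random Gaussian matrix with (approximately) unit-norm columns.
Phi = rng.normal(size=(N, C)) / np.sqrt(N)

def isometry_ratio(Phi, k, rng):
    """||Phi a|| / ||a|| for one random k-sparse signal a."""
    C = Phi.shape[1]
    a = np.zeros(C)
    a[rng.choice(C, size=k, replace=False)] = rng.normal(size=k)
    return np.linalg.norm(Phi @ a) / np.linalg.norm(a)

ratios = np.array([isometry_ratio(Phi, k, rng) for _ in range(1000)])
# For a near isometry the ratios concentrate around 1.
print(f"mean={ratios.mean():.3f}  std={ratios.std():.3f}")
```

Checking typical signals this way is exactly the statistical relaxation of the RIP: it says nothing about the worst k-sparse vector, only about an overwhelming majority of them.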
Graphical Models Concepts in Compressed Sensing
Cited by 11 (2 self)
This paper surveys recent work in applying ideas from graphical models and message-passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message-passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
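The ℓ1-penalized least-squares objective mentioned above can be minimized by iterative soft-thresholding (ISTA), a simpler relative of the message-passing algorithms the paper surveys (AMP adds an Onsager correction term that ISTA lacks). A minimal sketch, with illustrative sizes and penalty level:

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative
    soft-thresholding with step size 1/L, where L is the Lipschitz
    constant of the quadratic term's gradient."""
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 40, 3                       # illustrative sizes
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x0
lam = 0.01                                 # illustrative penalty level
x_hat = ista(A, y, lam)
```

Each iteration monotonically decreases the LASSO objective; AMP-style algorithms reach comparable accuracy in far fewer iterations, which is what makes their exact high-dimensional analysis valuable.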
THE STATISTICAL RESTRICTED ISOMETRY PROPERTY AND THE WIGNER SEMICIRCLE DISTRIBUTION OF INCOHERENT DICTIONARIES
, 2009
Cited by 8 (1 self)
Abstract. In this paper we formulate and prove a statistical version of the Candès-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around λ = 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning the RIP for the Heisenberg dictionary of chirp-like functions. Digital signals, or simply signals, can be thought of as complex-valued functions on the finite field Fp, where p is a prime number. The space of signals H = C(Fp) is a Hilbert space of dimension p, with the inner product given by the standard formula
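The eigenvalue fluctuation described above is easy to observe numerically. As a small illustration (the choice of the standard and Fourier bases of C^p, the prime p, and the subset size are assumptions for the demo, though they match the finite-harmonic-analysis setting), the Gram matrix of a random column subset of a two-basis dictionary has eigenvalues clustered around 1:

```python
import numpy as np

p = 101                                   # a prime, as in the F_p signal model
rng = np.random.default_rng(2)

# Dictionary: disjoint union of two orthonormal bases of C^p,
# the standard basis and the (normalized) Fourier basis.
F = np.fft.fft(np.eye(p)) / np.sqrt(p)
D = np.hstack([np.eye(p), F])             # p x 2p, unit-norm columns

# Gram matrix of a random size-p column subset.
S = rng.choice(2 * p, size=p, replace=False)
G = D[:, S].conj().T @ D[:, S]
eigs = np.linalg.eigvalsh(G)
# The eigenvalues fluctuate around 1; their mean is exactly trace(G)/p = 1
# because every column has unit norm.
print(f"min eig={eigs.min():.3f}  max eig={eigs.max():.3f}")
```

A histogram of these eigenvalues (over many random subsets) is what the paper shows converging, after normalization, to the Wigner semicircle distribution.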
Sparse recovery using sparse matrices
, 2008
Cited by 7 (1 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x∗ such that Ax = Ax∗ and ‖x∗‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.