Results 1–10 of 36
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
Cited by 73 (12 self)
Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
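The LP-decoding half of this pairing is easy to sketch. The following is a minimal illustration, not the paper's construction: a random d-left-regular binary matrix stands in for an expander adjacency matrix (random graphs are good expanders only with high probability), and ℓ1 minimization is posed as a linear program via the standard split x = x⁺ − x⁻. All names here are ours.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_binary_matrix(m, n, d, rng):
    """Adjacency matrix of a random d-left-regular bipartite graph.
    (Only an expander with high probability; a stand-in for the paper's
    deterministic constructions.)"""
    Phi = np.zeros((m, n))
    for j in range(n):
        Phi[rng.choice(m, size=d, replace=False), j] = 1.0
    return Phi

def l1_recover(Phi, y):
    """min ||x||_1 s.t. Phi x = y, as an LP over x = xplus - xminus >= 0."""
    n = Phi.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
Phi = sparse_binary_matrix(50, 100, d=8, rng=rng)
x = np.zeros(100)
x[[3, 57, 91]] = [2.0, -1.5, 3.0]   # a 3-sparse signal
x_hat = l1_recover(Phi, Phi @ x)
print(np.linalg.norm(x_hat - x))    # typically near zero in this easy regime
```

Because x itself is feasible for the LP, the minimizer always satisfies ‖x_hat‖1 ≤ ‖x‖1 and Φx_hat = Φx, whatever the expansion quality of the random matrix.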
Subspace pursuit for compressive sensing: Closing the gap between performance and complexity
, 2008
Cited by 64 (4 self)
Abstract — We propose a new algorithm, termed subspace pursuit, for signal reconstruction of sparse and compressible signals with and without noisy perturbations. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction capability of the same order as that of ℓ1-norm optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm is capable of exactly reconstructing arbitrary sparse signals, provided that the linear measurements satisfy the restricted isometry property with a constant parameter which can be described in closed form. In the noisy setting, and in the case where the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by a constant multiple of the measurement and signal perturbation energy. Index Terms — Compressive sensing, orthogonal matching pursuit, reconstruction algorithms, restricted isometry property, sparse signal reconstruction
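A minimal sketch of the algorithm described above, assuming known sparsity K and Gaussian measurements; the variable names and the residual-based stopping rule are our simplifications of the paper's algorithm.

```python
import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=20):
    """Subspace pursuit sketch: expand the support by the K best correlations,
    least-squares fit, then prune back to the K largest coefficients."""
    def proj_residual(T):
        coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        return coef, y - Phi[:, T] @ coef

    T = np.argsort(-np.abs(Phi.T @ y))[:K]          # initial support guess
    coef, r = proj_residual(T)
    for _ in range(max_iter):
        cand = np.union1d(T, np.argsort(-np.abs(Phi.T @ r))[:K])
        c_all, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        T_new = cand[np.argsort(-np.abs(c_all))[:K]]  # prune to K entries
        coef_new, r_new = proj_residual(T_new)
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break                                     # residual stopped shrinking
        T, coef, r = T_new, coef_new, r_new
    x_hat = np.zeros(Phi.shape[1])
    x_hat[T] = coef
    return x_hat

rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 120)) / np.sqrt(40)
x = np.zeros(120)
x[[5, 44, 77]] = [1.0, -2.0, 0.5]
x_hat = subspace_pursuit(Phi, Phi @ x, K=3)
print(np.linalg.norm(x_hat - x))
```

The expand-then-prune step is what distinguishes this from plain orthogonal matching pursuit: an index admitted in an early iteration can later be discarded, which is the backtracking the complexity/performance trade-off rests on.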
Bayesian Compressed Sensing via Belief Propagation
, 2010
Cited by 51 (12 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
Sparse recovery using sparse random matrices
, 2008
Cited by 41 (4 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax = Ax*, and ‖x*‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery
 in Applied and Computational Harmonic Analysis
, 2009
Cited by 27 (11 self)
Abstract—Compressed sensing is a novel technique to acquire sparse signals with few measurements. Normally, compressed sensing uses random projections as measurements. Here we design deterministic measurements and an algorithm to accomplish signal recovery with computational efficiency. A measurement matrix is designed with chirp sequences forming the columns. Chirps are used since an efficient method using FFTs can recover the parameters of a small superposition. We show empirically that this type of matrix is valid for compressed sensing measurements. This is done by a comparison with random projections and a modified restricted isometry property. Further, by implementing our algorithm, simulations show successful recovery of signals with sparsity levels similar to those possible by Matching Pursuit with random measurements. For sufficiently sparse signals, our algorithm recovers the signal with computational complexity O(K log K) for K measurements. This is a significant improvement over existing algorithms.
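The "FFTs recover the parameters" step can be illustrated for a single chirp component. The toy construction below uses our own notation, not the paper's (a prime length K, columns indexed by a chirp rate r and a base frequency m): dechirping with the correct rate leaves a pure sinusoid, whose FFT peaks at the full height K, while every wrong rate spreads the energy flat.

```python
import numpy as np

K = 17                    # prime length keeps distinct chirp rates well separated
ell = np.arange(K)

def chirp(r, m):
    """One column of a K x K^2 chirp measurement matrix: rate r, frequency m."""
    return np.exp(2j * np.pi * (r * ell**2 + m * ell) / K)

r_true, m_true = 3, 7
y = chirp(r_true, m_true)  # a 1-sparse superposition of chirps

# Dechirp with each trial rate; an FFT peak of full height K marks the match.
scores = np.array([np.abs(np.fft.fft(y * chirp(r, 0).conj())) for r in range(K)])
r_hat, m_hat = np.unravel_index(np.argmax(scores), scores.shape)
print(r_hat, m_hat)        # → 3 7
```

For a prime K, a wrong trial rate leaves a quadratic Gauss sum of magnitude √K in every FFT bin, so the correct (r, m) pair stands out by a factor of √K; this is the detection margin the recovery algorithm exploits.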
Measurements vs. bits: Compressed sensing meets information theory
 in Proc. Allerton Conf. on Comm., Control, and Computing
, 2006
Cited by 26 (5 self)
Abstract — Compressed sensing is a new framework for acquiring sparse signals based on the revelation that a small number of linear projections (measurements) of the signal contain enough information for its reconstruction. The foundation of compressed sensing is built on the availability of noise-free measurements. However, measurement noise is unavoidable in analog systems and must be accounted for. We demonstrate that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction. To establish this result, we evaluate the information contained in the measurements by viewing the measurement system as an information-theoretic channel. Combining the capacity of this channel with the rate-distortion function of the sparse signal, we lower bound the rate-distortion performance of a compressed sensing system. Our approach concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus enabling us to benchmark the performance of specific reconstruction algorithms.
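The counting argument behind such a lower bound can be written schematically (the symbols here are ours, not necessarily the paper's notation): m noisy measurements, each conveying at most the per-measurement channel capacity C bits, must jointly supply at least the rate-distortion function R(D) of the sparse source to achieve distortion D.

```latex
% Schematic source-channel converse: measurements as channel uses.
\[
    m \, C \;\ge\; R(D)
    \quad\Longrightarrow\quad
    m \;\ge\; \frac{R(D)}{C}.
\]
```

As the measurement noise grows, C shrinks and the bound on m grows, which is the sense in which noise "dictates the number of measurements" above.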
Sparse Recovery Using Sparse Matrices
Cited by 26 (7 self)
Abstract—We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity, and they make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
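The incremental-update property mentioned above follows directly from column sparsity: changing one signal coordinate touches only the d nonzero rows of the corresponding column, so the sketch can be patched in O(d) time instead of recomputed. A small sketch of this, with all names ours:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(2)
n, m, d = 1000, 100, 8
# Sparse binary measurement matrix: exactly d ones per column.
cols = np.repeat(np.arange(n), d)
rows = np.concatenate([rng.choice(m, d, replace=False) for _ in range(n)])
A = sparse.csc_matrix((np.ones(n * d), (rows, cols)), shape=(m, n))

x = np.zeros(n)
x[[10, 500]] = [3.0, -1.0]
sketch = A @ x

# Incremental update: changing x[i] by delta touches only d sketch entries.
i, delta = 10, 2.0
x[i] += delta
sketch_inc = sketch + delta * A[:, i].toarray().ravel()
print(np.allclose(sketch_inc, A @ x))  # → True
```

The same O(d)-per-update pattern is what makes these matrices natural for data stream computing, where the signal arrives as a sequence of coordinate increments.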
DESIGNING COMPRESSIVE SENSING DNA MICROARRAYS
Cited by 14 (3 self)
A Compressive Sensing Microarray (CSM) is a new device for DNA-based identification of target organisms that leverages the nascent theory of Compressive Sensing (CS). In contrast to a conventional DNA microarray, in which each genetic sensor spot is designed to respond to a single target organism, in a CSM each sensor spot responds to a group of targets. As a result, significantly fewer total sensor spots are required. In this paper, we study how to design group identifier probes that simultaneously account for both the constraints from the CS theory and the biochemistry of probe-target DNA hybridization. We employ Belief Propagation as a CS recovery method to estimate target concentrations from the microarray intensities.
LS-CS-residual (LS-CS): Compressive sensing on the least squares residual
 IEEE TSP
Cited by 13 (9 self)
Abstract—We consider the problem of recursively and causally reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of noisy linear measurements. The sparsity pattern is assumed to change slowly with time. The key idea of our proposed solution, LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the observation by CS on the least squares (LS) residual computed using the previous estimate of the support. We bound the CS-residual error and show that when the number of available measurements is small, the bound is much smaller than that on CS error if the sparsity pattern changes slowly enough. Most importantly, under fairly mild assumptions, we show “stability” of LS-CS over time for a signal model that allows support additions and removals, and that allows coefficients to gradually increase (decrease) until they reach a constant value (become zero). By “stability,” we mean that the number of misses and extras in the support estimate remain bounded by time-invariant values (in turn implying a time-invariant bound on LS-CS error). Numerical experiments, and a dynamic MRI example, backing our claims are shown. Index Terms—Compressive sensing, least squares, recursive reconstruction, sparse reconstructions.
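The core replace-the-observation-with-the-residual step can be sketched directly. The helper `ls_residual` below is our illustrative rendering, not the paper's code, and the subsequent CS solve on the residual is omitted: the point is only that when the support estimate is nearly correct, the residual is driven by the few missed coefficients alone.

```python
import numpy as np

def ls_residual(A, y, T):
    """LS-CS key step: least-squares estimate restricted to the previous
    support estimate T, and the residual that CS is then run on."""
    x_ls = np.zeros(A.shape[1])
    x_ls[T], *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
    return x_ls, y - A @ x_ls

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x = np.zeros(80)
x[[2, 9, 40]] = [1.0, -1.0, 0.5]
y = A @ x

# Suppose the previous support estimate missed the new coefficient at index 40:
x_ls, r = ls_residual(A, y, T=[2, 9])
# The residual carries only the missed coefficient's (projected) contribution,
# so it has far less energy than y itself.
print(np.linalg.norm(r) < np.linalg.norm(y))  # → True
```

That energy reduction is why CS on the residual tolerates fewer measurements than CS on the raw observation when the support changes slowly, which is the claim bounded in the abstract above.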
Quantifying Statistical Interdependence by Message Passing on Graphs, Part II: Multi-Dimensional Point Processes
, 2009
Cited by 12 (10 self)
Stochastic event synchrony (SES) is a technique to quantify the similarity of pairs of signals. First, “events” are extracted from the two given time series. Next, one tries to align events from one time series with events from the other. The better the alignment, the more similar the two time series are considered to be. In Part I, one-dimensional events are considered; this paper (Part II) concerns multi-dimensional events. Although the basic idea is similar, the extension to multi-dimensional point processes involves a significantly harder combinatorial problem, and is therefore non-trivial. Also in the multi-dimensional case, the problem of jointly computing the pairwise alignment and SES parameters is cast as a statistical inference problem. This problem is solved by coordinate descent, more specifically, by alternating the following two steps: (i) one estimates the SES parameters from a given pairwise alignment; (ii) with the resulting estimates, one refines the pairwise alignment. The SES parameters are computed by maximum a posteriori (MAP) estimation (Step 1), in