Results 1–10 of 57
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
 California Institute of Technology, Pasadena
, 2008
Abstract

Cited by 339 (4 self)
Abstract. Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log² N), where N is the length of the signal.
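The iteration this abstract alludes to (a signal proxy formed by one transpose multiply, support merging, least squares, and pruning) can be sketched as follows. This is an illustrative reading of the published algorithm, not the authors' reference code; the function name and parameters are assumptions:

```python
import numpy as np

def cosamp(Phi, y, s, iters=20, tol=1e-10):
    """Sketch of CoSaMP: recover an s-sparse x from noiseless y = Phi @ x."""
    n = Phi.shape[1]
    x = np.zeros(n)
    residual = y.astype(float).copy()
    for _ in range(iters):
        proxy = Phi.T @ residual                        # proxy via one transpose multiply
        omega = np.argsort(np.abs(proxy))[-2 * s:]      # 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x))  # merge with current support
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]               # prune to the s largest coefficients
        x = np.zeros(n)
        x[support[keep]] = b[keep]
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
    return x
```

Note that the only interactions with the sampling matrix are the `Phi.T @ residual` and `Phi @ x` products, which is what makes the method cheap when those multiplies are fast.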
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
Abstract

Cited by 73 (12 self)
Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
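As a toy illustration of the connection the abstract draws, one can sample the adjacency matrix of a random left-regular bipartite graph (such graphs are unbalanced expanders with high probability) and check ℓ1-norm preservation on a sparse vector empirically. All sizes below are assumptions chosen for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, k = 200, 60, 6, 4   # signal length, sketch length, left degree, sparsity (assumed)

# Adjacency matrix of a random left-d-regular bipartite graph: each left node
# (column) connects to d distinct right nodes (rows).
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, size=d, replace=False), j] = 1.0

# Empirical RIP-1-style check on a k-sparse vector: ||Phi x||_1 is at most
# d * ||x||_1 (triangle inequality) and, for an expander, not much less.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
ratio = np.abs(Phi @ x).sum() / (d * np.abs(x).sum())
```

The upper bound `ratio <= 1` is exact; the expansion property is what keeps the ratio bounded away from zero for all sparse vectors at once.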
One sketch for all: Fast algorithms for compressed sensing
 In Proc. 39th ACM Symp. Theory of Computing
, 2007
Abstract

Cited by 59 (11 self)
Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements:
1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction.
2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length.
3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length.
4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound.
Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata.
Subspace pursuit for compressive sensing: Closing the gap between performance and complexity
, 2008
Abstract

Cited by 59 (4 self)
Abstract — We propose a new algorithm, termed subspace pursuit, for reconstruction of sparse and compressible signals with and without noisy perturbations. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction capability of the same order as that of ℓ1-norm optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals, provided that the linear measurements satisfy the restricted isometry property with a constant parameter which can be described in closed form. In the noisy setting, and in the case where the signal is not exactly sparse, the mean squared error of the reconstruction is upper bounded by a constant multiple of the measurement and signal perturbation energy. Index Terms — Compressive sensing, orthogonal matching pursuit, reconstruction algorithms, restricted isometry property, sparse signal reconstruction
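A minimal sketch of the subspace-pursuit idea (expand the candidate support using residual correlations, solve least squares, prune back to size s). The names, iteration count, and stopping rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def subspace_pursuit(Phi, y, s, iters=15):
    """Sketch of subspace pursuit for s-sparse recovery from y = Phi @ x."""
    n = Phi.shape[1]
    support = np.argsort(np.abs(Phi.T @ y))[-s:]       # s largest correlations
    for _ in range(iters):
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ b
        extra = np.argsort(np.abs(Phi.T @ residual))[-s:]
        merged = np.union1d(support, extra)            # expand the candidate subspace
        c, *_ = np.linalg.lstsq(Phi[:, merged], y, rcond=None)
        support = merged[np.argsort(np.abs(c))[-s:]]   # prune back to size s
    b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(n)
    x[support] = b
    return x
```

The per-iteration cost is dominated by correlations and a small least-squares solve, which is the source of the OMP-like complexity the abstract mentions.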
Bayesian Compressed Sensing via Belief Propagation
, 2010
Abstract

Cited by 51 (12 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
Sparse recovery using sparse random matrices
, 2008
Abstract

Cited by 40 (4 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax = Ax*, and ‖x*‖₁ is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
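The procedure the abstract studies — ℓ1 minimization subject to Az = b with a sparse binary A — can be tried directly with a generic LP solver via the standard split z = u − v. The sizes and the d-ones-per-column construction below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, d, k = 40, 20, 8, 2              # signal length, sketch length, column weight, sparsity

# Sparse binary A: d ones placed uniformly at random in each column.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
b = A @ x

# min ||z||_1  s.t.  Az = b, written as an LP over z = u - v with u, v >= 0.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
```

Since the true x is itself feasible, the LP optimum's ℓ1 norm can never exceed ‖x‖₁, which gives a quick sanity check on the solver output.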
Compressed Sensing Reconstruction via Belief Propagation
, 2006
Abstract

Cited by 39 (8 self)
Compressed sensing is an emerging field that makes it possible to reconstruct sparse or compressible signals from a small number of linear projections. We describe a specific measurement scheme using an LDPC-like measurement matrix, which is a real-valued analogue to LDPC techniques over a finite alphabet. We then describe the reconstruction details for mixture Gaussian signals. The technique can be extended to additional compressible signal models.
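A real-valued, LDPC-like sparse measurement matrix in the spirit of this abstract can be sampled as below, with a few random-sign nonzeros per row. The sizes and the ±1 alphabet are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, row_weight = 64, 16, 8           # illustrative sizes (assumed)

# Each row carries row_weight random-sign nonzeros, a real-valued analogue
# of a sparse LDPC parity-check row.
Phi = np.zeros((m, n))
for i in range(m):
    cols = rng.choice(n, size=row_weight, replace=False)
    Phi[i, cols] = rng.choice([-1.0, 1.0], size=row_weight)

x = np.zeros(n)
x[rng.choice(n, size=3, replace=False)] = rng.normal(size=3)
y = Phi @ x                            # each measurement touches only row_weight entries
```

Row sparsity is what keeps the associated factor graph small, which is the computational point the abstract makes about BP-style reconstruction.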
Explicit constructions for compressed sensing of sparse signals
 In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms
, 2008
Abstract

Cited by 32 (3 self)
Over recent years, a new approach for obtaining a succinct approximate representation of n-dimensional …
Sudocodes – Fast Measurement and Reconstruction of Sparse Signals
 Proc. IEEE Int. Symposium on Information Theory (ISIT)
, 2006
Abstract

Cited by 30 (2 self)
Abstract — Sudocodes are a new scheme for lossless compressive sampling and reconstruction of sparse signals. Consider a sparse signal x ∈ R^N containing only K ≪ N nonzero values. Sudo-encoding computes the codeword y ∈ R^M via the linear matrix-vector multiplication y = Φx, with K < M ≪ N. We propose a nonadaptive construction of a sparse Φ comprising only the values 0 and 1; hence the computation of y involves only sums of subsets of the elements of x. An accompanying sudo-decoding strategy efficiently recovers x given y. Sudocodes require only M = O(K log(N)) measurements for exact reconstruction, with worst-case computational complexity O(K log(K) log(N)). Sudocodes can be used as erasure codes for real-valued data and have potential applications in peer-to-peer networks and distributed data storage systems. They are also easily extended to signals that are sparse in arbitrary bases.
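The decoding intuition behind a 0/1 measurement matrix — a zero measurement wipes out its entire support, and a row reduced to a single unknown reveals that entry exactly — admits a toy peeling sketch. This is illustrative code, not the paper's sudo-decoder, and it assumes generic real values so that distinct nonzeros never cancel within a row:

```python
import numpy as np

def peel_decode(Phi, y):
    """Toy peeling decoder for a 0/1 measurement matrix Phi and y = Phi @ x."""
    m, n = Phi.shape
    x = np.full(n, np.nan)                      # nan marks a still-unknown entry
    changed = True
    while changed and np.isnan(x).any():
        changed = False
        for i in range(m):
            unknown = [j for j in range(n) if Phi[i, j] and np.isnan(x[j])]
            if not unknown:
                continue
            resid = y[i] - sum(x[j] for j in range(n)
                               if Phi[i, j] and not np.isnan(x[j]))
            if resid == 0:                      # zero residual: remaining entries are zero
                for j in unknown:
                    x[j] = 0.0
                changed = True
            elif len(unknown) == 1:             # one unknown left: revealed exactly
                x[unknown[0]] = resid
                changed = True
    return x
```

Each resolved entry can shrink other rows' unknown sets, so resolutions cascade, which is what gives peeling-style decoders their low complexity.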
Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. Submitted for publication
, 2006
Abstract

Cited by 24 (5 self)
Abstract. We can approximately recover a sparse signal with limited noise, i.e., a vector of length d with at least d − m zeros or near-zeros, using little more than m log(d) nonadaptive linear measurements rather than the d measurements needed to recover an arbitrary signal of length d. Several research communities are interested in techniques for measuring and recovering such signals, and a variety of approaches have been proposed. We focus on two important properties of such algorithms:
• Uniformity. A single measurement matrix should work simultaneously for all signals.
• Computational efficiency. The time to recover such an m-sparse signal should be close to the obvious lower bound, m log(d/m).
To date, algorithms for signal recovery that provide a uniform measurement matrix with approximately the optimal number of measurements, such as first proposed by Donoho and his collaborators, and, separately, by Candès and Tao, are based on linear programming and require time poly(d) instead of m polylog(d). On the other hand, fast decoding algorithms to date from the Theoretical Computer Science and Database communities fail with probability at least 1/poly(d), whereas we need failure probability no more than around 1/d^m to achieve a uniform failure guarantee. This paper develops a new method for recovering m-sparse signals that is simultaneously uniform …