Results 11–20 of 52
An Information-Theoretic Approach to Distributed Compressed Sensing
in Proc. 43rd Allerton Conf. Communication, Control, and Computing, 2005
Abstract

Cited by 18 (6 self)
Compressed sensing is an emerging field based on the revelation that a small group of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a concept that we term the joint sparsity of a signal ensemble. We study a model for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
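As a concrete illustration of joint recovery under one joint sparsity model (all signals sharing a common sparse support), the following sketch runs a simultaneous greedy pursuit (SOMP-style) in plain NumPy. This is an illustrative stand-in, not the algorithm proposed in the paper; the sensing matrix, dimensions, and selection rule are all assumptions.

```python
import numpy as np

def somp(Y, A, k):
    """Simultaneous OMP: recover J signals that share one sparse support.

    Y : (n, J) stacked measurements, A : (n, N) sensing matrix.
    Returns an (N, J) estimate with at most k nonzero rows.
    """
    N = A.shape[1]
    R = Y.copy()                        # joint residual
    support = []
    for _ in range(k):
        # score each atom by its total correlation with all residual channels
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = 0.0           # never re-pick an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, Y, rcond=None)  # joint LS refit
        R = Y - As @ coef
    X = np.zeros((N, Y.shape[1]))
    X[support, :] = coef
    return X

rng = np.random.default_rng(0)
N, n, J, k = 64, 32, 3, 4
A = rng.standard_normal((n, N)) / np.sqrt(n)
supp = rng.choice(N, size=k, replace=False)
X_true = np.zeros((N, J))
X_true[supp, :] = rng.standard_normal((k, J))  # shared support, distinct values
X_hat = somp(A @ X_true, A, k)
```

Because the three signals vote jointly on each atom, the shared support is identified from fewer measurements per sensor than separate recovery would need.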
Precise Undersampling Theorems
Abstract

Cited by 18 (2 self)
Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest – provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large, k ≪ N.
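The sample result defines the measurement count n only implicitly, through n ≥ 2k · log(N/n). A minimal sketch extracts a concrete number by fixed-point iteration; since the bound is asymptotic, the finite-size value is only a rough guide.

```python
import math

def required_measurements(k, N, iters=100):
    """Solve n = 2*k*log(N/n) for n by fixed-point iteration.

    The underlying bound is asymptotic in (k, n, N), so this
    number is illustrative rather than an exact guarantee.
    """
    n = 2.0 * k * math.log(N / (2.0 * k))  # crude initial guess
    for _ in range(iters):
        n = 2.0 * k * math.log(N / n)
    return n

# e.g. a 10-sparse signal of length 10**6
n_est = required_measurements(10, 10**6)
```

The iteration contracts because |d/dn [2k log(N/n)]| = 2k/n < 1 at the fixed point whenever N/n > e, which holds throughout the sparse regime k ≪ N.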
A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes
in Conference on Information Sciences and Systems (CISS), Princeton, ISBN: 978-1-4244-2246-3, pp. 11–15, 2008
Abstract

Cited by 17 (3 self)
This paper proposes a deterministic compressed sensing matrix that comes by design with a very fast reconstruction algorithm, in the sense that its complexity depends only on the number of measurements n and not on the signal dimension N. The matrix construction is based on the second order Reed-Muller codes and associated functions. This matrix does not satisfy the RIP uniformly with respect to all k-sparse vectors, but it acts as a near-isometry on k-sparse vectors with very high probability.
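A simplified relative of this family can be written down directly: columns are discrete chirps exp(2πi(rt² + bt)/n), giving an n × n² matrix whose worst-case column coherence is 1/√n when n is prime. This sketch only checks the coherence; the exact second-order Reed-Muller indexing (quadratic forms over binary vectors) and the fast reconstruction algorithm of the paper are not reproduced here.

```python
import numpy as np

def chirp_matrix(n):
    """n x n**2 complex matrix whose columns are the discrete chirps
    exp(2*pi*i*(r*t**2 + b*t)/n), indexed by (r, b) in Z_n x Z_n.
    A simplified relative of the second-order Reed-Muller construction."""
    t = np.arange(n)
    cols = []
    for r in range(n):
        for b in range(n):
            cols.append(np.exp(2j * np.pi * (r * t * t + b * t) / n))
    return np.stack(cols, axis=1) / np.sqrt(n)

n = 7                                   # prime length: coherence is exactly 1/sqrt(n)
Phi = chirp_matrix(n)
G = np.abs(Phi.conj().T @ Phi)          # Gram magnitudes
off = G - np.eye(n * n)                 # off-diagonal coherences
```

For distinct chirps with the same quadratic coefficient r the inner product vanishes; for different r it is a Gauss sum of magnitude √n, so every off-diagonal entry of G is either 0 or 1/√n.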
Graphical Models Concepts in Compressed Sensing
Abstract

Cited by 12 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
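A minimal AMP iteration for a noiseless LASSO-style problem looks as follows. The distinctive ingredient is the Onsager correction added to the residual; the soft-threshold rule θ = α·‖z‖/√n and the parameters below are heuristic choices for illustration, not the tuned policy analyzed in the survey.

```python
import numpy as np

def soft(u, theta):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_lasso(y, A, alpha=2.0, iters=30):
    """Approximate message passing sketch for l1-penalized recovery."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)  # noise-level estimate
        x_new = soft(x + A.T @ z, theta)
        # Onsager correction: (1/delta) * <eta'> * z, with delta = n/N
        onsager = (z / n) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(1)
N, n, k = 500, 250, 25
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = amp_lasso(A @ x0, A)
```

At sparsity k/n = 0.1 and undersampling n/N = 0.5, well inside the LASSO phase transition, the iteration converges rapidly to the true signal; dropping the Onsager term turns it into plain iterative soft thresholding, which converges far more slowly.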
Compressed Synthetic Aperture Radar
, 2010
Abstract

Cited by 11 (3 self)
In this paper, we introduce a new synthetic aperture radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. This new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging much wider swaths, and reduced on-board storage requirements.
Compressed sensing: how sharp is the restricted isometry property?
, 2009
Abstract

Cited by 11 (2 self)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous property assumed in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
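The combinatorial nature referred to above can be made tangible by computing the RIP constant exhaustively for toy sizes: the cost is binomial in the signal dimension, which is exactly why quantitative bounds rather than direct computation are needed at scale. All dimensions below are arbitrary illustrations.

```python
import itertools
import numpy as np

def rip_constant(A, k):
    """Exhaustive RIP constant of order k: the smallest delta with
    (1-delta)*||x||^2 <= ||A x||^2 <= (1+delta)*||x||^2 for all
    k-sparse x. Exponential cost -- feasible only for tiny matrices."""
    N = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(N), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1.0), abs(s[-1] ** 2 - 1.0))
    return delta

rng = np.random.default_rng(2)
n, N = 12, 18
A = rng.standard_normal((n, N)) / np.sqrt(n)   # variance-1/n Gaussian
d2 = rip_constant(A, 2)
d3 = rip_constant(A, 3)
```

Since every 2-sparse vector is also 3-sparse, the constants are monotone in k; watching them grow with k mirrors, in miniature, the bounds the paper quantifies.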
Sparse Recovery of Positive Signals with Minimal Expansion
, 2009
Abstract

Cited by 11 (1 self)
We investigate the sparse recovery problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than ℓ1 minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can potentially make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrix of an expander graph with much smaller expansion coefficient. We present a necessary and sufficient condition for ℓ1 optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a “unique” vector in the constraint set, which opens the door to ...
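A sparse measurement operator of the kind discussed can be sketched as the adjacency matrix of a random left-d-regular bipartite graph. Random choice yields good expansion with high probability, but this sketch does not verify expansion and omits the paper's perturbation and recovery steps; all sizes are illustrative.

```python
import numpy as np

def left_regular_adjacency(m, N, d, rng):
    """0/1 adjacency matrix of a random left-d-regular bipartite graph:
    each of the N left nodes (signal coordinates) connects to exactly
    d of the m right nodes (measurements)."""
    A = np.zeros((m, N))
    for j in range(N):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, j] = 1.0
    return A

rng = np.random.default_rng(3)
m, N, d, k = 40, 100, 5, 4
A = left_regular_adjacency(m, N, d, rng)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.random(k) + 0.1  # nonnegative, sparse
y = A @ x0
```

Each measurement is a sum of at most a few signal entries, so both the matrix and the measurement vector stay sparse; this is what makes such schemes practical for microarray-style applications where dense mixing is infeasible.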
Information-Theoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing
, 2011
Abstract

Cited by 9 (3 self)
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [KMS+11], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction succeeds with high probability from d(pX)·n + o(n) measurements taken according to a band-diagonal matrix. For sparse signals, i.e. sequences of dimension n with k(n) nonzero entries, this implies reconstruction from k(n) + o(n) measurements. For ‘discrete’ signals, i.e. signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result ...
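A rough sketch of a band-diagonal sensing matrix in the spirit of this construction: each row couples only to a window of nearby columns. The actual spatially coupled ensembles use block structure and carefully chosen variance profiles; the centering rule and scaling below are simplifications for illustration only.

```python
import numpy as np

def band_diagonal_matrix(n, N, bandwidth, rng):
    """Gaussian sensing matrix whose entries vanish outside a band:
    row i carries Gaussian weights only on columns near i*N/n.
    A simplification of the spatially coupled ensembles."""
    A = np.zeros((n, N))
    for i in range(n):
        c = int(round(i * N / n))               # band centre in column space
        lo, hi = max(0, c - bandwidth), min(N, c + bandwidth + 1)
        A[i, lo:hi] = rng.standard_normal(hi - lo) / np.sqrt(2 * bandwidth + 1)
    return A

rng = np.random.default_rng(4)
A = band_diagonal_matrix(100, 400, 30, rng)
```

The band structure is what lets a reconstruction "wave" propagate along the matrix during message passing, which is the mechanism behind the undersampling rates approaching d(pX) described in the abstract.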
Phase transitions for greedy sparse approximation algorithms (submitted)
, 2009
Abstract

Cited by 8 (5 self)
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven, using the ubiquitous Restricted Isometry Property (RIP) [9], to have optimal-order uniform recovery guarantees. However, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. We present a framework in which this task can be achieved, translating these conditions for Gaussian measurement matrices into requirements on the signal’s sparsity level, size and number of measurements. We illustrate this approach on three of the state-of-the-art greedy algorithms: CoSaMP [27], Subspace Pursuit (SP) [11] and Iterated Hard Thresholding (IHT) [6]. Designed to allow a direct comparison of existing theory, our framework implies that IHT, the lowest of the three in computational cost, also requires fewer compressed sensing measurements than CoSaMP and SP. Key words: Compressed sensing, greedy algorithms, sparse solutions to underdetermined ...
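Of the three greedy algorithms, IHT is simple enough to state in a few lines: a gradient step followed by keeping the k largest-magnitude entries. The sketch below uses illustrative dimensions; the step size μ = 1 is a common default for variance-1/n Gaussian matrices, not a universal choice.

```python
import numpy as np

def iht(y, A, k, iters=300, mu=1.0):
    """Iterated hard thresholding: x <- H_k(x + mu * A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries."""
    N = A.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        g = x + mu * (A.T @ (y - A @ x))            # gradient step
        keep = np.argpartition(np.abs(g), -k)[-k:]  # k largest entries
        x = np.zeros(N)
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(5)
N, n, k = 256, 128, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)   # variance-1/n Gaussian
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A @ x0, A, k)
```

Each iteration costs only two matrix-vector products plus a partial sort, which is the low computational cost the abstract highlights relative to CoSaMP and SP.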