## Distributed compressed sensing (2005)

Citations: 90 (22 self)

### BibTeX

    @TECHREPORT{Baron05distributedcompressed,
      author      = {Dror Baron and Michael B. Wakin and Marco F. Duarte and Shriram Sarvotham and Richard G. Baraniuk},
      title       = {Distributed compressed sensing},
      institution = {},
      year        = {2005}
    }

### Abstract

Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
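The abstract's core claim — that a small number M ≪ N of incoherent linear projections suffices to reconstruct a K-sparse signal — can be illustrated with a toy sketch. All sizes below are illustrative assumptions, and the brute-force ℓ0 decoder is a textbook device for tiny problems, not one of the paper's algorithms:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: length-N signal with K nonzeros, M < N projections.
N, K, M = 12, 2, 6

x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N))  # random (incoherent) measurement matrix
y = Phi @ x                        # M linear projections instead of N samples

# Combinatorial (ell_0) decoder: search all (N choose K) candidate supports
# for one whose columns explain the measurements exactly.
x_hat = None
for S in itertools.combinations(range(N), K):
    coef, *_ = np.linalg.lstsq(Phi[:, list(S)], y, rcond=None)
    if np.linalg.norm(y - Phi[:, list(S)] @ coef) < 1e-10:
        x_hat = np.zeros(N)
        x_hat[list(S)] = coef
        break
```

With generic random Phi and M ≥ 2K, only the true support fits the measurements exactly (almost surely), so the search recovers x.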

### Citations

9087 |
Elements of Information Theory
- Cover, Thomas
- 1991
Citation Context: ...buted coding of so-called “sources with memory.” (We briefly mention some limitations here and elaborate in Section 2.1.3.) The direct implementation for such sources would require huge lookup tables [13, 25]. Furthermore, approaches combining pre- or post-processing of the data to remove intra-signal correlations combined with Slepian-Wolf coding for the inter-signal correlations appear to have limited a...

2267 |
A wavelet tour of signal processing
- Mallat
- 1998
Citation Context: ...orm coefficients can be transmitted or stored rather than N ≫ K signal samples. For example, smooth signals are sparse in the Fourier basis, and piecewise smooth signals are sparse in a wavelet basis [7]; the commercial coding standards MP3 [8], JPEG [9], and JPEG2000 [10] directly exploit this sparsity. 1.1 Distributed source coding While the theory and practice of compression have been well develop...

1864 | Compressed sensing
- Donoho
- 2006
Citation Context: ...ed in Theorem 10 involves combinatorial searches for estimating the innovation components. More efficient techniques could also be employed (including several proposed for CS in the presence of noise [38, 39, 45, 48, 51]). It is reasonable to expect similar behavior; as the error in estimating the common component diminishes, these techniques should perform similarly to their noiseless analogues (Basis Pursuit [45, 4...

1775 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1999
Citation Context: ...he vertical axis indicates the probability that the linear program yields the correct answer x as a function of the oversampling factor c = M/K. This optimization problem, also known as Basis Pursuit [51], is significantly more approachable and can be solved with traditional linear programming techniques whose computational complexities are polynomial in N. There is no free lunch, however; according t...
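The excerpt above notes that Basis Pursuit reduces to a linear program. A minimal sketch of that reduction, with hypothetical problem sizes and SciPy's generic LP solver (not any code from the paper): split x = u − v with u, v ≥ 0, so that minimizing Σu + Σv equals minimizing ‖x‖₁.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, M = 64, 3, 24  # hypothetical sizes; M = cK with oversampling c = 8

x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
y = Phi @ x

# Basis Pursuit: min ||x||_1 s.t. Phi x = y, recast as an LP by splitting
# x = u - v with u, v >= 0 and minimizing sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
```

With sufficient oversampling the LP solution coincides with the sparse signal, matching the recovery-probability behavior the excerpt describes.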

1401 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...ng (CS) A new framework for single-signal sensing and compression has developed recently under the rubric of Compressed Sensing (CS). CS builds on the ground-breaking work of Candès, Romberg, and Tao [27] and Donoho [28], who showed that if a signal has a sparse representation in one basis then it can be recovered from a small number of projections onto a second basis that is incoherent with the first...

1393 | Near Shannon limit error-correcting coding and decoding: Turbo-codes
- Berrou, Glavieux, et al.
- 1993
Citation Context: ...n to match across signals — as in JSM-2 — then more powerful algorithms like SOMP can be used. The ACIE algorithm is similar in spirit to other iterative estimation algorithms, such as turbo decoding [65]. 5.3.3 Simulations for JSM-3 We now present simulations of JSM-3 recovery for the following scenario. Consider J signals of length N = 50 containing a common white noise component zC(n) ∼ N(0,1) for ...

1305 | Embedded image coding using zerotrees of wavelet coefficients - Shapiro - 1993

917 |
Noiseless coding of correlated information sources

- Slepian, Wolf
- 1973
Citation Context: ...Mj ≥ K′j + K∩ + 1, j = 1, 2, ..., J (15a); Σj Mj ≥ K′C + Σj K′j + J · K∩ + 1 (15b). The measurement rates required in Theorem 4 are somewhat similar to those in the Slepian-Wolf theorem [14], where each signal must be encoded above its conditional entropy rate, and the entire collection must be coded above the joint entropy rate. In particular, we see that the measurement rate bounds ref...

887 | Near-optimal signal recovery from random projections: universal encoding strategies
- Candès, Tao
- 2006
Citation Context: ...versal in the sense that the sensor can apply the same measurement mechanism no matter what basis the signal is sparse in (and thus the coding algorithm is independent of the sparsity-inducing basis) [28, 29]. While powerful, the CS theory at present is designed mainly to exploit intra-signal structures at a single sensor. To the best of our knowledge, the only work to date that applies CS in a multisenso...

870 |
Introduction to Graph Theory
- West
- 1999
Citation Context: ...note the single node matched to k by an edge in E, and we set C(k) = j. To prove the existence of such a matching within the graph, we invoke a version of Hall’s marriage theorem for bipartite graphs [71]. Hall’s theorem states that within a bipartite graph (V1, V2, E), there exists a matching that assigns each element of V1 to a unique element of V2 if for any collection of elements Π ⊆ V1, the set E(Π...

802 |
Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
- 2006
Citation Context: ...ed in Theorem 10 involves combinatorial searches for estimating the innovation components. More efficient techniques could also be employed (including several proposed for CS in the presence of noise [38, 39, 45, 48, 51]). It is reasonable to expect similar behavior; as the error in estimating the common component diminishes, these techniques should perform similarly to their noiseless analogues (Basis Pursuit [45, 4...

776 |
Wireless integrated network sensors
- Pottie, Kaiser
- 2000
Citation Context: ...y large number of distributed sensor nodes can be programmed to perform a variety of data acquisition tasks as well as to network themselves to communicate their results to a central collection point [11, 12]. In many sensor networks, and in particular battery-powered ones, communication energy and bandwidth are scarce resources; both factors make the reduction of communication critical. Fortunately, sinc...

703 | Decoding by linear programming - Candès, Tao - 2005

688 |
Compressive sampling
- Candès
Citation Context: ...e single-signal CS literature that we should be able to leverage, including variants of Basis Pursuit with Denoising [63, 69], robust iterative recovery algorithms [64], CS noise sensitivity analysis [25, 34], the Dantzig Selector [33], and one-bit CS [70]. Third, in some applications, the linear program associated with some DCS decoders (in JSM-1 and JSM-3) could prove too computationally intense. As we ...

445 | The Dantzig selector: statistical estimation when p is much larger than n, Annals of Statistics
- Candès, Tao
Citation Context: ...rformance. For example, the measurements will typically be real numbers that must be quantized and encoded, which will gradually degrade the reconstruction quality as the quantization becomes coarser [39] (see also Section 7). Characterizing DCS in light of practical considerations such as rate-distortion tradeoffs, power consumption in sensor networks, etc., are topics of ongoing research [40]. 1.5 P...

369 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2009
Citation Context: ...al from incoherent measurements with high probability, at the expense of slightly more measurements [26, 43]. Algorithms inspired by OMP, such as regularized orthogonal matching pursuit [44], CoSaMP [45], and Subspace Pursuit [46] have been shown to attain similar guarantees to those of their optimization-based counterparts. In the following, we will exploit both Basis Pursuit and greedy algorithms f...

327 | Distributed source coding using syndromes (DISCUS) design and construction
- Pradhan, Ramchandran
- 1999
Citation Context: ...as the distinct advantage that the sensors need not collaborate while encoding their measurements, which saves valuable communication overhead. Unfortunately, however, most existing coding algorithms [15, 16] exploit only inter-signal correlations and not intra-signal correlations. To date there has been only limited progress on distributed coding of so-called “sources with memory.” (We briefly mention so...

325 | Compressive sensing
- Baraniuk
- 2007
Citation Context: ...pt is that the ordering of these coefficients is important. For JSM-2, we can extend the notion of simultaneous sparsity for ℓp-sparse signals whose sorted coefficients obey roughly the same ordering [66]. This condition could perhaps be enforced as an ℓp constraint on the composite signal { Σj |xj(1)|, Σj |xj(2)|, ..., Σj |xj(N)| }, with each sum taken over j = 1, ..., J. Second, (random) measurements are real numbers; qua...

320 | Image compression through wavelet transform coding - DeVore, Jawerth, et al. - 1992

320 | A simple proof of the restricted isometry property for random matrices, Constructive Approximation
- Baraniuk, Davenport, et al.
Citation Context: ... that is incoherent with all others. Hence, when using a random basis, CS is universal in the sense that the sensor can apply the same measurement mechanism no matter what basis sparsifies the signal [27]. While powerful, the CS theory at present is designed mainly to exploit intra-signal structures at a single sensor. In a multi-sensor setting, one can naively obtain separate measurements from each s...

300 | Curvelets - a surprisingly effective nonadaptive representation for objects with edges
- Candès, Donoho
- 2000
Citation Context: ...e ℓp norm, we can write that ‖θ‖0 = K. (The ℓ0 “norm” ‖θ‖0 merely counts the number of nonzero entries in the vector θ.) Various expansions, including wavelets [7], Gabor bases [7], curvelets [43], etc., are widely used for representation and compression of natural signals, images, and other data. In this paper, we will focus on exactly K-sparse signals and defer discussion of the more general...

296 |
Connecting the Physical World with Pervasive Networks
- Estrin, Culler, et al.
- 2002
Citation Context: ...y large number of distributed sensor nodes can be programmed to perform a variety of data acquisition tasks as well as to network themselves to communicate their results to a central collection point [11, 12]. In many sensor networks, and in particular battery-powered ones, communication energy and bandwidth are scarce resources; both factors make the reduction of communication critical. Fortunately, sinc...

221 | Algorithms for simultaneous sparse approximation. part I: Greedy pursuit
- Tropp, Gilbert, et al.
- 2006
Citation Context: ...our DCS framework. SOMP is a variant of OMP that seeks to identify Ω one element at a time. (A similar simultaneous sparse approximation algorithm has been proposed using convex optimization; see [57] for details.) We dub the DCS-tailored SOMP algorithm DCS-SOMP. To adapt the original SOMP algorithm to our setting, we first extend it to cover a different measurement basis Φj for each signal xj. Th...
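The simultaneous-recovery idea in the excerpt above can be sketched with a simplified SOMP-style loop: at each iteration, pick the index whose columns, summed over all J residuals, carry the most energy, then re-fit every signal on the indices chosen so far. All sizes are hypothetical, and this is a simplification of DCS-SOMP for illustration, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, M, J = 128, 4, 24, 8  # hypothetical sizes; one support shared by J sensors

support = sorted(rng.choice(N, K, replace=False).tolist())
Phis, ys = [], []
for _ in range(J):
    x = np.zeros(N)
    x[support] = rng.standard_normal(K)          # common support, distinct values
    Phi = rng.standard_normal((M, N))
    Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm columns (sensor j's basis)
    Phis.append(Phi)
    ys.append(Phi @ x)

# SOMP-style greedy selection across all J measurement vectors.
residuals = [y.copy() for y in ys]
idx = []
for _ in range(K):
    score = sum(np.abs(P.T @ r) for P, r in zip(Phis, residuals))
    idx.append(int(np.argmax(score)))
    for j in range(J):                           # re-fit each signal on chosen indices
        coef, *_ = np.linalg.lstsq(Phis[j][:, idx], ys[j], rcond=None)
        residuals[j] = ys[j] - Phis[j][:, idx] @ coef
```

Summing correlation scores across signals is what lets the joint decoder succeed with fewer measurements per sensor than independent recovery would need.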

208 |
JPEG Still Image Data Compression Standard
- Pennebaker, Mitchell
- 1993
Citation Context: ...r than N ≫ K signal samples. For example, smooth signals are sparse in the Fourier basis, and piecewise smooth signals are sparse in a wavelet basis [7]; the commercial coding standards MP3 [8], JPEG [9], and JPEG2000 [10] directly exploit this sparsity. 1.1 Distributed source coding While the theory and practice of compression have been well developed for individual signals, many applications involv...

177 | Signal reconstruction from noisy random projections
- Haupt, Nowak
- 2006
Citation Context: ...ry at present is designed mainly to exploit intra-signal structures at a single sensor. To the best of our knowledge, the only work to date that applies CS in a multisensor setting is Haupt and Nowak [38] (see Section 2.2.6). However, while their scheme exploits inter-signal correlations, it ignores intra-signal correlations. 1.3 Distributed compressed sensing (DCS) In this paper we introduce a new th...

160 | Space-frequency quantization for wavelet image coding - Xiong, Ramchandran, et al. - 1997

155 | Distributed source coding for sensor networks
- Xiong, Liveris, et al.
- 2004
Citation Context: ...as the distinct advantage that the sensors need not collaborate while encoding their measurements, which saves valuable communication overhead. Unfortunately, however, most existing coding algorithms [15, 16] exploit only inter-signal correlations and not intra-signal correlations. To date there has been only limited progress on distributed coding of so-called “sources with memory.” (We briefly mention so...

151 | Signal recovery from partial information via orthogonal matching pursuit
- Tropp, Gilbert
- 2005
Citation Context: ...e of slightly more measurements, iterative greedy algorithms have also been developed to recover the signal x from the measurements y. Examples include the iterative Orthogonal Matching Pursuit (OMP) [30], matching pursuit (MP), and tree matching pursuit (TMP) [35, 36] algorithms. OMP, for example, iteratively selects the vectors from the holographic basis ΦΨ that contain most of the energy of the mea...
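The greedy selection the excerpt attributes to OMP can be sketched as follows: repeatedly pick the column of the measurement matrix most correlated with the current residual, then project y onto the span of everything selected so far. Sizes are hypothetical and this is a minimal textbook-style implementation, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, M = 256, 5, 64  # hypothetical sizes

x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)  # unit-norm columns for fair correlation tests
y = Phi @ x

# OMP: greedy column selection followed by least-squares re-fit.
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ coef

x_hat = np.zeros(N)
x_hat[idx] = coef
```

The least-squares re-fit after each selection (the "orthogonal" in OMP) keeps the residual orthogonal to all chosen columns, so no index is picked twice.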

145 | Sparse solutions to linear inverse problems with multiple measurement vectors
- Rao, Kreutz-Delgado
- 1998
Citation Context: ...rocessing algorithms. Another useful application for JSM-2 is MIMO communication [34]. Similar signal models have been considered by different authors in the area of simultaneous sparse approximation [34, 52, 53]. In this setting, a collection of sparse signals share the same expansion vectors from a redundant dictionary. The sparse approximation can be recovered via greedy algorithms such as Simultaneous Ort...

141 |
PRISM: A new robust video coding architecture based on distributed compression principles

- Puri, Ramchandran
- 2002
Citation Context: ...he advantage of moving the bulk of the computational complexity to the video decoder. Puri and Ramchandran have proposed a similar scheme based on Wyner-Ziv distributed encoding in their PRISM system [54]. In general, JSM-3 may be invoked for ensembles with significant inter-signal correlations but insignificant intra-signal correlations. 3.4 Refinements and extensions Each of the JSMs proposes a basi...

126 | Quantitative robust uncertainty principles and optimally sparse decompositions - Candès, Romberg

106 | On network correlated data gathering - Cristescu, Beferull-Lozano, et al.

97 | Modelling data-centric routing in wireless sensor networks - Krishnamachari, Estrin, et al. - 2002

94 |
A proof of the data compression theorem of Slepian and Wolf for ergodic sources
- Cover
- 1975
Citation Context: ...buted coding of so-called “sources with memory.” (We briefly mention some limitations here and elaborate in Section 2.1.3.) The direct implementation for such sources would require huge lookup tables [13, 25]. Furthermore, approaches combining pre- or post-processing of the data to remove intra-signal correlations combined with Slepian-Wolf coding for the inter-signal correlations appear to have limited a...

84 |
Coding Theorems of Information Theory
- Wolfowitz
- 1962
Citation Context: ... per-symbol rate that enables lossless compression. Various techniques such as arithmetic coding [13] can be used to compress near the entropy rate. 2.1.2 Distributed source coding Information theory [13, 17] has also provided tools that characterize the performance of distributed source coding. For correlated length-N sequences x1 and x2 generated by sources X1 and X2 over discrete alphabets X1 and X2, w...

83 | An evaluation of multi-resolution storage for sensor networks
- Ganesan, Greenstein, et al.
- 2003
Citation Context: ...d on predictive coding [18–20], a distributed KLT [21], and distributed wavelet transforms [22, 23]. Three-dimensional wavelets have been proposed to exploit both inter- and intra-signal correlations [24]. Note, however, that any collaboration involves some amount of inter-sensor communication overhead. In the Slepian-Wolf framework for lossless distributed coding [13–17], the availability of correlat...

83 | Recovery of exact sparse representations in the presence of noise, INRIA
- Fuchs
- 2004
Citation Context: ...istortion consequences in the DCS setting are topics for future work, there has been work in the single-signal CS literature that we should be able to leverage, including Basis Pursuit with Denoising [28, 45, 51, 58], robust iterative reconstruction algorithms [38], CS noise sensitivity analysis [27], and the Dantzig Selector [39]. Fast algorithms: In some applications, the linear program associated with some DCS...

80 |
Error correction via linear programming
- Candès, Tao
- 2005
Citation Context: ...ving this ℓ0 optimization problem is prohibitively complex, requiring a combinatorial enumeration of the (N choose K) possible sparse subspaces. In fact, the ℓ0-recovery problem is known to be NP-complete [31]. Yet another challenge is robustness; in the setting of Theorem 2, the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness) can be ...

73 | Recovery algorithms for vector valued data with joint sparsity constraints
- Fornasier, Rauhut
Citation Context: ...k [47] and to the continuous-time setting [48]. Since the original submission of this paper, additional work has focused on the analysis and proposal of recovery algorithms for jointly sparse signals [49, 50]. 3 Joint Sparsity Signal Models In this section, we generalize the notion of a signal being sparse in some basis to the notion of an ensemble of signals being jointly sparse. 3.1 Notation We will use...

70 | The distributed Karhunen-Loève transform
- Gastpar, Dragotti, et al.
- 2002
Citation Context: ...on point [13–17]. A number of distributed coding algorithms have been developed that involve collaboration amongst the sensors, including several based on predictive coding [18–20], a distributed KLT [21], and distributed wavelet transforms [22, 23]. Three-dimensional wavelets have been proposed to exploit both inter- and intra-signal correlations [24]. Note, however, that any collaboration involves s...

65 | Decentralized Compression and Predistribution via Randomized Gossiping - Rabbat, Haupt, et al. - 2006

63 | Subspace pursuit for compressed sensing: Closing the gap between performance and complexity
- Dai, Milenkovic
Citation Context: ...ents with high probability, at the expense of slightly more measurements [26, 43]. Algorithms inspired by OMP, such as regularized orthogonal matching pursuit [44], CoSaMP [45], and Subspace Pursuit [46] have been shown to attain similar guarantees to those of their optimization-based counterparts. In the following, we will exploit both Basis Pursuit and greedy algorithms for recovering jointly spars...

62 | Reduce and boost: Recovering arbitrary sets of jointly sparse vectors
- Mishali, Eldar
- 2008
Citation Context: ...structure and that exploit both inter- and intra-signal correlations. Recent work has adapted DCS to the finite rate of innovation signal acquisition framework [47] and to the continuous-time setting [48]. Since the original submission of this paper, additional work has focused on the analysis and proposal of recovery algorithms for jointly sparse signals [49, 50]. 3 Joint Sparsity Signal Models In th...

53 | Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms
- Gribonval, Rauhut, et al.
- 2008
Citation Context: ...k [47] and to the continuous-time setting [48]. Since the original submission of this paper, additional work has focused on the analysis and proposal of recovery algorithms for jointly sparse signals [49, 50]. 3 Joint Sparsity Signal Models In this section, we generalize the notion of a signal being sparse in some basis to the notion of an ensemble of signals being jointly sparse. 3.1 Notation We will use...

51 |
High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
- Donoho
- 2006
Citation Context: ...early the probability increases with the number of measurements M = cK. Moreover, the curves become closer to a step function as N grows. In an illuminating series of recent papers, Donoho and Tanner [32, 33] have characterized the oversampling factor c(S) precisely. With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust to measurement noise and quantization error [27]. In...

47 | Distributed compressed sensing of jointly sparse signals
- Duarte, Sarvotham, et al.
- 2005
Citation Context: ...and AFOSR. Preliminary versions of this work have appeared at the 43rd Allerton Conference on Communication, Control, and Computing [1], the 39th Asilomar Conference on Signals, Systems and Computers [2], and the 19th Conference on Neural Information Processing Systems [3]. Email: {drorb, wakin, duarte, shri, richb}@rice.edu; Web: dsp.rice.edu/cs ...have a sparse representation in terms of some basis, m...

43 |
Fast reconstruction of piecewise smooth signals from incoherent projections. SPARS05
- Duarte, Wakin, et al.
- 2005
Citation Context: ...have also been developed to recover the signal x from the measurements y. Examples include the iterative Orthogonal Matching Pursuit (OMP) [30], matching pursuit (MP), and tree matching pursuit (TMP) [35, 36] algorithms. OMP, for example, iteratively selects the vectors from the holographic basis ΦΨ that contain most of the energy of the measurement vector y. The selection at each iteration is made based ...

39 | Universal lossless source coding with the Burrows Wheeler transform
- Effros, Visweswariah, et al.
- 2002
Citation Context: ...l symbols and thus can be viewed as the analogue of the Karhunen-Loève transform for sequences over finite alphabets. The BWT handles temporal correlation efficiently in single-source lossless coding [41, 42]. For distributed coding, the BWT could be proposed to remove temporal correlations by pre-processing the sequences prior to Slepian-Wolf coding. Unfortunately, the BWT is input-dependent, and hence t...

38 |
Neighborliness of randomly projected simplices in high dimensions
- Donoho, Tanner
- 2005
Citation Context: ...early the probability increases with the number of measurements M = cK. Moreover, the curves become closer to a step function as N grows. In an illuminating series of recent papers, Donoho and Tanner [32, 33] have characterized the oversampling factor c(S) precisely. With appropriate oversampling, reconstruction via Basis Pursuit is also provably robust to measurement noise and quantization error [27]. In...

37 | Counting faces of randomly projected polytopes when the projection radically lowers dimension - Donoho, Tanner - 2009 |