## An Information-Theoretic Approach to Distributed Compressed Sensing (2005)

Venue: Proc. 43rd Allerton Conf. Communication, Control, and Computing

Citations: 17 (6 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Baron05aninformation-theoretic,
  author    = {Dror Baron and Marco F. Duarte and Shriram Sarvotham and Michael B. Wakin and Richard G. Baraniuk},
  title     = {An Information-Theoretic Approach to Distributed Compressed Sensing},
  booktitle = {Proc. 43rd Allerton Conf. Communication, Control, and Computing},
  year      = {2005}
}
```

### Abstract

Compressed sensing is an emerging field based on the revelation that a small group of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a concept that we term the joint sparsity of a signal ensemble. We study a model for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
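The abstract's core claim — that a sparse signal can be recovered from a small number of linear projections — can be illustrated with a minimal single-signal sketch using Basis Pursuit (ℓ1 minimization recast as a linear program). All sizes and the Gaussian measurement matrix below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sizes (not from the paper): length-64 signal with 3 nonzeros,
# recovered from 24 random Gaussian measurements.
rng = np.random.default_rng(0)
N, K, M = 64, 3, 24
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # incoherent measurement matrix
y = Phi @ x                                      # the M linear projections

# Basis Pursuit: min ||x||_1 s.t. Phi x = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
```

With M comfortably above the oversampling threshold discussed in the paper, `x_hat` matches `x` with high probability over the random draw.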

### Citations

8563 | Elements of Information Theory
- Cover, Thomas
- 1991

Citation Context: ...tra-signal correlations, and there has been only limited progress on distributed coding of so-called “sources with memory.” The direct implementation for such sources would require huge lookup tables [3], and approaches combining pre- or post-processing of the data to remove intra-signal correlations combined with Slepian-Wolf coding for the inter-signal correlations appear to have limited applicabil...

2026 | A Wavelet Tour of Signal Processing
- Mallat
- 1999

Citation Context: ...s employ a decorrelating transform such as an exact or approximate Karhunen-Loève transform (KLT) to compact a correlated signal’s energy into just a few essential coefficients. Such transform coders [1] exploit the fact that many signals have a sparse representation in terms of some basis, meaning that a small number K of adaptively chosen transform coefficients can be transmitted or stored rather t...

1715 | Compressed sensing
- Donoho
- 2006
Citation Context: ...ging problem with many potential applications. 1.2 Compressed sensing (CS) A new framework for single-signal sensing and compression has developed recently under the rubric of Compressed Sensing (CS) [10,11]. CS builds on the surprising revelation that a signal having a sparse representation in one basis can be recovered from a small number of projections onto a second basis that is incoherent with the f...

1651 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 1999

1297 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...ging problem with many potential applications. 1.2 Compressed sensing (CS) A new framework for single-signal sensing and compression has developed recently under the rubric of Compressed Sensing (CS) [10,11]. CS builds on the surprising revelation that a signal having a sparse representation in one basis can be recovered from a small number of projections onto a second basis that is incoherent with the f...

867 | Noiseless coding of correlated information sources
- Slepian, Wolf

Citation Context: ...aightforward. The measurement rate can be computed by considering both common and different parts of Φ1 and Φ2. Our measurement rate bounds are strikingly similar to those in the Slepian-Wolf theorem [4], where each signal must be encoded above its conditional entropy rate, and the ensemble must be coded above the joint entropy rate. Yet despite these advantages, the achievable measurement rate regio...

832 | Near optimal signal recovery from random projections: Universal encoding strategies
- Candès, Tao
- 2006
Citation Context: ...versal in the sense that the sensor can apply the same measurement mechanism no matter what basis the signal is sparse in (and thus the coding algorithm is independent of the sparsity-inducing basis) [11,12]. A variety of algorithms have been proposed for signal recovery [10,11,14–16], each requiring a slightly different constant c (see Section 2.2). While powerful, the CS theory at present is designed m...

424 | The Dantzig selector: statistical estimation when p is much larger than n
- Candès, Tao
Citation Context: ...led as ℓp sparse with 0 < p ≤ 1. Quantized and noisy measurements: Our (random) measurements will be real numbers; quantization will gradually degrade the reconstruction quality as it becomes coarser [20]. Moreover, noise will often corrupt the measurements, making them not strictly sparse in any basis. Fast algorithms: In some applications, linear programming could prove too computationally intense. ...

310 | Distributed source coding using syndromes (DISCUS): design and construction
- Pradhan, Ramchandran
Citation Context: ...ee approach in which each sensor node could communicate losslessly at its conditional entropy rate, rather than at its individual entropy rate. Unfortunately, however, most existing coding algorithms [5,6] exploit only inter-signal correlations and not intra-signal correlations, and there has been only limited progress on distributed coding of so-called “sources with memory.” The direct implementation ...

266 | Connecting the Physical World with Pervasive Networks
- Estrin, Culler, et al.
- 2002

Citation Context: ...le signals, for which there has been less progress. As a motivating example, consider a sensor network, in which a number of distributed nodes acquire data and report it to a central collection point [2]. In such networks, communication energy and bandwidth are often scarce resources, making the reduction of communication critical. Fortunately, since the sensors presumably observe related phenomena, ...

168 | Signal reconstruction from noisy random projections
- Haupt, Nowak
Citation Context: ...y at present is designed mainly to exploit intra-signal structures at a single sensor. To the best of our knowledge, the only work to date that applies CS in a multi-sensor setting is Haupt and Nowak [17]. However, while their scheme exploits inter-signal correlations, it ignores intra-signal correlations. 1 Roughly speaking, incoherence means that no element of one basis has a sparse representation i...

150 | Distributed source coding for sensor networks
- Xiong, Liveris, et al.

Citation Context: ...ee approach in which each sensor node could communicate losslessly at its conditional entropy rate, rather than at its individual entropy rate. Unfortunately, however, most existing coding algorithms [5,6] exploit only inter-signal correlations and not intra-signal correlations, and there has been only limited progress on distributed coding of so-called “sources with memory.” The direct implementation ...

147 | Signal recovery from partial information via orthogonal matching pursuit
- Tropp, Gilbert
Citation Context: ... K and signal length N. With appropriate oversampling, reconstruction via Basis Pursuit is robust to measurement noise and quantization error [10]. Iterative greedy algorithms have also been proposed [13], allowing even faster reconstruction at the expense of more measurements. 3 Joint Sparsity Model and Recovery Strategies In the first part of this section, we generalize the notion of a signal being ...
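The iterative greedy recovery mentioned in this context can be sketched as a bare-bones Orthogonal Matching Pursuit in the spirit of [13]. The signal sizes and the random Gaussian matrix are illustrative assumptions, not values from the paper:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily select K columns of Phi to explain y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit y on all selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Illustrative demo: length-128 signal, 3 nonzeros, 64 measurements.
rng = np.random.default_rng(0)
N, K, M = 128, 3, 64
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = omp(Phi, Phi @ x, K)
```

Each iteration costs one matrix-vector product and one small least-squares solve, which is the speed advantage over linear programming that the context alludes to, traded against needing somewhat more measurements.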

74 |
Error correction via linear programming
- Candès, Rudelson, et al.
- 2005
Citation Context: ...l suffice [18]. Unfortunately, solving this ℓ0 optimization problem is prohibitively complex, requiring a combinatorial enumeration of the (N choose K) possible sparse subspaces; in fact it is NP-complete [14]. Yet another challenge is robustness; with little more than K measurements, the recovery may be very poorly conditioned. In fact, both of these considerations (computational complexity and robustness...

65 | The distributed karhunen loève transform
- Gastpar, Dragotti, et al.
Citation Context: ... both types of correlation might allow a substantial savings on communication costs [3–6]. A number of distributed coding algorithms have been developed that involve collaboration amongst the sensors [7,8]. Any collaboration, however, involves some amount of inter-sensor communication overhead. The Slepian-Wolf framework for lossless distributed coding [3–6] offers a collaboration-free approach in whic...

57 | Neighborlyness of randomly-projected simplices in high dimensions
- Donoho, Tanner
- 2005
Citation Context: ...)+ǫ)K random projections, ǫ > 0, converges to 1 as N → ∞. In contrast, the probability of recovering x via Basis Pursuit from (c(S) − ǫ)K random projections converges to 0 as N → ∞. Donoho and Tanner [15,16] have characterized this oversampling factor c(S) precisely; we have discovered a useful rule of thumb that c(S) ≈ log₂(1 + S⁻¹). In the remainder of the paper, we often use the abbreviated notation ...
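As a quick numeric illustration of the rule of thumb quoted in this context (the sparsity rates S = K/N below are arbitrary examples, not values from the paper):

```python
import numpy as np

# c(S) ≈ log2(1 + 1/S) is the per-coefficient oversampling factor,
# i.e., roughly M ≈ c(S) * K measurements for a signal with sparsity rate S = K/N.
for S in (0.01, 0.05, 0.10):
    print(f"S = {S:.2f}  ->  c(S) ≈ {np.log2(1 + 1 / S):.2f}")
# S = 0.01  ->  c(S) ≈ 6.66
# S = 0.05  ->  c(S) ≈ 4.39
# S = 0.10  ->  c(S) ≈ 3.46
```

Sparser signals (smaller S) need proportionally more measurements per nonzero coefficient, but far fewer measurements overall.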

50 | High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
- Donoho

Citation Context: ...)+ǫ)K random projections, ǫ > 0, converges to 1 as N → ∞. In contrast, the probability of recovering x via Basis Pursuit from (c(S) − ǫ)K random projections converges to 0 as N → ∞. Donoho and Tanner [15,16] have characterized this oversampling factor c(S) precisely; we have discovered a useful rule of thumb that c(S) ≈ log₂(1 + S⁻¹). In the remainder of the paper, we often use the abbreviated notation ...

8 | Routing Explicit Side Information for Data Compression
- Luo, Pottie
- 2005

Citation Context: ... both types of correlation might allow a substantial savings on communication costs [3–6]. A number of distributed coding algorithms have been developed that involve collaboration amongst the sensors [7,8]. Any collaboration, however, involves some amount of inter-sensor communication overhead. The Slepian-Wolf framework for lossless distributed coding [3–6] offers a collaboration-free approach in whic...

2 | Universal coding for correlated sources with memory
- Uyematsu
- 2001
Citation Context: ...rocessing of the data to remove intra-signal correlations combined with Slepian-Wolf coding for the inter-signal correlations appear to have limited applicability. Finally, a recent paper by Uyematsu [9] provides compression of spatially correlated sources with memory, but the solution is specific to lossless distributed compression and cannot be readily extended to lossy settings. We conclude that t...

1 | “Distributed compressed sensing,” Tech. Rep., Available at http://www.dsp.rice.edu
- Baron, Wakin, et al.

Citation Context: ... + 1 random measurements will suffice [18]. (The ℓ0 norm ‖θ‖₀ merely counts the number of nonzero entries in the vector θ.) Unfortunately, solving this ℓ0 optimization problem is prohibitively complex, requiring a combinatorial enumeration of the (N choose K) possible sparse subspaces; in fact it is NP-complete [14]. Yet anot...