## Sensing by Random Convolution (2007)

Venue: IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2007

Citations: 66 (4 self)

### BibTeX

```bibtex
@inproceedings{Romberg07sensingby,
  author    = {Justin Romberg},
  title     = {Sensing by Random Convolution},
  booktitle = {IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)},
  year      = {2007},
  pages     = {137--140}
}
```

### Abstract

This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time-domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy, in that an n-dimensional signal that is S-sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios, radar and Fourier optics, where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, recovering high-resolution signals from low-resolution measurements.

**1. Introduction.** The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements, however, are different from the samples that …
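The sensing model described in the abstract (convolve the signal with a random pulse, keep a random subset of the outputs, then recover by ℓ1 minimization) can be sketched numerically. This is a minimal illustration under simplifying assumptions, not the paper's exact construction: it uses a random ±1 (Rademacher) pulse rather than the paper's random-phase pulse, takes the sparsity basis to be the identity, and solves basis pursuit as a small linear program with SciPy. The dimensions `n`, `S`, `m` are arbitrary choices for the demo.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, S, m = 128, 4, 60  # signal length, sparsity, number of measurements (m >~ S log n)

# Random pulse h: a +/-1 (Rademacher) sequence is a simplifying stand-in
# for the paper's unit-magnitude random-phase pulse.
h = rng.choice([-1.0, 1.0], size=n)

# Circulant matrix whose row k is h circularly shifted by k; applying it
# computes a circular correlation with h, which plays the same role as
# convolution for this illustration.
H = np.stack([np.roll(h, k) for k in range(n)])

# Random time-domain subsampling: keep m of the n outputs.
keep = rng.choice(n, size=m, replace=False)
Phi = H[keep, :]

# An S-sparse signal (sparse in the identity basis, so Psi = I here).
x0 = np.zeros(n)
support = rng.choice(n, size=S, replace=False)
x0[support] = rng.standard_normal(S)
y = Phi @ x0

# Basis pursuit, min ||a||_1 s.t. Phi a = y, posed as a linear program
# via the standard split a = u - v with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(x_hat - x0)))
```

With m well above S log n, the ℓ1 solution typically coincides with the sparse signal exactly (up to LP solver tolerance); shrinking m toward S shows the recovery break down.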

### Citations

1745 | Compressed sensing
- Donoho
- 2006

1672 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1998

1494 | Probability inequalities for sums of bounded random variables
- Hoeffding
- 1963

1318 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.

1052 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993

840 | Near optimal signal recovery from random projections: Universal encoding strategies
- Candès, Tao
- 2006

350 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp

325 | The Concentration of Measure Phenomenon
- Ledoux
- 2001

322 | The restricted isometry property and its implications for compressed sensing
- Candès
- 2008

302 | A simple proof of the restricted isometry property for random matrices
- Baraniuk, Davenport, et al.
- 2008

218 | Sampling signals with finite rate of innovation
- Vetterli, Marziliano, et al.
- 2002

174 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit
- Donoho, Tsaig, et al.
- 2007

147 | Single-pixel imaging via compressive sampling
- Duarte, Davenport, et al.
- 2008

147 | Signal recovery from partial information via Orthogonal Matching Pursuit
- Tropp, Gilbert

139 | Analog-to-digital converter survey and analysis
- Walden
- 1999

119 | On sparse reconstruction from Fourier and Gaussian measurements
- Rudelson, Vershynin
- 2008

111 | New concentration inequalities in product spaces
- Talagrand

102 | Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- Ailon, Chazelle
- 2006

80 | Recovery of exact sparse representations in the presence of bounded noise
- Fuchs
- 2005

69 | Beyond Nyquist: Efficient sampling of sparse bandlimited signals
- Tropp, Laska, et al.
- 2010

63 | Random filters for compressive sampling and reconstruction
- Tropp, Wakin, et al.
- 2006

59 | Toeplitz-structured compressed sensing matrices
- Haupt, Raz, et al.

59 | High-resolution radar via compressed sensing
- Herman, Strohmer

47 | A tomographic formulation of spotlight-mode synthetic aperture radar
- Munson, O’Brien, et al.
- 1983

46 | Compressive radar imaging
- Baraniuk, Steeghs

44 | Toeplitz compressed sensing matrices with applications to sparse channel estimation
- Haupt, Bajwa, et al.
- 2010

37 | Analog-to-information conversion via random demodulation
- Kirolos, Laska, et al.
- 2006

35 | Reconstruction and subgaussian operators in asymptotic geometric analysis
- Mendelson, Pajor, et al.
- 2006

33 | Sparsity and Incoherence
- Candès, Romberg
- 2007

33 | Concentration inequalities
- Boucheron, Bousquet, et al.
- 2004

32 | Circulant and Toeplitz matrices in compressed sensing
- Rauhut
- 2009

32 | Recovery of short, complex linear combinations via ℓ1 minimization
- Tropp
- 2004

25 | Compressive coded aperture superresolution image reconstruction
- Marcia, Willett
- 2008

25 | Identification of matrices having a sparse representation
- Pfander, Rauhut, et al.

21 | The Theory of Probabilities
- Bernstein
- 1946

18 | Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.

14 | Compressive coded aperture imaging
- Marcia, Harmany, et al.
- 2009

12 | Decoupling: From Dependence to Independence
- de la Peña, Giné
- 1999

12 | Analog-to-digital converters
- Le, Rondeau, et al.

10 | Probability in Banach Spaces, vol. 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3)
- Ledoux, Talagrand
- 1991

9 | Norms of random submatrices and sparse approximation
- Tropp
- 2008

3 | Recovery of short, complex linear combinations via ℓ1-minimization
- Tropp
- 2005

3 | Iterative hard thresholding for compressed sensing
- Blumensath, Davies
- 2008

1 | Analysis of random step frequency radar and comparison with experiments
- Axelsson

1 | Random noise radar/sodar with ultrawideband waveforms

1 | Channel estimation and synchronization with sub-Nyquist sampling and application to ultra-wideband systems
- Maravic, Vetterli, et al.
- 2004