Near-Oracle Performance of Greedy Block-Sparse Estimation Techniques from Noisy Measurements (2010)
Citations: 17 (2 self)
Citations
3581 | Compressed sensing
- Donoho
- 2006
Citation Context ...as been devoted to the sparse representation model, which stems from the observation that many signals can be approximated using a small number of elements, or “atoms,” chosen from a large dictionary [1], [2]. Thus, we may write the signal as a linear combination of a small number of columns of the dictionary matrix, corrupted by noise. Since only a small number of elements of the coefficient vector are required ... |
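As a concrete illustration of this model, here is a minimal NumPy sketch that synthesizes a block-sparse signal and its noisy measurements. All symbols (D, x, y) and dimensions are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, N, d, k = 64, 128, 4, 2            # measurements, atoms, block length, active blocks
D = rng.standard_normal((n, N))
D /= np.linalg.norm(D, axis=0)        # normalize atoms to unit norm

x = np.zeros(N)
for b in rng.choice(N // d, size=k, replace=False):
    x[b * d:(b + 1) * d] = rng.standard_normal(d)   # k active blocks

sigma = 0.05
y = D @ x + sigma * rng.standard_normal(n)          # noisy block-sparse measurements
```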
1566 | Fundamentals of Statistical Signal Processing: Estimation Theory
- Kay
- 1993
Citation Context ...in assessing the quality of an estimator is to check its proximity to the best possible performance in the given setting. To this end, it is common practice to compute the CRB for unbiased estimators [31], i.e., those techniques for which the bias equals zero. The CRB is a lower bound on the mean-squared error for any unbiased estimator. The remaining terms in (23) are always no worse than the corr... |
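To make the oracle benchmark concrete: for the linear Gaussian model with known support, the least-squares "oracle" estimator is unbiased and its MSE equals the textbook CRB for that reduced model. A minimal sketch follows (names are illustrative; this is not the paper's bound (23) or (25)):

```python
import numpy as np

def oracle_mse_bound(D, support, sigma):
    """sigma^2 * tr((D_S^T D_S)^{-1}): MSE of the least-squares oracle that
    knows the true support S, equal to the CRB for the linear Gaussian model
    restricted to that support."""
    D_S = D[:, support]
    return sigma**2 * np.trace(np.linalg.inv(D_S.T @ D_S))
```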
1540 | Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards Applied Mathematics Series, vol. 55 - Abramowitz, Stegun - 1964 |
1375 | Stable signal recovery from incomplete and inaccurate measurements
- Candes, Romberg, et al.
Citation Context ...en devoted to the sparse representation model, which stems from the observation that many signals can be approximated using a small number of elements, or “atoms,” chosen from a large dictionary [1], [2]. Thus, we may write the signal as a linear combination of a small number of columns of the dictionary matrix, corrupted by noise. Since only a small number of elements of the coefficient vector are required for t... |
1142 | Model selection and estimation in regression with grouped variables
- Yuan, Lin
- 2006
Citation Context ...model. Apart from ordinary sparsity, unions of subspaces have been applied to estimate signals as diverse as pulse streams [9]–[12], multi-band communications [13]–[15], and block sparse vectors [8], [16]–[19], the latter being the focus of this paper. The common thread running through these applications is the ability to exploit the union of subspaces structure in order to achieve accurate reconstruc... |
861 | The Dantzig selector: Statistical estimation when p is much larger than n
- Candes, Tao
- 2007
Citation Context ...re relatively weak, ensuring only that the estimation error is on the order of the noise bound [2]–[4]. By contrast, when the noise is random, estimation performance is considerably improved for most noise realizations [3], [21]–[23]. Even better performance can be obtained in the Bayesian case, in which the parameter itself is random with a known distribution [24]–[26]. This paper contributes to the ongoing effort to extend such performa... |
694 | Model-based compressive sensing - Baraniuk, Cevher, et al. |
631 | Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition
- Pati, Rezaiifar, et al.
- 1993
Citation Context ...given by (16). b) Block Orthogonal Matching Pursuit (BOMP): The BOMP algorithm, based on the OMP algorithm [30], was independently proposed in [20] and [18]. Given a measurement vector, it proceeds iteratively: at each step, the dictionary block most strongly correlated with the current residual is selected and added to the support (17), after which the residual is updated via a least-squares fit over the selected blocks (18). The estimate is... |
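To make the recovered description concrete, here is a minimal NumPy sketch of a BOMP-style iteration, assuming blocks of d consecutive columns of equal length. Variable names and structure are our own illustration, not the paper's equations (17)-(18).

```python
import numpy as np

def bomp(y, D, d, k):
    """Block-OMP sketch: greedily pick the block with the largest correlation
    (in l2 norm) with the residual, then refit by least squares over all
    blocks chosen so far."""
    n_blocks = D.shape[1] // d
    support, r = [], y.copy()
    for _ in range(k):
        scores = [np.linalg.norm(D[:, i * d:(i + 1) * d].T @ r)
                  for i in range(n_blocks)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(i * d, (i + 1) * d) for i in support])
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        r = y - D[:, cols] @ coef                 # update the residual
    x_hat = np.zeros(D.shape[1])
    x_hat[cols] = coef                            # nonzeros only on chosen blocks
    return x_hat
```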
595 | Image denoising via sparse and redundant representations over learned dictionaries - Elad, Aharon |
473 | Just relax: Convex programming methods for identifying sparse signals in noise
- Tropp
- 2006
Citation Context ...in many fundamental fields of signal processing, including compressed sensing [1], [2], denoising [3], deblurring [5], and interpolation [6]. The assumption of sparsity is an example of a much more general class of signal models which can be described as a union of subspaces [7], [8]. Indeed, each su... |
419 | From sparse solutions of systems of equations to sparse modeling of signals and images - Bruckstein, Donoho, et al. - 2009 |
359 | Algorithms for simultaneous sparse approximation—part I: greedy pursuit
- Tropp, Gilbert, et al.
- 2006
Citation Context ...rsion of the CoSaMP approach under adversarial noise [19]. Some research has focused on the related problem in which the same dictionary is used to obtain measurements of a series of distinct signals [29]. The performance of BOMP under Gaussian noise was first addressed by Lozano et al. [20], who provided a bound on the estimation error, i.e., the maximum componentwise error, in the presence of Gaussi... |
338 | Sampling signals with finite rate of innovation
- Vetterli, Marziliano, et al.
Citation Context ...ons of subspaces are proving to be a powerful generalization of the sparsity model. Apart from ordinary sparsity, unions of subspaces have been applied to estimate signals as diverse as pulse streams [9]–[12], multi-band communications [13]–[15], and block sparse vectors [8], [16]–[19], the latter being the focus of this paper. The common thread running through these applications is the ability to ex... |
272 | Consistency of the group lasso and multiple kernel learning
- Bach
- 2008
Citation Context ...ng effort to extend such performance guarantees to the block sparse setting. Such guarantees are already available in the context of Group Lasso in the random [27] and adversarial noise settings [8], [28]. Performance assurances were also derived for a block version of the CoSaMP approach under adversarial noise [19]. Some research has focused on the related problem in which the same dictionary is use... |
207 | Robust recovery of signals from a structured union of subspaces
- Eldar
Citation Context ..., [2], denoising [3], deblurring [5], and interpolation [6]. The assumption of sparsity is an example of a much more general class of signal models which can be described as a union of subspaces [7], [8]. Indeed, each support pattern defines a subspace of the space of possible parameter vectors. Saying that the parameter contains no more than a given number of nonzero entries is equivalent to stating that it belongs to t... |
170 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, Temlyakov
- 2006
Citation Context ...It turns out that the sparsity assumption can be used to accurately estimate the parameter from the measurements, even when the number of possible atoms (and thus the length of the parameter vector) is greater than the number of measurements [2]–[4]. This model has been used to great advantage... |
152 | Block-sparse signals: uncertainty relations and efficient recovery
- Eldar, Kuppinger, et al.
- 2010
Citation Context ...estimators designed for the ordinary sparsity model can be readily adapted to the block sparse setting. Previous work has described techniques such as Group Lasso [16], also known as L-OPT [8], [17], [18], block orthogonal matching pursuit (BOMP) [18], [20], and a block version of the CoSaMP algorithm [19]. In this paper, we also consider a block-sparse version of the thresholding algorithm, which we ... |
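The block-thresholding (BTH) idea mentioned here admits a simple sketch: score each block once against the measurements, retain the k highest-scoring blocks, and fit by least squares. This is a plausible minimal reading, not the paper's verbatim algorithm.

```python
import numpy as np

def bth(y, D, d, k):
    """Block-thresholding sketch: one pass of block correlations, keep the
    k largest, then a single least-squares fit on the retained blocks."""
    n_blocks = D.shape[1] // d
    scores = np.array([np.linalg.norm(D[:, i * d:(i + 1) * d].T @ y)
                       for i in range(n_blocks)])
    chosen = np.argsort(scores)[-k:]              # indices of the k best blocks
    cols = np.concatenate([np.arange(i * d, (i + 1) * d) for i in chosen])
    coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
    x_hat = np.zeros(D.shape[1])
    x_hat[cols] = coef
    return x_hat
```

Unlike BOMP, which re-scores blocks against an updated residual at every iteration, this scheme correlates only once; the trade-off is lower cost for weaker recovery conditions.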
147 | Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix - Dragotti, Vetterli, et al. |
141 | From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals
- Mishali, Eldar
- 2010
Citation Context ...powerful generalization of the sparsity model. Apart from ordinary sparsity, unions of subspaces have been applied to estimate signals as diverse as pulse streams [9]–[12], multi-band communications [13]–[15], and block sparse vectors [8], [16]–[19], the latter being the focus of this paper. The common thread running through these applications is the ability to exploit the union of subspaces structur... |
129 | On the reconstruction of block-sparse signals with an optimal number of measurements
- Stojnic, Parvaresh, et al.
- 2009
Citation Context ...tely, estimators designed for the ordinary sparsity model can be readily adapted to the block sparse setting. Previous work has described techniques such as Group Lasso [16], also known as L-OPT [8], [17], [18], block orthogonal matching pursuit (BOMP) [18], [20], and a block version of the CoSaMP algorithm [19]. In this paper, we also consider a block-sparse version of the thresholding algorithm, whi... |
104 | A theory for sampling signals from a union of subspaces
- Lu, Do
- 2008
Citation Context ...g [1], [2], denoising [3], deblurring [5], and interpolation [6]. The assumption of sparsity is an example of a much more general class of signal models which can be described as a union of subspaces [7], [8]. Indeed, each support pattern defines a subspace of the space of possible parameter vectors. Saying that the parameter contains no more than a given number of nonzero entries is equivalent to stating that it belongs... |
101 | Sampling theorems for signals from the union of finite-dimensional linear subspaces - Blumensath, Davies |
98 | Blind multiband signal reconstruction: Compressed sensing for analog signals - Mishali, Eldar - 2009 |
97 | Average case analysis of multichannel sparse recovery using convex relaxation - Eldar, Rauhut |
92 | The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities
- Anderson
- 1955
Citation Context ...Observe that, for a deterministic parameter value, the pdf defines a Gaussian random vector whose mean depends linearly on the parameter, but whose covariance is constant. Therefore, using a result due to Anderson [39], it follows that the probability in question is a non-increasing function of the magnitude of the mean shift. Next, denoting the marginal pdf, we have (83). Of the two bounds provided in (88), the first is somewhat tighter, but obviously more cumbersome... |
88 | Lower bounds for parametric estimation with constraints
- Gorman, Hero
- 1990
Citation Context ...d estimator. The remaining terms in (23) are always no worse than the corresponding terms in (25). To utilize the information inherent in the block sparsity structure, we apply the constrained CRB [32]–[35] to the present setting. In the constrained estimation scenario, one often seeks estimators which are unbiased for all parameter values in the constraint set [32], [33]. However, as we will see b... |
65 | Xampling: Analog to digital at sub-Nyquist rates
- Mishali, Eldar
Citation Context ...rful generalization of the sparsity model. Apart from ordinary sparsity, unions of subspaces have been applied to estimate signals as diverse as pulse streams [9]–[12], multi-band communications [13]–[15], and block sparse vectors [8], [16]–[19], the latter being the focus of this paper. The common thread running through these applications is the ability to exploit the union of subspaces structure in ... |
64 | On the Cramér–Rao bound under parametric constraints
- Stoica, Ng
- 1998
Citation Context ...apply the constrained CRB [32]–[35] to the present setting. In the constrained estimation scenario, one often seeks estimators which are unbiased for all parameter values in the constraint set [32], [33]. However, as we will see below, this requirement is too strict in the block sparse setting. Indeed, in Theorem 3 we show that it is not possible to construct any method which is unbiased for all feas... |
56 | Inpainting and zooming using sparse representations
- Fadili, Starck, et al.
- 2009
Citation Context ...in many fundamental fields of signal processing, including compressed sensing [1], [2], denoising [3], deblurring [5], and interpolation [6]. The assumption of sparsity is an example of a much more general class of signal models which can be described as a union of subspaces [7], [8]. Indeed, each support pattern defines a subspace of the... |
42 | Multichannel sampling of pulse streams at the rate of innovation
- Gedalyahu, Tur, et al.
- 2011
Citation Context ...of subspaces are proving to be a powerful generalization of the sparsity model. Apart from ordinary sparsity, unions of subspaces have been applied to estimate signals as diverse as pulse streams [9]–[12], multi-band communications [13]–[15], and block sparse vectors [8], [16]–[19], the latter being the focus of this paper. The common thread running through these applications is the ability to exploit... |
40 | Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem - Feingold, Varga - 1962 |
37 | Coherence-based performance guarantees for estimating a sparse vector under random noise
- Ben-Haim, Eldar, et al.
Citation Context ...t factor of the CRB at high SNR, for dictionaries satisfying suitable requirements. Once again, when each block contains one element, we can recover previously known guarantees for non-block sparsity [22] from our results. Comparing our results with those available for ordinary or “scalar” sparsity reveals the conditions under which the block sparse approach is beneficial. Our results are a function o... |
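Guarantees of this kind are typically stated in terms of dictionary coherence measures. Below is a sketch of the standard block-coherence measure used in the block-sparsity literature (cf. [18]), assuming equal block length d; this is offered as an illustration, not as the specific quantity appearing in the paper's theorems.

```python
import numpy as np

def block_coherence(D, d):
    """mu_B = max_{i != j} (1/d) * ||D_i^T D_j||_2 over column blocks of
    width d, where ||.||_2 is the spectral norm; reduces to the ordinary
    coherence for d = 1 and unit-norm atoms."""
    n_blocks = D.shape[1] // d
    mu = 0.0
    for i in range(n_blocks):
        for j in range(i + 1, n_blocks):
            G = D[:, i * d:(i + 1) * d].T @ D[:, j * d:(j + 1) * d]
            mu = max(mu, np.linalg.norm(G, 2) / d)
    return mu
```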
26 | Some theoretical results on the grouped variables Lasso.
- Chesneau, Hebiri
- 2008
Citation Context ...[26]. This paper contributes to the ongoing effort to extend such performance guarantees to the block sparse setting. Such guarantees are already available in the context of Group Lasso in the random [27] and adversarial noise settings [8], [28]. Performance assurances were also derived for a block version of the CoSaMP approach under adversarial noise [19]. Some research has focused on the related pr... |
26 | The Cramér–Rao bound for estimating a sparse parameter vector
- Ben-Haim, Eldar
- 2010
Citation Context ...imator. The remaining terms in (23) are always no worse than the corresponding terms in (25). To utilize the information inherent in the block sparsity structure, we apply the constrained CRB [32]–[35] to the present setting. In the constrained estimation scenario, one often seeks estimators which are unbiased for all parameter values in the constraint set [32], [33]. However, as we will see below,... |
25 | Average performance analysis for thresholding
- Schnass, Vandergheynst
- 2007
Citation Context ...tion performance is considerably improved for most noise realizations [3], [21]–[23]. Even better performance can be obtained in the Bayesian case, in which the parameter itself is random with a known distribution [24]–[26]. This paper contributes to the ongoing effort to extend such performance guarantees to the block sparse setting. Such guarantees are already available in the context of Group Lasso in the random... |
20 | Time delay estimation from low rate samples: A union of subspaces approach - Gedalyahu, Eldar |
20 | Grouped orthogonal matching pursuit for variable selection and prediction
- Lozano, Swirszcz, et al.
Citation Context ...an be readily adapted to the block sparse setting. Previous work has described techniques such as Group Lasso [16], also known as L-OPT [8], [17], [18], block orthogonal matching pursuit (BOMP) [18], [20], and a block version of the CoSaMP algorithm [19]. In this paper, we also consider a block-sparse version of the thresholding algorithm, which we refer to as block-thresholding (BTH). The BOMP and BT... |
20 | RIP-based near-oracle performance guarantees for SP
- Giryes, Elad
Citation Context ...latively weak, ensuring only that the estimation error is on the order of the noise bound [2]–[4]. By contrast, when the noise is random, estimation performance is considerably improved for most noise realizations [3], [21]–[23]. Even better performance can be obtained in the Bayesian case, in which the parameter itself is random with a known distribution [24]–[26]. This paper contributes to the ongoing effort to extend such performance g... |
17 | Blind multi-band signal reconstruction: Compressed sensing for analog signals - Mishali, Eldar - 2007 |
14 | Blind deconvolution of images using optimal sparse representations - Bronstein, Bronstein, et al. - 2005 |
13 | On the constrained Cramér–Rao bound with a singular Fisher information matrix - Ben-Haim, Eldar - 2009 |
13 | Rectangular confidence regions for the means of multivariate normal distributions
- Šidák
- 1967
Citation Context ...heorem. APPENDIX C: PROOFS FOR GAUSSIAN NOISE. We begin with two lemmas which prove some useful properties of the Gaussian distribution. The first of these is a generalization of a result due to Šidák [38]. Lemma 3: Consider a set of jointly Gaussian random vectors. Suppose that each vector has zero mean, but that the covariances of the vectors are unspecified and that the vectors are not necessarily independent. We then... |
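For reference, the scalar form of Šidák's inequality that Lemma 3 generalizes reads as follows (a standard statement of the 1967 result, not quoted from the paper):

```latex
% Šidák's inequality: for jointly Gaussian, zero-mean X_1, ..., X_n
% (arbitrary covariance) and any thresholds c_1, ..., c_n > 0,
P\bigl(|X_1| \le c_1, \ldots, |X_n| \le c_n\bigr)
  \;\ge\; \prod_{i=1}^{n} P\bigl(|X_i| \le c_i\bigr).
```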
3 | Coherence-based near-oracle performance guarantees for sparse estimation under Gaussian noise - Ben-Haim, Eldar, et al. - 2010 |
2 | Near-ideal model selection by ℓ1 minimization
- Candès, Plan
- 2009
Citation Context ...performance is considerably improved for most noise realizations [3], [21]–[23]. Even better performance can be obtained in the Bayesian case, in which the parameter itself is random with a known distribution [24]–[26]. This paper contributes to the ongoing effort to extend such performance guarantees to the block sparse setting. Such guarantees are already available in the context of Group Lasso in the random [27]... |
1 | The Cramér–Rao bound for estimating a sparse parameter vector - Ben-Haim, Eldar - 2010 |