Results 1–10 of 16
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
IEEE J. Sel. Topics Signal Process., 2011
Cited by 59 (15 self)
Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when the elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation, and thus their performance degrades significantly as the correlation increases. In this work, we propose a block sparse Bayesian learning framework that models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms with superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also analyze the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
Compressive MUSIC: revisiting the link between compressive sensing and array signal processing
IEEE Trans. on Information Theory, 2012
Cited by 23 (4 self)
Abstract — The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate the sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, and often shows inferior performance in the regime where traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid-1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large-system MMV model, we show that compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than existing CS methods, and that it can approach the optimal bound with a finite number of snapshots even when the signals are linearly dependent. Index Terms — Compressive sensing, multiple measurement vector problem, joint sparsity, MUSIC, SOMP, thresholding.
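The two-stage idea this abstract describes (a CS step that finds part of the support, then a generalized-MUSIC residual test for the rest) can be sketched in a few lines of numpy. This is an editor's illustrative sketch, not the paper's exact algorithm: the dimensions, the noiseless model, and the choice of SOMP for the CS stage are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K, r, L = 30, 50, 4, 2, 10   # sensors, atoms, sparsity, rank, snapshots

A = rng.standard_normal((m, n)) / np.sqrt(m)
support = [3, 11, 27, 44]
X = np.zeros((n, L))
X[support] = rng.standard_normal((K, r)) @ rng.standard_normal((r, L))  # rank-r rows
Y = A @ X                                       # noiseless MMV observations

# Stage 1: recover K - r support indices with SOMP (greedy MMV solver).
S1, R = [], Y.copy()
for _ in range(K - r):
    j = int(np.argmax(np.linalg.norm(A.T @ R, axis=1)))
    S1.append(j)
    B = A[:, S1]
    R = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]

# Stage 2: generalized-MUSIC-style test. In the noiseless case, an atom a_j
# lies in span([signal subspace of Y, A_{S1}]) exactly when j is in the support.
U = np.linalg.svd(Y)[0][:, :r]                  # r-dimensional signal subspace
Q = np.linalg.qr(np.hstack([U, A[:, S1]]))[0]   # orthonormal basis of the span
resid = np.linalg.norm(A - Q @ (Q.T @ A), axis=0)  # per-atom projection residual
rest = [j for j in np.argsort(resid) if j not in S1][:r]

est = sorted(int(j) for j in S1 + rest)
print(est)
```

The point of the hybrid is visible in the residuals: the stage-2 test is deterministic once the signal subspace is known, so only K − r of the K indices need to be found probabilistically by the CS step.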
Sparse signal processing with linear and nonlinear observations: A unified Shannon-theoretic approach
 arXiv preprint arXiv:1304.0682, 2013
Cited by 11 (4 self)
In this work we derive fundamental limits for many linear and nonlinear sparse signal processing models, including linear and quantized compressive sensing, group testing, multivariate regression, and problems with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of N variables X1, X2, ..., XN, and there is an unknown subset of variables S ⊂ {1, 2, ..., N} that are relevant for predicting the outcomes/outputs Y. In other words, when Y is conditioned on {Xn}n∈S it is conditionally independent of the other variables, {Xn}n∉S. Our goal is to identify the set S from samples of the variables X and the associated outcomes Y. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information-theoretic analyses, we establish mutual information formulas that provide necessary and sufficient conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify the conditions for both linear and nonlinear observations. We then compute sample complexity bounds for the aforementioned models, based on the mutual information expressions, in order to demonstrate the applicability and flexibility of our results to general sparse signal processing models.
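The salient-variable setup the abstract defines can be made concrete with a toy linear instance: Y depends only on the variables indexed by S, and a decoder recovers S by exhaustively scoring each size-K candidate support. All names, sizes, and the Gaussian linear model below are illustrative assumptions, not the paper's construction.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, K, samples = 6, 2, 200
S_true = (0, 3)                      # salient variables: Y depends only on these

X = rng.standard_normal((samples, N))
Y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.standard_normal(samples)

# Brute-force decoder: score every size-K candidate support by its
# least-squares residual (the ML decoder under Gaussian noise) and keep the best.
def residual(S):
    B = X[:, list(S)]
    coef = np.linalg.lstsq(B, Y, rcond=None)[0]
    return float(np.linalg.norm(Y - B @ coef))

S_hat = min(itertools.combinations(range(N), K), key=residual)
print(S_hat)   # best-scoring candidate support
```

The exhaustive search over C(N, K) candidates is exactly what makes the channel-coding analogy natural: each candidate support plays the role of a codeword, and the sample-complexity question is how many rows of (X, Y) are needed before the true "codeword" is reliably distinguishable.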
Compressive MUSIC: a missing link between compressive sensing and array signal processing
2011
Performance bounds for sparsity pattern recovery with quantized noisy random projections
IEEE Journal of Selected Topics in Signal Processing, Special Issue on Robust Measures and Tests Using Sparse Data for Detection and Estimation, 2012
Cited by 4 (1 self)
In this paper, we study the performance limits of recovering the support of a sparse signal based on quantized noisy random projections. Although the problem of support recovery of sparse signals from real-valued noisy projections with different types of projection matrices has been addressed by several authors in the recent literature, very few attempts have been made at the same problem with quantized compressive measurements. In this paper, we derive performance limits for support recovery of sparse signals when the quantized, noise-corrupted compressive measurements are sent to the decoder over additive white Gaussian noise channels. We derive sufficient conditions that ensure perfect recovery of the sparsity pattern of a sparse signal from coarsely quantized noisy random projections when the maximum likelihood decoder is used. More specifically, we find the relationships among the parameters, namely the signal dimension N, the sparsity index K, the number of noisy projections M, the number of quantization levels L, and the measurement signal-to-noise ratio, that ensure asymptotically reliable recovery of the support of sparse signals when the entries of the measurement matrix are drawn from a Gaussian ensemble.
Subspace Recovery From Structured Union of Subspaces
Abstract — Lower-dimensional signal representation schemes frequently assume that the signal of interest lies in a single vector space. In the context of the recently developed theory of compressive sensing, it is often assumed that the signal of interest is sparse in an orthonormal basis. However, in many practical applications, this requirement may be too restrictive. A generalization of the standard sparsity assumption is that the signal lies in a union of subspaces. Recovery of such signals from a small number of samples has been studied recently in several works. Here, we consider the problem of subspace recovery only, in which our goal is to identify the subspace (from the union) in which the signal lies using a small number of samples, in the presence of noise. More specifically, we derive performance bounds and conditions under which reliable subspace recovery is guaranteed using maximum likelihood (ML) estimation. We begin by treating general unions and then obtain results for the special case in which the subspaces have structure leading to block sparsity. In our analysis, we treat both general sampling operators and random sampling matrices. For general unions, we show that under certain conditions, the number of measurements required for reliable subspace recovery in the presence of noise via ML is less than that implied by the restricted isometry property, which guarantees complete signal recovery. In the special case of block-sparse signals, we quantify the gain achievable over standard sparsity in subspace recovery. Our results also strengthen existing results on sparse support recovery in the presence of noise under the standard sparsity model. Index Terms — Maximum likelihood estimation, union of linear subspaces, subspace recovery, compressive sensing, block sparsity.
Measurement Matrix Design for Compressive Sensing Based MIMO Radar