Results 1–10 of 125
The benefit of group sparsity
, 2009
Cited by 67 (6 self)
This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group-sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
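The group regularizer being compared here penalizes whole blocks of coefficients via the sum of group norms; its proximal operator is block soft-thresholding, which zeros entire groups at once and is the workhorse of common group-Lasso solvers. A minimal sketch (the group partition, weight `lam`, and test vector are illustrative, not from the paper):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2:
    each group is shrunk toward zero as a block, so whole groups vanish
    together -- the behavior the abstract contrasts with standard Lasso."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

# A strongly group-sparse vector: one active group, one weakly active group.
x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
z = group_soft_threshold(x, groups, lam=1.0)
```

Because the shrinkage acts on the group norm, the weakly active group `[0.1, -0.1]` is eliminated as a block while the strong group is only scaled, which is the mechanism that favors group Lasso on strongly group-sparse signals.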
Learning with Structured Sparsity
Cited by 58 (5 self)
This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. Experiments demonstrate the advantage of structured sparsity over standard sparsity.
Bayesian Compressed Sensing via Belief Propagation
, 2010
Cited by 51 (12 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
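The signal model and measurement scaling in this abstract are easy to make concrete; the BP decoder itself is too long to sketch here, so the snippet below only sets up the two-state mixture Gaussian signal, a sparse ±1 encoding matrix, and an O(K log N) measurement budget (the constant 2, the variances, and the row weight `L` are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 1024, 32                      # signal length and number of large coefficients
M = int(np.ceil(2 * K * np.log(N)))  # measurement count on the O(K log N) scale

# Two-state mixture Gaussian prior: a coefficient is "large" (high variance)
# with probability K/N, and "small" (low variance) otherwise.
state = rng.random(N) < K / N
x = np.where(state, rng.normal(0, 10.0, N), rng.normal(0, 0.1, N))

# CS-BP relies on a sparse encoding matrix so the factor graph stays small;
# here each row has only L nonzero +/-1 entries (an illustrative choice).
L = 8
Phi = np.zeros((M, N))
for i in range(M):
    cols = rng.choice(N, size=L, replace=False)
    Phi[i, cols] = rng.choice([-1.0, 1.0], size=L)

y = Phi @ x   # the measurements a BP decoder would invert
```

The sparse rows keep the degree of each factor node at `L`, which is what makes message passing over the graphical model tractable at this scale.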
Exploiting structure in wavelet-based Bayesian compressive sensing
, 2009
Cited by 43 (9 self)
Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
Kalman filtered compressed sensing
 in Proc. IEEE Int. Conf. Image Processing (ICIP), 2008
Cited by 41 (13 self)
We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear “incoherent” measurements, in real time. The signals are sparse in some transform domain referred to as the sparsity basis. For a single spatial signal, the solution is provided by Compressed Sensing (CS). The question we address is: for a sequence of sparse signals, can we do better than CS if (a) the sparsity pattern of the signal’s vector of transform coefficients changes slowly over time, and (b) a simple prior model on the temporal dynamics of its current nonzero elements is available? The overall idea of our solution is to use CS to estimate the support set of the initial signal’s transform vector. At future times, we run a reduced-order Kalman filter on the currently estimated support and estimate new additions to the support set by applying CS to the Kalman innovations or filtering error (whenever it is “large”). Index Terms: compressed sensing, Kalman filtering, compressive sampling, sequential MMSE estimation
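The recursion this abstract describes, a reduced-order Kalman filter on the currently estimated support followed by a test on the filtering error for new support entries, can be sketched as follows. This is a simplified stand-in: random-walk dynamics and a one-step correlation threshold replace the paper's full CS decode, and all dimensions and parameter values are illustrative.

```python
import numpy as np

def kfcs_step(y, A, T, xhat, P, sig_obs2, sig_sys2, thresh):
    """One KF-CS time step: Kalman filter restricted to the current
    support T, then a check of the filtering error for support
    additions (a simple correlation test stands in for the CS decode
    that the paper applies when the error is "large")."""
    AT = A[:, T]
    m = A.shape[0]
    P = P + sig_sys2 * np.eye(len(T))             # predict (random-walk dynamics)
    S = AT @ P @ AT.T + sig_obs2 * np.eye(m)      # innovation covariance
    K = P @ AT.T @ np.linalg.inv(S)               # Kalman gain
    xhat = xhat + K @ (y - AT @ xhat)             # update on support T
    P = (np.eye(len(T)) - K @ AT) @ P
    err = y - AT @ xhat                           # filtering error
    corr = np.abs(A.T @ err)
    corr[T] = 0.0
    additions = np.flatnonzero(corr > thresh)     # candidate new support entries
    return xhat, P, additions

rng = np.random.default_rng(1)
m, n = 15, 30
A = rng.normal(size=(m, n)) / np.sqrt(m)
T = [3, 11]                                       # support from the initial CS step
x_true = np.array([2.0, -1.5])
y = A[:, T] @ x_true                              # noiseless measurements at this time

xhat, P, additions = kfcs_step(y, A, T, np.zeros(2), np.eye(2), 1e-4, 1e-2, 0.5)
```

With a static true support, the filter converges to the true coefficients and the detection test stays quiet; a genuine support change would leave energy in `err` aligned with the new columns and trip the threshold.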
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Cited by 37 (24 self)
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting, and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be non-stationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
A unified Bayesian framework for MEG/EEG source imaging
 Neuroimage
, 2009
Cited by 25 (2 self)
The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to effect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization, including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed, and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include weighted minimum ℓ2-norm, FOCUSS, MCE, VESTAL, sLORETA, ReML and covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination (ARD). Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.
Low-Dimensional Models for Dimensionality Reduction and Signal Recovery: A Geometric Perspective
, 2009
Cited by 18 (10 self)
We compare and contrast, from a geometric perspective, a number of low-dimensional signal models that support stable, information-preserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal models, point clouds, and manifold signal models. Each model has a particular geometrical structure that enables signal information to be stably preserved via a simple linear and non-adaptive projection to a much lower-dimensional space whose dimension is either independent of the ambient dimension at best or grows logarithmically with it at worst. As a bonus, we point out a common misconception related to probabilistic compressible signal models; namely, that the generalized Gaussian and Laplacian random models do not support stable linear dimensionality reduction.
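The stable-preservation claim can be checked numerically in the simplest case: distances between K-sparse vectors under a non-adaptive Gaussian projection whose dimension M grows only logarithmically in the ambient dimension N (the constant 4, the trial count, and the pass/fail tolerances are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 512, 8
M = int(4 * K * np.log(N / K))   # grows only logarithmically with N

# Random, non-adaptive linear projection to M dimensions.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

def sparse_vec(rng, N, K):
    """A random K-sparse vector in R^N."""
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
    return x

# Distances between pairs of K-sparse signals should be nearly preserved:
# the difference of two K-sparse vectors is (at most) 2K-sparse.
ratios = []
for _ in range(200):
    d = sparse_vec(rng, N, K) - sparse_vec(rng, N, K)
    ratios.append(np.linalg.norm(Phi @ d) / np.linalg.norm(d))
ratios = np.array(ratios)
```

All length ratios concentrate near 1 even though M is far below N, the restricted-isometry flavor of the geometric argument in the abstract; the same experiment with a heavy-tailed (e.g. Laplacian) random signal model, rather than an exactly sparse one, illustrates the misconception noted at the end.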
Compressive Estimation of Doubly Selective Channels: Exploiting Channel Sparsity to Improve Spectral Efficiency in Multicarrier Transmissions
Cited by 17 (1 self)
We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A pilot-assisted channel estimation technique using the methodology of compressed sensing (CS) is proposed. By exploiting a channel’s delay-Doppler sparsity, CS-based channel estimation allows an increase in spectral efficiency through a reduction of the number of pilot symbols that have to be transmitted. We also present an extension of our basic channel estimator that employs a sparsity-improving basis expansion. We propose a framework for optimizing the basis and an iterative approximate basis optimization algorithm. Simulation results using three different CS recovery algorithms demonstrate significant performance gains (in terms of improved estimation accuracy or reduction of the number of pilots) relative to conventional least-squares estimation, as well as substantial advantages of using an optimized basis.
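A toy version of the pilot-assisted idea: a channel that is sparse in the delay domain is estimated from far fewer pilot subcarriers than conventional least-squares estimation would need. Plain orthogonal matching pursuit stands in for the paper's three CS recovery algorithms, and the dimensions, pilot pattern, and noiseless setting are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, S = 256, 4          # subcarriers, and active channel taps (delay-domain sparsity)
P = 64                 # pilot subcarriers: well below N, the spectral-efficiency gain

# Sparse delay-domain channel; its frequency response is F @ h.
h = np.zeros(N, dtype=complex)
taps = rng.choice(64, S, replace=False)           # taps within a short delay spread
h[taps] = rng.normal(size=S) + 1j * rng.normal(size=S)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)            # unitary DFT matrix
pilots = np.sort(rng.choice(N, P, replace=False))
A = F[pilots, :]                                  # CS measurement matrix: pilot rows
y = A @ h                                         # received pilot observations

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the residual, then least-squares refit on the
    selected support."""
    support, resid, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ resid)))
        if j in support:            # residual exhausted; nothing new to pick
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
        if np.linalg.norm(resid) < 1e-12 * np.linalg.norm(y):
            break
    hhat = np.zeros(A.shape[1], dtype=complex)
    hhat[support] = coef
    return hhat

hhat = omp(A, y, 2 * S)   # extra iterations give slack in the noiseless case
```

Only the `P` pilot rows of the DFT matrix are observed, yet the delay-domain sparsity lets the greedy recovery pin down the full channel response, which is exactly the pilot-reduction argument of the abstract.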
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process
, 2011
Cited by 17 (6 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of a randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in the perspective of practical applications. Index Terms: approximation algorithms, compressed sensing, compression algorithms, data acquisition, data compression, sampling methods.