Results 1–10 of 330
Learning with Structured Sparsity
"... This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea. A gener ..."
Abstract

Cited by 127 (15 self)
 Add to MetaCart
(Show Context)
Abstract: This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. Experiments demonstrate the advantage of structured sparsity over standard sparsity.
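The structured greedy algorithm is only mentioned at a high level above. As a rough illustration of greedy selection over a structured (grouped) feature set, the Python sketch below runs a plain group-greedy / block-OMP style loop; the function name and the fixed group-count stopping rule are ours, and the paper's actual algorithm scores candidate blocks by coding complexity rather than by a group budget.

```python
import numpy as np

def group_greedy(A, y, groups, n_groups_to_select):
    """Minimal group-greedy sketch (illustration only, not the paper's algorithm).

    A       : (m, n) design/measurement matrix
    y       : (m,) observations
    groups  : list of index arrays partitioning {0, ..., n-1}
    """
    m, n = A.shape
    selected = []                      # indices of chosen groups
    residual = y.copy()
    for _ in range(n_groups_to_select):
        # score each unselected group by the energy of its correlation with the residual
        scores = [np.linalg.norm(A[:, g].T @ residual) if gi not in selected else -np.inf
                  for gi, g in enumerate(groups)]
        selected.append(int(np.argmax(scores)))
        support = np.concatenate([groups[g] for g in selected])
        # refit by least squares on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x, selected
```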
Bayesian compressive sensing via belief propagation
IEEE Trans. Signal Processing, 2010
"... Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, subNyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can comple ..."
Abstract

Cited by 125 (19 self)
 Add to MetaCart
Abstract: Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding is provided using sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.
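For intuition about why sparse encoding matrices make both encoding and BP decoding cheap, here is a small Python sketch (our own construction, not the paper's exact matrix ensemble) that builds a measurement matrix with only a few nonzeros per column, so the bipartite graph between coefficients and measurements stays sparse.

```python
import numpy as np

def sparse_measurement_matrix(m, n, d=8, rng=None):
    """Sketch of a sparse CS encoding matrix: each column has only d nonzero
    entries (random +/-1 here), so each coefficient connects to just d
    measurement nodes in the graphical model used by BP decoding."""
    rng = np.random.default_rng(rng)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)   # d measurement nodes per coefficient
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=d)
    return Phi

# Stored sparsely, each column touches only d measurements,
# so encoding y = Phi @ x costs on the order of d*N operations.
```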
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing
"... Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows to recover this signal from much fewer samples than the ShannonNyquist theory requires. Many images ..."
Abstract

Cited by 119 (6 self)
 Add to MetaCart
(Show Context)
Abstract: Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows one to recover this signal from far fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms and others. Generally, wavelets represent point-like features while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ɛ with the measurement matrix Φ ∈ ℝ^(K×N), K ≪ N, we consider a jointly sparsity-constrained optimization problem of the form argmin_u {‖Λ_c Ψ_c u‖₁ + ‖Λ_w Ψ_w u‖₁ + (1/2)‖f − Φu‖₂²}. Here Ψ_c and Ψ_w are the transform matrices corresponding to the two frames, and the diagonal matrices Λ_c, Λ_w contain the weights for the frame coefficients. We present efficient iteration methods to solve the optimization problem, based on alternating split Bregman algorithms. The convergence of the proposed iteration schemes is proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed-sensing-based Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
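As a rough illustration of how a weighted-l1-plus-quadratic objective of this kind can be minimized by iterative shrinkage, the Python sketch below runs a plain ISTA loop on a single-frame version of the problem, with A standing in for ΦΨᵀ (Ψ orthonormal) and w for the diagonal of Λ. It is a simplified stand-in under those assumptions, not the paper's alternating split Bregman scheme.

```python
import numpy as np

def ista_weighted_l1(A, f, w, n_iter=200):
    """Minimal ISTA sketch for  min_x  ||w * x||_1 + 0.5 * ||f - A x||_2^2.

    With an orthonormal frame Psi, A plays the role of Phi @ Psi.T and
    w the (vector of) weights on the frame coefficients.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - f)                    # gradient of the data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)   # weighted soft thresholding
    return x
```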
The benefit of group sparsity
2009
"... This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly groupsparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying ..."
Abstract

Cited by 118 (12 self)
 Add to MetaCart
Abstract: This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
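For readers unfamiliar with the group Lasso penalty that this theory analyzes, the Python sketch below shows its proximal operator (block soft thresholding), which zeroes out or shrinks whole groups at once. It illustrates the penalty's behaviour only; it is not part of the paper's analysis.

```python
import numpy as np

def group_soft_threshold(x, groups, thresh):
    """Proximal operator of the group-Lasso penalty  thresh * sum_g ||x_g||_2.

    Each group is either kept (and shrunk toward zero) or removed as a block,
    which is the group-sparse structure the strong-group-sparsity theory studies.
    """
    out = np.zeros_like(x)
    for g in groups:                       # g: array of indices forming one group
        norm_g = np.linalg.norm(x[g])
        if norm_g > thresh:
            out[g] = (1.0 - thresh / norm_g) * x[g]
    return out
```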
Structured compressed sensing: From theory to applications
IEEE Trans. Signal Process., 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discretetodiscrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract

Cited by 104 (16 self)
 Add to MetaCart
(Show Context)
Abstract: Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in the perspective of practical applications.
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
"... Nonparametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this nonparametric method naturally infers ..."
Abstract

Cited by 92 (34 self)
 Add to MetaCart
(Show Context)
Abstract: Nonparametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this nonparametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
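To illustrate how a beta process prior lets the dictionary size be inferred rather than fixed, here is a small Python sketch of the usual finite (truncated) beta-Bernoulli approximation. The function and parameter names are ours, and the paper's full model additionally learns the dictionary atoms, weights and noise variance.

```python
import numpy as np

def beta_bernoulli_usage(n_patches, K=256, a=1.0, b=1.0, rng=None):
    """Finite (truncated) approximation of a beta-Bernoulli process prior:
    pi_k  ~ Beta(a/K, b*(K-1)/K)  controls how often dictionary atom k is used,
    z_ik  ~ Bernoulli(pi_k)       says whether image patch i uses atom k.
    For large K only a modest number of atoms end up used by many patches,
    which is how the nonparametric prior effectively infers the dictionary size."""
    rng = np.random.default_rng(rng)
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)
    Z = rng.random((n_patches, K)) < pi          # binary atom-usage matrix
    return Z, pi
```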
Exploiting structure in wavelet-based Bayesian compressive sensing
2009
"... Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a ..."
Abstract

Cited by 91 (14 self)
 Add to MetaCart
(Show Context)
Abstract: Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
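The "structure within the wavelet coefficients" referred to above is the familiar persistence-across-scales property of wavelet trees: significant coefficients tend to have significant parents. The toy Python sketch below is our own illustration of that idea, not the paper's hierarchical model; it generates a significance map in which a child coefficient is likely to be active only when its parent is.

```python
import numpy as np

def children(level, k):
    """Indices of the two children of coefficient (level, k) in a 1-D dyadic wavelet tree."""
    return (level + 1, 2 * k), (level + 1, 2 * k + 1)

def sample_tree_sparse(levels, p_root=0.9, p_on=0.6, p_off=0.05, rng=None):
    """Toy spike-and-slab significance map with persistence across scales:
    a child is 'on' with high probability only if its parent is 'on'."""
    rng = np.random.default_rng(rng)
    states = {(0, 0): rng.random() < p_root}
    for level in range(levels - 1):
        for k in range(2 ** level):
            parent_on = states[(level, k)]
            for child in children(level, k):
                p = p_on if parent_on else p_off
                states[child] = rng.random() < p
    return states
```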
Kalman filtered compressed sensing
In Proc. IEEE Int. Conf. Image Processing (ICIP), 2008
"... We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and timevarying sparsity patterns) from a limited number of linear “incoherent ” measurements, in realtime. The signals are sparse in some transform domain referred to as the sparsity basis. For a si ..."
Abstract

Cited by 90 (19 self)
 Add to MetaCart
(Show Context)
Abstract: We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear “incoherent” measurements, in real time. The signals are sparse in some transform domain referred to as the sparsity basis. For a single spatial signal, the solution is provided by Compressed Sensing (CS). The question that we address is, for a sequence of sparse signals, can we do better than CS, if (a) the sparsity pattern of the signal’s transform coefficients’ vector changes slowly over time, and (b) a simple prior model on the temporal dynamics of its current nonzero elements is available. The overall idea of our solution is to use CS to estimate the support set of the initial signal’s transform vector. At future times, we run a reduced-order Kalman filter with the currently estimated support and estimate new additions to the support set by applying CS to the Kalman innovations or filtering error (whenever it is “large”). Index Terms/Keywords: compressed sensing, Kalman filtering, compressive sampling, sequential MMSE estimation
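Since the abstract spells out the algorithmic idea, a compact Python sketch may help make it concrete. The code below is a simplified reading of that idea: our own correlation-based support heuristic and ad-hoc thresholds stand in for the CS steps the paper actually uses, so this is not the authors' exact algorithm.

```python
import numpy as np

def kalman_filtered_cs(A, ys, sigma_w2=0.01, sigma_v2=0.01, tau=None):
    """Simplified Kalman-filtered CS sketch.

    A  : (m, n) measurement matrix;  ys : list of measurements y_t = A x_t + noise.
    The support is initialized with a crude thresholded-correlation proxy (a real CS
    solver would be used in practice), a reduced-order Kalman filter tracks the
    coefficients on that support, and a large filtering error triggers support growth."""
    m, n = A.shape
    if tau is None:
        tau = 3 * np.sqrt(sigma_v2)

    def estimate_support(residual, k=10):
        # stand-in for a CS solver: pick the k columns most correlated with the residual
        scores = np.abs(A.T @ residual)
        return set(np.argsort(scores)[-k:])

    T = sorted(estimate_support(ys[0]))
    x_hat = np.zeros(n)
    P = np.eye(len(T))                       # error covariance on the current support
    estimates = []

    for y in ys:
        AT = A[:, T]
        P = P + sigma_w2 * np.eye(len(T))    # prediction: random-walk dynamics
        S = AT @ P @ AT.T + sigma_v2 * np.eye(m)
        K = P @ AT.T @ np.linalg.solve(S, np.eye(m))   # Kalman gain
        innov = y - AT @ x_hat[T]
        x_hat[T] = x_hat[T] + K @ innov
        P = (np.eye(len(T)) - K @ AT) @ P

        # if the filtering error is "large", look for new support elements
        if np.linalg.norm(innov) > tau * np.sqrt(m):
            new = estimate_support(innov, k=2) - set(T)
            if new:
                T = sorted(set(T) | new)
                P = np.eye(len(T))           # re-initialize covariance (simplification)
        estimates.append(x_hat.copy())
    return estimates
```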
Fast Bayesian compressive sensing using Laplace priors
In IEEE Int. Conf. on Acoustics, Speech, and Sig. Proc. (ICASSP), 2009
"... In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree ..."
Abstract

Cited by 68 (11 self)
 Add to MetaCart
(Show Context)
Abstract: In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree of sparsity. We develop a constructive (greedy) algorithm resulting from this formulation where necessary parameters are estimated solely from the observation and therefore no user intervention is needed. We provide experimental results with synthetic 1D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach. Index Terms — Bayesian methods, compressive sensing, inverse problems, sparse Bayesian learning, relevance vector machine
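For background, a Laplace prior can be realized hierarchically as a Gaussian scale mixture with an exponential mixing density, which is the standard two-level construction behind priors of this kind; the exact hierarchy used in the paper may add further hyperprior levels. A sketch of the marginalization, in LaTeX:

```latex
% Laplace prior as a Gaussian scale mixture (standard background result):
%   x_i | \gamma_i ~ N(0, \gamma_i),   \gamma_i | \lambda ~ Exp(\lambda/2)
\int_0^\infty \mathcal{N}(x_i \mid 0,\gamma_i)\,
  \frac{\lambda}{2}\, e^{-\lambda \gamma_i / 2}\, d\gamma_i
  \;=\; \frac{\sqrt{\lambda}}{2}\, e^{-\sqrt{\lambda}\,\lvert x_i \rvert}
```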
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
IEEE J. Sel. Topics Signal Process., 2011
"... Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation and thus their performance degrades signific ..."
Abstract

Cited by 59 (15 self)
 Add to MetaCart
Abstract: We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation and thus their performance degrades significantly with the correlation. In this work, we propose a block sparse Bayesian learning framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity on the solution matrix. We also provide analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
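To make the problem setting concrete, the Python sketch below generates a toy MMV instance Y = AX + V in which X is row-sparse and each nonzero row follows an AR(1) process with correlation coefficient rho, i.e. the temporally correlated setting described above. All names and sizes are illustrative; recovering X from (A, Y) is what the proposed block-SBL algorithms address, and they are not implemented here.

```python
import numpy as np

def make_mmv_problem(m=25, n=100, L=30, k=5, rho=0.9, noise_std=0.01, rng=None):
    """Toy MMV generator: Y = A X + V, where X (n x L) has only k nonzero rows
    and each nonzero row is an AR(1) time series with correlation rho."""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((m, n)) / np.sqrt(m)         # measurement matrix
    X = np.zeros((n, L))
    rows = rng.choice(n, size=k, replace=False)          # the row support
    for r in rows:
        x = rng.standard_normal()
        for t in range(L):
            X[r, t] = x
            x = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal()  # AR(1) step
    Y = A @ X + noise_std * rng.standard_normal((m, L))
    return A, X, Y, rows
```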