Results 1–10 of 34
C-HiLasso: A collaborative hierarchical sparse modeling framework
IEEE Transactions on Signal Processing
Cited by 15 (3 self)
Abstract—Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an ℓ1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso at the individual feature level with the block-sparsity property of the Group Lasso, where sparse groups of features are jointly encoded, yielding a hierarchically structured sparsity pattern. The result is the Hierarchical Lasso (HiLasso), which offers important practical advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (within-group) level, yielding the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for applications such as source identification and separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The presentation of the framework and optimization approach is complemented by experimental examples and theoretical results regarding recovery guarantees. Index Terms—Collaborative coding, hierarchical models, sparse models, structured sparsity.
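The ℓ1-regularized encoding step that this abstract builds on can be illustrated with a minimal sketch. This is not the authors' HiLasso algorithm, only the plain Lasso solved by iterative soft thresholding (ISTA), on a hypothetical random dictionary `D` with a made-up 3-sparse ground truth:

```python
import numpy as np

def soft_threshold(v, t):
    # element-wise soft thresholding: the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(D, y, lam, n_iter=500):
    """Minimize 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

# toy problem: recover a 3-sparse code over a random 100-atom dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = D @ x_true
x_hat = lasso_ista(D, y, lam=0.05)
```

Group Lasso and HiLasso replace the element-wise soft threshold with group-wise shrinkage operators, which is what produces the hierarchically structured sparsity pattern the abstract describes.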
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
, 2010
Cited by 9 (2 self)
A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework in terms of structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that equal, often significantly exceed, or fall only marginally short of the best published ones, at a lower computational cost.
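The EM machinery behind this abstract can be sketched in its simplest form. The following is plain maximum-likelihood EM for a two-component 1-D Gaussian mixture, not the paper's MAP-EM (which additionally places priors on the mixture parameters) nor its image-patch setting; the data and initialization are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic 1-D data from two well-separated Gaussian clusters
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 300)])

# EM for a 2-component Gaussian mixture
w = np.array([0.5, 0.5])                   # mixture weights
mu = np.array([-1.0, 1.0])                 # component means (rough init)
sig = np.array([1.0, 1.0])                 # component standard deviations
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from the responsibilities
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

In the paper's setting each mixture component corresponds to a piecewise linear (Wiener-style) estimator, which is where the connection to structured sparsity arises.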
Learning sparse codes for hyperspectral imagery
 IEEE Journal of Selected Topics in Signal Processing
, 2011
Cited by 5 (1 self)
The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: i) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; ii) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution; and iii) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of classifier complexity and generalization from very small training sets.
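Unsupervised dictionary learning of the kind this abstract describes is commonly set up as alternating minimization. The sketch below uses the classic Method of Optimal Directions (MOD) update with a crude greedy coding step, on synthetic data rather than hyperspectral pixels; it is not the specific learning approach the paper modified:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic data: 200 signals, each a 3-sparse combination of 30 true atoms
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((30, 200))
for j in range(200):
    X_true[rng.choice(30, 3, replace=False), j] = rng.standard_normal(3)
Y = D_true @ X_true

def code_k(D, Y, k):
    # crude sparse coding: pick the k atoms most correlated with each signal,
    # then least-squares refit on that support
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        S = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        sol, *_ = np.linalg.lstsq(D[:, S], Y[:, j], rcond=None)
        X[S, j] = sol
    return X

def mod_update(Y, X):
    # Method of Optimal Directions: D = Y X^+, atoms renormalized to unit norm
    D = Y @ np.linalg.pinv(X)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)             # random initial dictionary
err_init = np.linalg.norm(Y - D @ code_k(D, Y, 3))
for _ in range(10):                        # alternate coding and dictionary update
    X = code_k(D, Y, 3)
    D = mod_update(Y, X)
err_final = np.linalg.norm(Y - D @ code_k(D, Y, 3))
```

The reconstruction error drops as the alternation proceeds; learned-dictionary HSI pipelines follow the same two-phase structure with more careful coding and update steps.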
Nonparametric Bayesian matrix completion
 In SAM
, 2010
Cited by 4 (0 self)
Abstract—Beta-Binomial processes are considered for inferring missing values in matrices. The model moves beyond the low-rank assumption, modeling the matrix columns as residing in a nonlinear subspace. Large-scale problems are handled via efficient Gibbs sampling, yielding predictions as well as a measure of confidence in each prediction. Algorithm performance is evaluated on several datasets, with encouraging results relative to existing approaches.
The Hierarchical Beta Process for Convolutional Factor Analysis and Deep Learning
Cited by 4 (0 self)
A convolutional factor-analysis model is developed, with the number of filters (factors) inferred via the beta process (BP) and hierarchical BP, for single-task and multi-task learning, respectively. The computation of the model parameters is implemented within a Bayesian setting, employing Gibbs sampling; we explicitly exploit the convolutional nature of the expansion to accelerate computations. The model is used in a multilevel (“deep”) analysis of general data, with specific results presented for image-processing data sets, e.g., classification.
On the Integration of Topic Modeling and Dictionary Learning
Cited by 3 (1 self)
A new nonparametric Bayesian model is developed to integrate dictionary learning and topic modeling into a unified framework. The model is employed to analyze partially annotated images, with the dictionary learning performed directly on image patches. Efficient inference is performed with a Gibbs-slice sampler, and encouraging results are reported on widely used datasets.
Sparse Signal Recovery and Acquisition with Graphical Models: A review of a broad set of sparse models, analysis tools, and recovery algorithms within the graphical models formalism
, 2010
Cited by 3 (1 self)
Many applications in digital signal processing, machine learning, and communications feature a linear regression problem in which unknown data points, hidden variables, or code words are projected into a lower-dimensional space via y = Φx + n. (1) In the signal processing context, we refer to x ∈ R^N as the signal, y ∈ R^M as the measurements with M < N, Φ ∈ R^{M×N} as the measurement matrix, and n ∈ R^M as the noise. The measurement matrix Φ is a matrix with random entries in data streaming, an overcomplete dictionary of features in sparse Bayesian learning, or a code matrix in communications [1]–[3]. Extracting x from y in (1) is ill posed in general since M < N and the measurement matrix Φ hence has a nontrivial null space; given any vector v in this null space, x + v defines a solution that produces the same observations y. Additional information is therefore necessary to distinguish the true x among the infinitely many possible solutions [1], [2], [4], [5]. It is now well known that sparse …
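Among the recovery algorithms such a review covers, greedy orthogonal matching pursuit (OMP) is one of the simplest, and it also demonstrates how sparsity resolves the null-space ambiguity of the underdetermined model above. A minimal sketch on a made-up noiseless instance of (1):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, k = 20, 50, 3
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm columns
x = np.zeros(N)
x[[3, 17, 42]] = [2.0, -1.0, 1.5]          # the true 3-sparse signal
y = Phi @ x                                # M < N measurements, noiseless

def omp(Phi, y, k):
    """Greedily select the atom most correlated with the residual, k times."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef     # residual after least-squares refit
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
```

With far fewer measurements than unknowns, the sparsity prior alone singles out the true x among the infinitely many solutions consistent with y.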
Discriminative Sparse Representations in Hyperspectral Imagery
, 2010
Cited by 2 (1 self)
Recent advances in sparse modeling and dictionary learning for discriminative applications show high potential for numerous classification tasks. In this paper, we show that highly accurate material classification from hyperspectral imagery (HSI) can be obtained with these models, even when the data is reconstructed from a very small percentage of the original image samples. The proposed supervised HSI classification uses a measure that accounts for both the reconstruction error and the sparsity level of sparse representations based on class-dependent learned dictionaries. Combining the dictionaries learned for the different materials, a linear mixing model is derived for subpixel classification. Results with real hyperspectral data cubes are shown for both urban and non-urban terrain.
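The reconstruction-error classification rule described above can be sketched in a few lines. The sketch substitutes plain least-squares fits for the paper's sparse codes (and ignores the sparsity-level term in the score) to stay short; the two "material" dictionaries are random stand-ins for learned ones:

```python
import numpy as np

rng = np.random.default_rng(5)
# two stand-in "material" dictionaries with unit-norm atoms (in the paper
# these would be learned, class by class, from labeled hyperspectral pixels)
D1 = rng.standard_normal((40, 15))
D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((40, 15))
D2 /= np.linalg.norm(D2, axis=0)

def recon_error(D, y):
    # least-squares reconstruction error of y using dictionary D
    x, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ x)

# a pixel synthesized from D1 should be reconstructed better by D1 than by D2
y = D1 @ rng.standard_normal(15)
label = 0 if recon_error(D1, y) < recon_error(D2, y) else 1
```

The subpixel extension in the abstract concatenates the class dictionaries and reads off mixing proportions from the magnitudes of the coefficients assigned to each class's block.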
The Kernel Beta Process
Cited by 2 (0 self)
A new Lévy process prior is proposed for an uncountable collection of covariate-dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample (“customer”), and latent covariates learned for each feature (“dish”). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to “consider” a dish, based on the distance in covariate space between the customer and the dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
Dimensionality Reduction Using the Sparse Linear Model
Cited by 2 (1 self)
We propose an approach for linear unsupervised dimensionality reduction, based on the sparse linear model that has been used to probabilistically interpret sparse coding. We formulate an optimization problem for learning a linear projection from the original signal domain to a lower-dimensional one in a way that approximately preserves, in expectation, pairwise inner products in the sparse domain. We derive solutions to the problem, present nonlinear extensions, and discuss relations to compressed sensing. Our experiments using facial images, texture patches, and images of object categories suggest that the approach can improve our ability to recover meaningful structure in many classes of signals.
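As a point of contrast for the learned projection described above, the standard baseline is a random linear projection, which preserves norms (and hence pairwise distances) only in the generic Johnson-Lindenstrauss sense rather than inner products in the sparse domain. A minimal check, with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 200, 1000, 100                       # samples, ambient dim, reduced dim
X = rng.standard_normal((n, d))                # signals in the original domain
P = rng.standard_normal((m, d)) / np.sqrt(m)   # random linear projection
Y = X @ P.T

# norms survive the projection up to a small multiplicative distortion
ratios = np.linalg.norm(Y, axis=1) / np.linalg.norm(X, axis=1)
```

The paper's contribution is to learn P from the sparse linear model so that it is the sparse-domain inner products, rather than this generic Euclidean geometry, that are preserved.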