Results 1–9 of 9
The Kernel Beta Process
Abstract

Cited by 2 (0 self)
A new Lévy process prior is proposed for an uncountable collection of covariate-dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample (“customer”), and latent covariates learned for each feature (“dish”). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to “consider” a dish, based on the distance in covariate space between the customer and dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
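The two-stage "consider, then select" mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: the function names (`rbf`, `kbp_draw`), the squared-exponential kernel, and the finite dish list are all assumptions for the sketch; the real KBP places a Lévy process prior over an infinite collection of dishes.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, xk, bw=1.0):
    """Squared-exponential kernel: equals 1 when the customer and dish
    share the same covariate, and decays with covariate distance."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(xk)) ** 2) / (2 * bw ** 2))

def kbp_draw(cust_cov, dish_covs, dish_probs, bw=1.0, rng=rng):
    """Two-stage KBP-style draw: the customer first 'considers' dish k
    with kernel probability, then selects a considered dish with its
    beta-process probability pi_k."""
    z = np.zeros(len(dish_probs), dtype=bool)
    for k, (xk, pk) in enumerate(zip(dish_covs, dish_probs)):
        if rng.random() < rbf(cust_cov, xk, bw):  # consider the dish?
            z[k] = rng.random() < pk              # select as in the BP
    return z
```

As the kernel bandwidth grows, every dish is considered with probability approaching 1 and the draw reduces to an ordinary beta-process selection, matching the limiting case stated in the abstract.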
Distance Dependent Infinite Latent Feature Models
, 2011
Abstract

Cited by 2 (0 self)
Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (ddIBP), for modeling nonexchangeable data. It relies on a distance function defined between data points, biasing nearby data to share more features. The choice of distance function allows for many kinds of dependencies, including temporal or spatial. Further, the original IBP is a special case of the ddIBP. In this paper, we develop the ddIBP and theoretically characterize the distribution of how features are shared between data. We derive a Markov chain Monte Carlo sampler for a linear Gaussian model with a ddIBP prior and study its performance on several data sets for which exchangeability is not a reasonable assumption.
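The distance-biased sharing described above can be illustrated with a decay function. This is a simplified sketch, not the full ddIBP generative process (which also involves self-connections and the IBP mass parameter); the names `exp_decay` and `connection_probs` are invented for the illustration.

```python
import numpy as np

def exp_decay(d, a=1.0):
    """Exponential decay f(d) = exp(-d / a): weight near 1 for nearby
    points, near 0 for distant ones."""
    return np.exp(-np.asarray(d, dtype=float) / a)

def connection_probs(dists_to_owners, a=1.0):
    """Normalized connection probabilities from one data point to the
    earlier points already holding a feature; nearer owners are more
    likely to be the point the feature is inherited from."""
    w = exp_decay(dists_to_owners, a)
    return w / w.sum()
```

Swapping in a window decay `f(d) = 1[d <= w]` with infinite width makes all points equally "near", which is one way to see how the exchangeable IBP arises as a special case.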
Coded Hyperspectral Imaging and Blind Compressive Sensing
Abstract

Cited by 1 (0 self)
Blind compressive sensing (CS) is considered for reconstruction of hyperspectral data imaged by a coded aperture camera. The measurements are manifested as a superposition of the coded wavelength-dependent data, with the ambient three-dimensional hyperspectral datacube mapped to a two-dimensional measurement. The hyperspectral datacube is recovered using a Bayesian implementation of blind CS. Several demonstration experiments are presented, including measurements performed using a coded aperture snapshot spectral imager (CASSI) camera. The proposed approach is capable of efficiently reconstructing large hyperspectral datacubes. Comparisons are made between the proposed algorithm and other techniques employed in compressive sensing, dictionary learning and matrix factorization. Index Terms—hyperspectral images, image reconstruction, projective transformation, dictionary learning, nonparametric Bayesian, beta-Bernoulli model, coded aperture snapshot spectral imager (CASSI).
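The mapping of a three-dimensional datacube to a two-dimensional coded measurement can be sketched as follows. This is a deliberately simplified forward model: a real CASSI system also applies a wavelength-dependent dispersive shear, which is omitted here, and the function name `cassi_measure` is an invention for the sketch.

```python
import numpy as np

def cassi_measure(cube, code):
    """Simplified coded-aperture forward model: every wavelength slice
    of the H x W x L datacube is masked by the same aperture code and
    the masked slices are summed into one H x W measurement."""
    return (cube * code[..., None]).sum(axis=-1)
```

The reconstruction problem is then to invert this many-to-one map, which is where the learned (blind CS) dictionary supplies the needed prior structure.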
A unifying representation for a class of dependent random measures
, 1211
Abstract

Cited by 1 (0 self)
We present a general construction for dependent random measures based on thinning Poisson processes on an augmented space. The framework is not restricted to dependent versions of a specific nonparametric model, but can be applied to all models that can be represented using completely random measures. Several existing dependent random measures can be seen as specific cases of this framework. Interesting properties of the resulting measures are derived and the efficacy of the framework is demonstrated by constructing a covariate-dependent latent feature model and topic model that obtain superior predictive performance.
Nonparametric discovery of activity patterns from video collections
 In CVPR Workshop on Perceptual Organization in Computer Vision
, 2012
Abstract

Cited by 1 (1 self)
We propose a nonparametric framework based on the beta process for discovering temporal patterns within a heterogeneous video collection. Starting from quantized local motion descriptors, we describe the long-range temporal dynamics of each video via transitions between a set of dynamical behaviors. Bayesian nonparametric statistical methods allow the number of such behaviors and the subset exhibited by each video to be learned without supervision. We extend the earlier beta process HMM in two ways: adding data-driven MCMC moves to improve inference on realistic datasets and allowing global sharing of behavior transition parameters. We illustrate discovery of intuitive and useful dynamical structure, at various temporal scales, from videos of simple exercises, recipe preparation, and Olympic sports. Segmentation and retrieval experiments show the benefits of our nonparametric approach.
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Abstract
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test, and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, selected uniformly at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature.
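The beta-Bernoulli dictionary construction can be illustrated generatively. This is a sketch under simplifying assumptions, not the paper's inference procedure: the name `bp_patch`, the Gaussian weight prior, and the fixed truncation level are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def bp_patch(D, pi, sigma_s=1.0, rng=rng):
    """Generative sketch of truncated beta-Bernoulli dictionary use:
    a binary vector z (z_k ~ Bernoulli(pi_k)) switches dictionary
    atoms on or off, Gaussian weights s scale the active atoms, and
    a patch is composed as x = D (z * s)."""
    K = D.shape[1]
    z = rng.random(K) < np.asarray(pi)
    s = rng.normal(0.0, sigma_s, K)
    return D @ (z * s)
```

Because the beta-process prior pushes most `pi_k` toward zero, each patch typically activates only a few atoms, which is the sparsity that makes recovery from compressive or incomplete measurements feasible.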
Augment-and-Conquer Negative Binomial Processes
Abstract
By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework. We develop fundamental properties of the models and derive efficient Gibbs sampling inference. We show that the gamma-NB process can be reduced to the hierarchical Dirichlet process with normalization, highlighting its unique theoretical, structural and computational advantages. A variety of NB processes with distinct sharing mechanisms are constructed and applied to topic modeling, with connections to existing algorithms, showing the importance of inferring both the NB dispersion and probability parameters.
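The gamma-Poisson mixture that connects the gamma and NB processes can be checked numerically. This is a standard identity rather than the paper's augmentation scheme itself, and the function name `nb_via_gamma_poisson` is an invention for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def nb_via_gamma_poisson(r, p, size, rng=rng):
    """Draw NB(r, p) counts through the gamma-Poisson mixture:
    lam ~ Gamma(shape=r, scale=p/(1-p)), then m | lam ~ Poisson(lam),
    giving the closed-form mean E[m] = r * p / (1 - p)."""
    lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=size)
    return rng.poisson(lam)

counts = nb_via_gamma_poisson(r=5.0, p=0.5, size=200_000)
```

Here `r` is the NB dispersion and `p` the probability parameter; the abstract's point is that both should be inferred rather than fixed, since together they control the mean and the overdispersion of the counts.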
Declaration
, 2011
Abstract
The attached document may provide the author's accepted version of a published work.
Stochastic Blockmodel with Cluster Overlap, Relevance Selection, and Similarity-Based Smoothing
Abstract
Stochastic blockmodels provide a rich, probabilistic framework for modeling relational data by expressing the objects being modeled in terms of a latent vector representation. This representation can be a latent indicator vector denoting the cluster membership (hard clustering), a vector of cluster membership probabilities (soft clustering), or more generally a real-valued vector (latent space representation). Recently, a new class of overlapping stochastic blockmodels has been proposed in which objects are allowed hard memberships in multiple clusters (in the form of a latent binary vector). This aspect captures the properties of many real-world networks in domains such as biology and social networks, where objects can simultaneously have memberships in multiple clusters owing to the multiple roles they may have. In this paper, we improve upon this model in three key ways:
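The overlapping-membership idea can be sketched as a bilinear link model. This is a generic illustration of the model class, not the specific likelihood of this paper; the function name `link_prob` and the logistic squashing are assumptions for the sketch.

```python
import numpy as np

def link_prob(z_i, z_j, W):
    """Edge probability under an overlapping blockmodel: z_i and z_j
    are binary multi-cluster membership vectors, W weights within-
    and between-cluster interactions, and the bilinear score is
    squashed through a logistic sigmoid."""
    score = np.asarray(z_i) @ np.asarray(W) @ np.asarray(z_j)
    return 1.0 / (1.0 + np.exp(-score))
```

With a symmetric `W` whose diagonal is positive, two objects sharing a cluster get a higher edge probability than two objects in disjoint clusters, which is the overlap effect the abstract describes.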