Dependent Hierarchical Beta Process for Image Interpolation and Denoising
Abstract

Cited by 7 (4 self)
A dependent hierarchical beta process (dHBP) is developed as a prior for data that may be represented in terms of a sparse set of latent features, with covariate-dependent feature usage. The dHBP is applicable to general covariates and data models, imposing that signals with similar covariates are likely to be manifested in terms of similar features. Coupling the dHBP with the Bernoulli process, and upon marginalizing out the dHBP, the model may be interpreted as a covariate-dependent hierarchical Indian buffet process. As applications, we consider interpolation and denoising of an image, with covariates defined by the location of image patches within an image. Two types of noise models are considered: (i) typical white Gaussian noise; and (ii) spiky noise of arbitrary amplitude, distributed uniformly at random. In these examples, the features correspond to the atoms of a dictionary, learned based upon the data under test (without a priori training data). State-of-the-art performance is demonstrated, with efficient inference using hybrid Gibbs, Metropolis-Hastings and slice sampling.
A stick-breaking construction of the beta process (Technical Report, 2009)
Abstract

Cited by 5 (4 self)
We present and derive a new stick-breaking construction of the beta process. The construction is closely related to a special case of the stick-breaking construction of the Dirichlet process (Sethuraman, 1994) applied to the beta distribution. We derive an inference procedure that relies on Monte Carlo integration to reduce the number of parameters to be inferred, and present results on synthetic data, the MNIST handwritten digits data set and a time-evolving gene expression data set.
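For context, the one-parameter special case has a particularly simple stick-breaking form (the IBP construction of Teh, Görür and Ghahramani, not the construction of this report): atom weights mu_k = prod_{l=1}^{k} nu_l with nu_l ~ Beta(alpha, 1). A minimal sketch assuming NumPy; the function name and parameters are illustrative:

```python
import numpy as np

def beta_process_sticks(alpha, num_atoms, seed=None):
    """Stick-breaking weights for the one-parameter beta process.

    Draws nu_l ~ Beta(alpha, 1) and returns mu_k = prod_{l<=k} nu_l,
    a strictly decreasing sequence of atom weights in (0, 1).
    """
    rng = np.random.default_rng(seed)
    nu = rng.beta(alpha, 1.0, size=num_atoms)
    return np.cumprod(nu)

weights = beta_process_sticks(alpha=2.0, num_atoms=10, seed=0)
```

In a Bernoulli-process (IBP) likelihood, `weights[k]` would play the role of the inclusion probability of feature k; truncating at `num_atoms` is the usual finite approximation.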
Nonparametric Bayesian matrix completion
In SAM, 2010
Abstract

Cited by 4 (0 self)
The Beta-Binomial processes are considered for inferring missing values in matrices. The model moves beyond the low-rank assumption, modeling the matrix columns as residing in a nonlinear subspace. Large-scale problems are considered via efficient Gibbs sampling, yielding predictions as well as a measure of confidence in each prediction. Algorithm performance is considered for several datasets, with encouraging performance relative to existing approaches.
Graphical Models for Biclustering ..., 2012
Abstract
The cell coordinates its biological response to the environment partly via the selective synthesis of thousands of unique RNA and protein molecules. Understanding the molecular biology of the cell is thus essential to the advancement of areas such as health care, agriculture, and energy production, but requires the ability to simultaneously acquire information about thousands of molecules in a sample. Recent high-throughput measurement technologies address this concern. While being useful, they generate a high volume of data and bring in methodological challenges, effectively shifting the bottleneck in molecular biology research from data acquisition to data analysis. In particular, an important challenge is the genome-wide ...
Books, 2013
Abstract
◮ Scientists co-authoring the same paper
◮ Readers reading the same book
◮ Internet users posting a message on the same forum
◮ Customers buying the same item
Bayesian nonparametrics and the probabilistic approach to modelling
Abstract
... be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian nonparametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian nonparametrics. The survey covers the use of Bayesian nonparametrics for modelling unknown functions, density estimation, clustering, time series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees, and Wishart processes. Key words: probabilistic modelling; Bayesian statistics; nonparametrics; machine learning.
Dependent Normalized Random Measures
Abstract
In this paper we propose two constructions of dependent normalized random measures, a class of nonparametric priors over dependent probability measures. Our constructions, which we call mixed normalized random measures (MNRM) and thinned normalized random measures (TNRM), involve (respectively) weighting and thinning parts of a shared underlying Poisson process before combining them together. We show that both MNRM and TNRM are marginally normalized random measures, resulting in well-understood theoretical properties. We develop marginal and slice samplers for both models, the latter necessary for inference in TNRM. In time-varying topic modeling experiments, both models exhibit superior performance over related dependent models such as the hierarchical Dirichlet process and the spatial normalized Gamma process.
Central Limit Theorems for an Indian Buffet Model with Random Weights
Abstract
The three-parameter Indian buffet process is generalized. The possibly different role played by customers is taken into account by suitable (random) weights. Various limit theorems are also proved for such a generalized Indian buffet process. Let L_n be the number of dishes tried by the first n customers, and let K̄_n = (1/n) ∑_{i=1}^{n} K_i, where K_i is the number of dishes tried by customer i. The asymptotic distributions of L_n and K̄_n, suitably centered and scaled, are obtained. The convergence turns out to be stable (and not only in distribution). As a particular case, the results apply to the standard (i.e., non-generalized) Indian buffet process.
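The quantities L_n and K̄_n are easy to simulate for the standard (non-generalized) case, where customer i shares an existing dish k with probability m_k/i (m_k being the number of previous customers who tried dish k) and then samples Poisson(α/i) new dishes. A minimal sketch assuming NumPy; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def simulate_ibp(alpha, n, seed=None):
    """Simulate the standard one-parameter Indian buffet process.

    Returns (L_n, K_bar): the total number of distinct dishes after
    n customers, and the average number of dishes per customer.
    """
    rng = np.random.default_rng(seed)
    counts = []                 # counts[k] = customers who tried dish k
    dishes_per_customer = []    # K_i for each customer i
    for i in range(1, n + 1):
        k_i = 0
        for k in range(len(counts)):
            if rng.random() < counts[k] / i:   # share dish k w.p. m_k / i
                counts[k] += 1
                k_i += 1
        new = rng.poisson(alpha / i)           # Poisson(alpha / i) new dishes
        counts.extend([1] * new)
        k_i += new
        dishes_per_customer.append(k_i)
    return len(counts), float(np.mean(dishes_per_customer))

L_n, K_bar = simulate_ibp(alpha=3.0, n=500, seed=0)
```

Since E[L_n] = α ∑_{i=1}^{n} 1/i grows like α log n while each K_i has mean α, `L_n` grows slowly with `n` and `K_bar` hovers near `alpha`, which is the centering the paper's limit theorems refine.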