Results 1–10 of 24
Dependent Hierarchical Beta Process for Image Interpolation and Denoising
Cited by 23 (11 self)
A dependent hierarchical beta process (dHBP) is developed as a prior for data that may be represented in terms of a sparse set of latent features, with covariate-dependent feature usage. The dHBP is applicable to general covariates and data models, imposing that signals with similar covariates are likely to be manifested in terms of similar features. Coupling the dHBP with the Bernoulli process, and upon marginalizing out the dHBP, the model may be interpreted as a covariate-dependent hierarchical Indian buffet process. As applications, we consider interpolation and denoising of an image, with covariates defined by the location of image patches within an image. Two types of noise models are considered: (i) typical white Gaussian noise; and (ii) spiky noise of arbitrary amplitude, distributed uniformly at random. In these examples, the features correspond to the atoms of a dictionary, learned based upon the data under test (without a priori training data). State-of-the-art performance is demonstrated, with efficient inference using hybrid Gibbs, Metropolis-Hastings and slice sampling.
A stick-breaking construction of the beta process (Technical Report, 2009)
Cited by 18 (6 self)
We present and derive a new stick-breaking construction of the beta process. The construction is closely related to a special case of the stick-breaking construction of the Dirichlet process (Sethuraman, 1994) applied to the beta distribution. We derive an inference procedure that relies on Monte Carlo integration to reduce the number of parameters to be inferred, and present results on synthetic data, the MNIST handwritten digits data set and a time-evolving gene expression data set.
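The report's new construction is not reproduced here. As a related point of reference, the earlier stick-breaking representation of the one-parameter beta process underlying the Indian buffet process, due to Teh, Görür and Ghahramani (2007), draws ν_i ~ Beta(α, 1) and sets the k-th feature probability to the running product μ_k = ∏_{i≤k} ν_i. A minimal truncated sketch (the function name and truncation level K are illustrative choices, not taken from the paper):

```python
import numpy as np

def ibp_stick_breaking_weights(alpha, K, rng=None):
    """Truncated stick-breaking draw of feature-inclusion probabilities.

    Teh/Goerur/Ghahramani (2007) construction for the one-parameter
    beta process / IBP: nu_i ~ Beta(alpha, 1), mu_k = prod_{i<=k} nu_i,
    giving a non-increasing sequence of probabilities in (0, 1).
    """
    rng = np.random.default_rng(rng)
    nu = rng.beta(alpha, 1.0, size=K)
    return np.cumprod(nu)

weights = ibp_stick_breaking_weights(alpha=2.0, K=50, rng=0)
assert np.all(np.diff(weights) <= 0)        # weights decay
assert np.all((weights > 0) & (weights < 1))
```

Because each weight multiplies in another Beta(α, 1) stick fraction, the sequence decays geometrically in expectation, which is why only finitely many features are used by any finite data set.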
Nonparametric Bayesian matrix completion (in SAM, 2010)
Cited by 10 (1 self)
Abstract—The Beta-Binomial processes are considered for inferring missing values in matrices. The model moves beyond the low-rank assumption, modeling the matrix columns as residing in a nonlinear subspace. Large-scale problems are considered via efficient Gibbs sampling, yielding predictions as well as a measure of confidence in each prediction. Algorithm performance is considered for several datasets, with encouraging performance relative to existing approaches.
Bayesian nonparametrics and the probabilistic approach to modelling
Cited by 8 (0 self)
… be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian nonparametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian nonparametrics. The survey covers the use of Bayesian nonparametrics for modelling unknown functions, density estimation, clustering, time series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief nontechnical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees, and Wishart processes. Key words: probabilistic modelling; Bayesian statistics; nonparametrics; machine learning.
Priors for random count matrices derived from a family of negative binomial processes (arXiv:1404.3331v2, 2014)
Cited by 4 (3 self)
We define a family of probability distributions for random count matrices with a potentially unbounded number of rows and columns. The three distributions we consider are derived from the gamma-Poisson, gamma-negative binomial (GNB), and beta-negative binomial (BNB) processes, which we refer to generically as a family of negative-binomial processes. Because the models lead to closed-form update equations within the context of a Gibbs sampler, they are natural candidates for nonparametric Bayesian priors over count matrices. A key aspect of our analysis is the recognition that, although the random count matrices within the family are defined by a row-wise construction, their columns can be shown to be independent and identically distributed; this fact is used to derive explicit formulas for drawing all the columns at once. Moreover, by analyzing these matrices’ combinatorial structure, we describe how to sequentially construct a column-i.i.d. random count matrix one row at a time, and derive the predictive distribution of a new row count vector with previously unseen features. We describe the similarities and differences between the three priors, and argue that the greater flexibility of the GNB and BNB processes, especially their ability to model overdispersed, heavy-tailed count data, makes these well suited to a wide variety of real-world applications. As an example of our framework, we construct a naive-Bayes text classifier to categorize a count vector to one of several existing random count matrices of different categories. The classifier supports an unbounded number of features, and unlike most existing methods, it does not require a predefined finite vocabulary to be shared by all the categories. Both the gamma and beta negative binomial processes are shown to significantly outperform the gamma-Poisson process when applied to document categorization.
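The gamma-negative binomial connection this abstract relies on is the classical mixture identity: a Poisson count whose rate is itself gamma-distributed is marginally negative binomial, with variance strictly exceeding its mean. A small simulation (parameter values are arbitrary, not from the paper) illustrates this overdispersion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Negative binomial counts as a gamma-Poisson mixture:
# lambda_i ~ Gamma(shape=r, scale=p/(1-p)), n_i ~ Poisson(lambda_i).
# Marginally n_i is negative binomial, with variance exceeding
# the mean (overdispersion), unlike a plain Poisson.
r, p = 2.0, 0.6
lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=100_000)
counts = rng.poisson(lam)

mean, var = counts.mean(), counts.var()
# In this parameterization: mean = r*p/(1-p) = 3, var = mean/(1-p) = 7.5
assert var > mean   # overdispersed
```

Heavy-tailed count data (word counts in documents being the canonical example) violate the Poisson's mean-equals-variance constraint, which is the practical motivation the abstract gives for preferring the GNB and BNB processes.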
The combinatorial structure of beta negative binomial processes (arXiv:1401.0062, 2013)
Dependent Normalized Random Measures
Cited by 3 (0 self)
In this paper we propose two constructions of dependent normalized random measures, a class of nonparametric priors over dependent probability measures. Our constructions, which we call mixed normalized random measures (MNRM) and thinned normalized random measures (TNRM), involve (respectively) weighting and thinning parts of a shared underlying Poisson process before combining them together. We show that both MNRM and TNRM are marginally normalized random measures, resulting in well-understood theoretical properties. We develop marginal and slice samplers for both models, the latter necessary for inference in TNRM. In time-varying topic modeling experiments, both models exhibit superior performance over related dependent models such as the hierarchical Dirichlet process and the spatial normalized Gamma process.
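The weighting-and-thinning idea can be illustrated with a toy completely random measure: draw a Poisson number of atoms with random locations and masses, then form one dependent measure by rescaling the masses and another by keeping each atom independently with probability q. This is only a schematic sketch under arbitrary choices (the rate, the weight 0.3, and q are made up), not the MNRM/TNRM construction itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared Poisson process on [0, 1]: Poisson(rate) many atoms,
# uniform locations, exponential masses.
rate, q = 200.0, 0.5
n_atoms = rng.poisson(rate)
atoms = rng.uniform(0.0, 1.0, size=n_atoms)      # atom locations
masses = rng.exponential(1.0, size=n_atoms)      # atom masses

weighted = 0.3 * masses                # MNRM-style: reweight every atom
keep = rng.uniform(size=n_atoms) < q   # TNRM-style: thin atoms w.p. q
thinned = masses[keep]

assert thinned.size <= n_atoms
assert np.isclose(weighted.sum(), 0.3 * masses.sum())
```

Independent thinning of a Poisson process with probability q yields another Poisson process with rate q times the original, which is the classical fact behind the thinned measure remaining marginally a normalized random measure.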
CENTRAL LIMIT THEOREMS FOR AN INDIAN BUFFET MODEL WITH RANDOM WEIGHTS
Cited by 1 (1 self)
Abstract. The three-parameter Indian buffet process is generalized. The possibly different roles played by customers are taken into account by suitable (random) weights. Various limit theorems are proved for such a generalized Indian buffet process. Let Ln be the number of dishes tried by the first n customers, and let Kn = (1/n) ∑_{i=1}^{n} Ki, where Ki is the number of dishes tried by customer i. The asymptotic distributions of Ln and Kn, suitably centered and scaled, are obtained. The convergence turns out to be stable (and not only in distribution). As a particular case, the results apply to the standard (i.e., non-generalized) Indian buffet process.
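For the standard one-parameter Indian buffet process, Ln is easy to simulate: customer i samples each existing dish k with probability m_k/i (m_k being the number of previous customers who took it) and then tries a Poisson(α/i) number of new dishes, so E[Ln] = α ∑_{i=1}^{n} 1/i. A minimal simulation check (the function name and parameter values are illustrative, not from the paper):

```python
import numpy as np

def ibp_num_dishes(alpha, n, rng=None):
    """Simulate the one-parameter IBP; return L_n, the total number
    of dishes tried by the first n customers."""
    rng = np.random.default_rng(rng)
    dish_counts = []                      # m_k for each dish seen so far
    for i in range(1, n + 1):
        for k, m in enumerate(dish_counts):
            if rng.random() < m / i:      # take an existing dish
                dish_counts[k] = m + 1
        # then try Poisson(alpha / i) brand-new dishes
        dish_counts.extend([1] * rng.poisson(alpha / i))
    return len(dish_counts)

alpha, n = 5.0, 200
L = np.mean([ibp_num_dishes(alpha, n, rng=s) for s in range(50)])
H_n = np.sum(1.0 / np.arange(1, n + 1))
# E[L_n] = alpha * H_n, so the sample mean should land nearby.
assert abs(L - alpha * H_n) < 0.2 * alpha * H_n
```

Since Ln is a sum of independent Poisson(α/i) contributions, it is Poisson with mean α H_n, growing like α log n; the paper's point is that weighting customers changes these asymptotics, and the centered-and-scaled limits must be re-derived.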