Results 1–10 of 36
A tutorial on Bayesian nonparametric models
 Journal of Mathematical Psychology
"... A key problem in statistical modeling is model selection, how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describ ..."
Abstract

Cited by 39 (8 self)
A key problem in statistical modeling is model selection: how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describe Bayesian nonparametric methods, a class of methods that sidesteps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
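The data-determined complexity the tutorial describes can be illustrated with a draw from the Chinese Restaurant Process, the clustering prior underlying Dirichlet process mixtures. The sketch below is illustrative only (the function name, seed, and concentration value are ours); its point is that the number of clusters is a property of the draw, not a parameter chosen in advance:

```python
import random

def crp_sample(n, alpha, rng=None):
    """Draw cluster assignments for n points from a Chinese Restaurant
    Process with concentration alpha. The number of clusters is not
    fixed in advance; it grows (logarithmically, in expectation) with n."""
    rng = rng or random.Random(0)
    assignments = []
    counts = []  # counts[k] = number of points already in cluster k
    for i in range(n):
        # Join existing cluster k with probability counts[k] / (i + alpha),
        # or open a new cluster with probability alpha / (i + alpha).
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)   # a brand-new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp_sample(100, alpha=2.0, rng=random.Random(42))
# The realised number of clusters varies from draw to draw.
print(len(set(labels)))
```

Larger `alpha` yields more clusters on average, but no draw ever commits to a fixed number up front.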
Max-Margin Nonparametric Latent Feature Models for Link Prediction
"... We present a maxmargin nonparametric latent feature relational model, which unites the ideas of maxmargin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction and automatically infer the unknown latent social dimension. By minimizing a hingeloss usi ..."
Abstract

Cited by 21 (9 self)
We present a max-margin nonparametric latent feature relational model, which unites the ideas of max-margin learning and Bayesian nonparametrics to discover discriminative latent features for link prediction and automatically infer the unknown latent social dimension. By minimizing a hinge-loss using the linear expectation operator, we can perform posterior inference efficiently without dealing with a highly nonlinear link likelihood function; by using a fully-Bayesian formulation, we can avoid tuning regularization constants. Experimental results on real datasets appear to demonstrate the benefits inherited from max-margin learning and fully-Bayesian nonparametric inference.
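The expectation-operator trick mentioned in the abstract can be sketched in a few lines: the hinge loss is applied to the posterior mean of the discriminant score rather than averaged through a nonlinear link likelihood. This is a toy illustration, not the paper's model; the names and values are ours:

```python
def expected_hinge(y, e_score, margin=1.0):
    """Hinge loss evaluated on the posterior *expectation* of the
    discriminant score: max(0, margin - y * E[f]). Because expectation
    is linear, E[f] is cheap to compute, and the loss never has to be
    averaged through a nonlinear link likelihood. y is a link label
    in {-1, +1}; e_score stands in for E[f] under the posterior."""
    return max(0.0, margin - y * e_score)

print(expected_hinge(+1, 2.5))   # confidently correct: 0.0
print(expected_hinge(-1, 2.5))   # confidently wrong: 3.5
```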
Infinite Latent SVM for Classification and Multitask Learning
"... Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indir ..."
Abstract

Cited by 21 (12 self)
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.
Dependent Indian Buffet Processes
"... Latent variable models represent hidden structure in observational data. To account for the distribution of the observational data changing over time, space or some other covariate, we need generalizations of latent variable models that explicitly capture this dependency on the covariate. A variety ..."
Abstract

Cited by 14 (3 self)
Latent variable models represent hidden structure in observational data. To account for the distribution of the observational data changing over time, space or some other covariate, we need generalizations of latent variable models that explicitly capture this dependency on the covariate. A variety of such generalizations has been proposed for latent variable models based on the Dirichlet process. We address dependency on covariates in binary latent feature models, by introducing a dependent Indian buffet process. The model generates, for each value of the covariate, a binary random matrix with an unbounded number of columns. Evolution of the binary matrices over the covariate set is controlled by a hierarchical Gaussian process model. The choice of covariance functions controls the dependence structure and exchangeability properties of the model. We derive a Markov chain Monte Carlo sampling algorithm for Bayesian inference, and provide experiments on both synthetic and real-world data. The experimental results show that explicit modeling of dependencies significantly improves accuracy of predictions.
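The binary random matrices with an unbounded number of columns that this model generates come, at a single covariate value, from the standard Indian Buffet Process. A minimal sketch of one IBP draw follows (without the covariate dependence, which the paper adds via hierarchical Gaussian processes; function names are ours):

```python
import math
import random

def knuth_poisson(lam, rng):
    """Sample Poisson(lam) via Knuth's product-of-uniforms method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def ibp_sample(n, alpha, rng=None):
    """One draw of a binary feature matrix Z (n rows, an unbounded
    number of columns) from the Indian Buffet Process: customer i
    takes existing dish k with probability m_k / i, then samples
    Poisson(alpha / i) brand-new dishes."""
    rng = rng or random.Random(0)
    dish_counts = []  # m_k: how many earlier customers took dish k
    rows = []
    for i in range(1, n + 1):
        row = [1 if rng.random() < m / i else 0 for m in dish_counts]
        new = knuth_poisson(alpha / i, rng)
        row += [1] * new
        dish_counts = [m + z for m, z in zip(dish_counts, row)] + [1] * new
        rows.append(row)
    K = len(dish_counts)  # number of features this particular draw produced
    return [r + [0] * (K - len(r)) for r in rows]  # pad early rows to K columns
```

The dependent IBP of the paper replaces this single exchangeable draw with one matrix per covariate value, coupled through a Gaussian process.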
Large Scale Nonparametric Bayesian Inference: Data Parallelisation in the Indian Buffet Process
"... Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the highdimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of ..."
Abstract

Cited by 11 (2 self)
Nonparametric Bayesian models provide a framework for flexible probabilistic modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process (IBP), which allows data points to have an unbounded number of sparse latent features. Our novel MCMC sampler divides a large dataset between multiple processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models, scales to datasets orders of magnitude larger than have previously been possible.
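The message-passing pattern the abstract describes, where each processor scores only its own shard and the master combines scalar messages into a global quantity, can be sketched with a toy model. This is illustrative, not the paper's IBP sampler: the Gaussian likelihood, shard layout, and use of threads in place of separate processors are all our simplifications:

```python
import math
from concurrent.futures import ThreadPoolExecutor

MU, SIGMA = 0.0, 1.0  # toy model parameters shared by every worker

def shard_loglik(shard):
    """Log-likelihood of one data shard under a toy N(MU, SIGMA) model.
    The scalar it returns is the whole 'message' a worker sends back."""
    const = -math.log(SIGMA * math.sqrt(2.0 * math.pi))
    return sum(const - 0.5 * ((x - MU) / SIGMA) ** 2 for x in shard)

def global_loglik(data, nworkers=4):
    """Split the data across workers and sum their messages. Because
    points are conditionally independent given the parameters, the
    global log-likelihood is exactly the sum of the per-shard ones."""
    shards = [data[i::nworkers] for i in range(nworkers)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return sum(pool.map(shard_loglik, shards))
```

The same additive-message structure is what lets the paper's sampler evaluate global likelihoods without moving raw data between processors.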
Variational Inference for Stick-Breaking Beta Process Priors
"... We present a variational Bayesian inference algorithm for the stickbreaking construction of the beta process. We derive an alternate representation of the beta process that is amenable to variational inference, and present a bound relating the truncated beta process to its infinite counterpart. We ..."
Abstract

Cited by 9 (3 self)
We present a variational Bayesian inference algorithm for the stick-breaking construction of the beta process. We derive an alternate representation of the beta process that is amenable to variational inference, and present a bound relating the truncated beta process to its infinite counterpart. We assess performance on two matrix factorization problems, using a non-negative factorization model and a linear-Gaussian model.
Stick-Breaking Beta Processes and the Poisson Process
"... We show that the stickbreaking construction of the beta process due to Paisley et al. (2010) can be obtained from the characterization of the beta process as a Poisson process. Specifically, we show that the mean measure of the underlying Poisson process is equal to that of the beta process. We use ..."
Abstract

Cited by 8 (5 self)
We show that the stick-breaking construction of the beta process due to Paisley et al. (2010) can be obtained from the characterization of the beta process as a Poisson process. Specifically, we show that the mean measure of the underlying Poisson process is equal to that of the beta process. We use this underlying representation to derive error bounds on truncated beta processes that are tighter than those in the literature. We also develop a new MCMC inference algorithm for beta processes, based in part on our new Poisson process construction.
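The stick-breaking construction of Paisley et al. (2010) referred to above can be sketched directly: round i contributes a Poisson(γ) number of atoms, each weighted by a product of independent Beta(1, α) sticks. This is a truncated, illustrative draw under that reading of the construction; the function and variable names are ours:

```python
import math
import random

def knuth_poisson(lam, rng):
    """Sample Poisson(lam) via Knuth's product-of-uniforms method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def bp_stick_breaking(alpha, gamma, rounds, rng):
    """Truncated draw of beta process atom weights: round i contributes
    C_i ~ Poisson(gamma) atoms, each with weight
    V^(i) * prod_{l < i} (1 - V^(l)), all V ~ Beta(1, alpha) independent.
    Every weight lies in (0, 1), and later rounds shrink geometrically,
    which is what makes truncation error bounds possible."""
    weights = []
    for i in range(1, rounds + 1):
        for _ in range(knuth_poisson(gamma, rng)):
            sticks = [rng.betavariate(1.0, alpha) for _ in range(i)]
            w = sticks[-1]
            for v in sticks[:-1]:
                w *= 1.0 - v
            weights.append(w)
    return weights
```

The paper's contribution is to recover exactly this construction from the Poisson process characterization and to tighten the truncation bounds it admits.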
Flexible modeling of latent task structures in multitask learning
In Proceedings of the International Conference on Machine Learning, 2012
"... Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the “right ” latent task struc ..."
Abstract

Cited by 7 (0 self)
Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the “right” latent task structure should be learned in a data-driven manner. We present a flexible, nonparametric Bayesian model that posits a mixture of factor analyzers structure on the tasks. The nonparametric aspect makes the model expressive enough to subsume many existing models of latent task structures (e.g., mean-regularized tasks, clustered tasks, low-rank or linear/nonlinear subspace assumptions on tasks, etc.). Moreover, it can also learn more general task structures, addressing the shortcomings of such models. We present a variational inference algorithm for our model. Experimental results on synthetic and real-world datasets, on both regression and classification problems, demonstrate the effectiveness of the proposed method.
Nonparametric max-margin matrix factorization for collaborative prediction
In Advances in Neural Information Processing Systems 25, 2012
"... We present a probabilistic formulation of maxmargin matrix factorization and build accordingly a nonparametric Bayesian model which automatically resolves the unknown number of latent factors. Our work demonstrates a successful example that integrates Bayesian nonparametrics and maxmargin learning ..."
Abstract

Cited by 7 (1 self)
We present a probabilistic formulation of max-margin matrix factorization and accordingly build a nonparametric Bayesian model which automatically resolves the unknown number of latent factors. Our work demonstrates a successful example that integrates Bayesian nonparametrics and max-margin learning, which are conventionally two separate paradigms enjoying complementary advantages. We develop an efficient variational algorithm for posterior inference, and our extensive empirical studies on the large-scale MovieLens and EachMovie datasets appear to justify the aforementioned dual advantages.