Results 1–10 of 495
Infinite Mixtures of Gaussian Process Experts
 In Advances in Neural Information Processing Systems 14, 2001
"... We present an extension to the Mixture of Experts (ME) model, where the individual experts are Gaussian Process (GP) regression models. Using an input-dependent adaptation of the Dirichlet Process, we implement a gating network for an infinite number of Experts. Inference in this model may be do ..."
Cited by 111 (6 self)
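The Dirichlet-process machinery behind such a gating network can be illustrated with a short stick-breaking sketch. This is a generic, truncated construction of Dirichlet process mixture weights, not code from the paper; the concentration parameter alpha and the truncation level are arbitrary choices:

```python
import numpy as np

def stick_breaking_weights(alpha, n_components, rng):
    """Truncated stick-breaking construction of Dirichlet process weights.

    Each weight is a Beta(1, alpha) fraction of the stick remaining after
    the previous breaks; the last piece absorbs the truncated remainder.
    """
    betas = rng.beta(1.0, alpha, size=n_components)
    # length of stick remaining before each break: 1, (1-b1), (1-b1)(1-b2), ...
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining
    weights[-1] += 1.0 - weights.sum()  # absorb truncation error
    return weights

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, n_components=50, rng=rng)
```

A smaller alpha concentrates mass on the first few components, so only a handful of experts are effectively active for finite data.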
Infinite Mixtures of Trees
 In ICML, 2007
"... Finite mixtures of tree-structured distributions have been shown to be efficient and effective in modeling multivariate distributions. Using Dirichlet processes, we extend this approach to allow countably many tree-structured mixture components. The resulting Bayesian framework allows us to deal with ..."
Cited by 3 (0 self)
Flexible Priors for Infinite Mixture Models
"... Most infinite mixture models in the current literature are based on the Dirichlet process prior. This prior on partitions implies a very specific (a priori) distribution on cluster sizes. A slightly more general prior known as the Pitman-Yor process prior generalizes this to a two-parameter family. ..."
Cited by 4 (0 self)
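The two-parameter family mentioned here is easy to simulate. The sketch below samples a partition from the Pitman-Yor Chinese restaurant process; it is a generic illustration (the values of n, alpha, and the discount d are arbitrary), with d = 0 recovering the Dirichlet process special case:

```python
import numpy as np

def pitman_yor_partition(n, alpha, d, rng):
    """Sample cluster sizes for n items from the Pitman-Yor CRP.

    Requires 0 <= d < 1 and alpha > -d. With d = 0 this is the
    one-parameter Dirichlet process prior; d > 0 yields heavier,
    power-law tails in the cluster-size distribution.
    """
    counts = []  # current cluster sizes
    for _ in range(n):
        k = len(counts)
        # P(join cluster j) ∝ counts[j] - d ; P(new cluster) ∝ alpha + d*k
        probs = np.array([c - d for c in counts] + [alpha + d * k])
        probs /= probs.sum()
        j = rng.choice(k + 1, p=probs)
        if j == k:
            counts.append(1)   # open a new cluster
        else:
            counts[j] += 1
    return counts

rng = np.random.default_rng(0)
sizes = pitman_yor_partition(n=500, alpha=1.0, d=0.5, rng=rng)
```

Comparing runs with d = 0 and d = 0.5 makes the a priori cluster-size difference the abstract refers to directly visible.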
An alternative infinite mixture of Gaussian process experts
 In Advances in Neural Information Processing Systems
"... We present an infinite mixture model in which each component comprises a multivariate Gaussian distribution over an input space, and a Gaussian Process model over an output space. Our model is neatly able to deal with non-stationary covariance functions, discontinuities, multimodality and overlappin ..."
Cited by 41 (1 self)
Denoising with infinite mixture of Gaussians
 In European Signal Processing Conference (EUSIPCO), 2005
"... We show in this paper how an Infinite Mixture of Gaussians model can be used to estimate/denoise non-Gaussian data with local linear estimators based on the Wiener filter. The decomposition of the data in Gaussian components is straightforwardly computed with the Gaussian Transform, previously deriv ..."
Cited by 3 (0 self)
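The "local linear estimators based on the Wiener filter" can be illustrated generically. The sketch below computes the MMSE estimate of a signal under a zero-mean Gaussian-mixture prior as a responsibility-weighted sum of per-component Wiener gains; it is an assumption-laden stand-in for intuition, not the paper's Gaussian Transform algorithm, and all mixture parameters here are made up:

```python
import numpy as np

def gmm_wiener_denoise(y, weights, variances, noise_var):
    """MMSE estimate of x from y = x + noise, noise ~ N(0, noise_var),
    where x has a zero-mean Gaussian-mixture prior.

    The estimate is a posterior-responsibility-weighted combination of
    the per-component Wiener gains s2 / (s2 + noise_var).
    """
    y = np.asarray(y, dtype=float)[:, None]            # (N, 1)
    w = np.asarray(weights, dtype=float)[None, :]      # (1, K)
    s2 = np.asarray(variances, dtype=float)[None, :]   # (1, K)
    total = s2 + noise_var                             # marginal variance of y per component
    # posterior responsibility of each component given y (all means are zero)
    log_r = np.log(w) - 0.5 * np.log(2 * np.pi * total) - 0.5 * y**2 / total
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    gains = s2 / total                                 # per-component Wiener gain
    return (r * gains * y).sum(axis=1)

y = np.array([1.0, -2.0, 0.5])
xhat = gmm_wiener_denoise(y, weights=[1.0], variances=[4.0], noise_var=1.0)
```

With a single component the estimator reduces exactly to the classical Wiener shrinkage s2/(s2 + noise_var) · y.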
Prior Information Based Bayesian Infinite Mixture Model
"... Unsupervised learning methods have been tremendously successful in extracting knowledge from genomics data generated by high-throughput experimental assays. However, analysis of each dataset in isolation without incorporating potentially informative prior knowledge is limiting the utility of such procedures. Here we present a novel probabilistic model and computational algorithm for semi-supervised learning from genomics data. The probabilistic model is an extension of the Bayesian semi-parametric Gaussian Infinite Mixture Model (GIMM) and training of model parameters is performed using Markov Chain ..."
Expectation propagation for infinite mixtures (Extended abstract)
, 2003
"... This note describes a method for approximate inference in infinite models that uses deterministic Expectation Propagation instead of Monte Carlo. For infinite Gaussian mixtures, the algorithm provides cluster parameter estimates, cluster memberships, and model evidence. Model parameters, such as the ..."
Infinite mixtures for multi-relational categorical data
"... Large relational datasets are prevalent in many fields. We propose an unsupervised component model for relational data, i.e., for heterogeneous collections of categorical co-occurrences. The co-occurrences can be dyadic or n-adic, and over the same or different categorical variables. Graphs are a special case, as collections of dyadic co-occurrences (edges) over a set of vertices. The model is simple, with only one latent variable. This allows wide applicability as long as a global latent component solution is preferred, and the generative process fits the application. Estimation with a collapsed Gibbs sampler is straightforward. We demonstrate the model with graphs enriched with multinomial vertex properties, or more concretely, with two sets of scientific papers, with both content and citation information available. ..."
Cited by 2 (1 self)
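The claim that estimation with a collapsed Gibbs sampler is straightforward can be sketched for the simplest related setting: a finite Bayesian mixture of single categorical observations with the Dirichlet priors integrated out. This is a generic illustration under assumed symmetric hyperparameters alpha and beta, not the paper's multi-relational model:

```python
import numpy as np

def collapsed_gibbs_step(X, z, K, V, alpha, beta, rng):
    """One sweep of collapsed Gibbs sampling for a Bayesian mixture of
    categorical observations (Dirichlet priors integrated out).

    X: (N,) int array of category labels in [0, V)
    z: (N,) int array of component assignments in [0, K), updated in place
    """
    nk = np.bincount(z, minlength=K).astype(float)  # items per component
    nkv = np.zeros((K, V))                          # per-component category counts
    np.add.at(nkv, (z, X), 1.0)
    for i in range(len(X)):
        k, v = z[i], X[i]
        nk[k] -= 1; nkv[k, v] -= 1                  # remove item i from counts
        # p(z_i = k | rest) ∝ (n_k + alpha) * (n_{k,v} + beta) / (n_k + V*beta)
        p = (nk + alpha) * (nkv[:, v] + beta) / (nk + V * beta)
        z[i] = rng.choice(K, p=p / p.sum())
        nk[z[i]] += 1; nkv[z[i], v] += 1            # add it back under the new label
    return z

# toy run on random categorical data (sizes and hyperparameters are arbitrary)
rng = np.random.default_rng(1)
X = rng.integers(0, 4, size=200)   # N=200 observations over V=4 categories
z = rng.integers(0, 3, size=200)   # random initial assignments to K=3 components
for _ in range(5):
    z = collapsed_gibbs_step(X, z, K=3, V=4, alpha=1.0, beta=0.5, rng=rng)
```

Because the component parameters are marginalized out, each sweep only touches count arrays, which is what makes this family of samplers so simple to implement.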