Results 11–20 of 229
Hierarchical Gaussian process latent variable models
 In International Conference on Machine Learning, 2007
Abstract

Cited by 39 (8 self)
The Gaussian process latent variable model (GPLVM) is a powerful approach for probabilistic modelling of high-dimensional data through dimensionality reduction. In this paper we extend the GPLVM through hierarchies. A hierarchical model (such as a tree) allows us to express conditional independencies in the data as well as the manifold structure. We first introduce Gaussian process hierarchies through a simple dynamical model; we then extend the approach to a more complex hierarchy which is applied to the visualisation of human motion data sets.
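The objective at the heart of the GPLVM can be sketched in a few lines. The following is a minimal illustration with NumPy, not the authors' code: each output dimension is modelled as an independent GP over latent coordinates `X`, and the names (`rbf_kernel`, `gplvm_nll`) and random `X` standing in for the optimised embedding are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel on latent points X (N x Q).
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gplvm_nll(X, Y, noise=0.1):
    # Negative log marginal likelihood of a GPLVM: each of the D output
    # dimensions of Y is an independent GP over the shared latent X.
    N, D = Y.shape
    K = rbf_kernel(X) + noise * np.eye(N)
    logdet = np.linalg.slogdet(K)[1]
    Kinv = np.linalg.inv(K)
    return 0.5 * D * logdet + 0.5 * np.trace(Kinv @ Y @ Y.T) \
        + 0.5 * N * D * np.log(2 * np.pi)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))   # latent coordinates (optimised in practice)
Y = rng.normal(size=(20, 5))   # observed higher-dimensional data
print(gplvm_nll(X, Y))
```

In practice `X` (and the kernel hyperparameters) are found by minimising this objective; the hierarchical extension in the paper stacks such models so that latent variables at one layer become outputs of a GP at the layer above.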
Topologically-constrained latent variable models
 In ICML ’08: Proceedings of the 25th International Conference on Machine Learning, 2008
The Joint Manifold Model for Semi-supervised Multi-valued Regression
Abstract

Cited by 37 (4 self)
Many computer vision tasks may be expressed as the problem of learning a mapping between image space and a parameter space. For example, in human body pose estimation, recent research has directly modelled the mapping from image features (z) to joint angles (θ). Fitting such models requires training data in the form of labelled (z, θ) pairs, from which are learned the conditional densities p(θ|z). Inference is then simple: given test image features z, the conditional p(θ|z) is immediately computed. However, large amounts of training data are required to fit the models, particularly in the case where the spaces are high-dimensional. We show how the use of unlabelled data – samples from the marginal distributions p(z) and p(θ) – may be used to improve fitting. This is valuable because it is often significantly easier to obtain unlabelled than labelled samples. We use a Gaussian process latent variable model to learn the mapping from a shared latent low-dimensional manifold to the feature and parameter spaces. This extends existing approaches to (a) use unlabelled data, and (b) represent one-to-many mappings. Experiments on synthetic and real problems demonstrate how the use of unlabelled data improves over existing techniques. In our comparisons, we include existing approaches that are explicitly semi-supervised as well as those which implicitly make use of unlabelled examples.
Stochastic backpropagation and approximate inference in deep generative models
 In International Conference on Machine Learning, 2014
Abstract

Cited by 37 (4 self)
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation – rules for gradient backpropagation through stochastic variables – and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
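The core trick that makes gradients flow "through" a stochastic variable can be sketched very compactly. This is a minimal NumPy illustration of the reparameterisation idea, not the paper's algorithm: the function name `grad_mu_mc` and the toy objective f(z) = z² are assumptions chosen so the exact gradient is known.

```python
import numpy as np

# Reparameterise z ~ N(mu, sigma^2) as z = mu + sigma * eps with
# eps ~ N(0, 1), so the randomness no longer depends on the parameters
# and gradients w.r.t. mu and sigma become ordinary expectations.
def grad_mu_mc(mu, sigma, f_grad, n_samples=100_000, seed=0):
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    z = mu + sigma * eps
    # d/dmu E[f(z)] = E[f'(z) * dz/dmu] = E[f'(z)], since dz/dmu = 1.
    return np.mean(f_grad(z))

mu, sigma = 1.5, 0.5
# For f(z) = z^2 we know E[f(z)] = mu^2 + sigma^2, so d/dmu = 2*mu.
g = grad_mu_mc(mu, sigma, lambda z: 2 * z)
print(g)  # Monte Carlo estimate, close to 2*mu = 3.0
```

In a deep generative model the same estimator is applied with f replaced by the variational lower bound, giving low-variance gradients for both the generative and recognition parameters.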
Multifactor Gaussian Process Models for Style-Content Separation
Abstract

Cited by 35 (5 self)
We introduce models for density estimation with multiple, hidden, continuous factors. In particular, we propose a generalization of multilinear models using nonlinear basis functions. By marginalizing over the weights, we obtain a multifactor form of the Gaussian process latent variable model. In this model, each factor is kernelized independently, allowing nonlinear mappings from any particular factor to the data. We learn models for human locomotion data, in which each pose is generated by factors representing the person’s identity, gait, and the current state of motion. We demonstrate our approach using time-series prediction, and by synthesizing novel animation from the model.
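"Each factor is kernelized independently" amounts to taking an elementwise product of per-factor kernels. The sketch below, a NumPy illustration rather than the authors' implementation, uses hypothetical factor matrices (`identity`, `gait`, `state`) and checks that the product is still a valid (positive semi-definite) covariance.

```python
import numpy as np

def rbf(A):
    # Squared-exponential kernel over one factor's values (rows of A).
    sq = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2 * A @ A.T
    return np.exp(-0.5 * sq)

rng = np.random.default_rng(1)
N = 15
identity = rng.normal(size=(N, 2))  # hypothetical per-pose factor values
gait = rng.normal(size=(N, 2))
state = rng.normal(size=(N, 1))

# Each factor gets its own kernel; the multifactor covariance is their
# elementwise (Hadamard) product, which is again a valid kernel by the
# Schur product theorem.
K = rbf(identity) * rbf(gait) * rbf(state)
print(np.linalg.eigvalsh(K).min())  # >= 0 up to numerical error
```

Holding two factors fixed and varying the third then traces out style-consistent variation, which is what enables the style-content separation demonstrated in the paper.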
Analysis of population structure: a unifying framework and novel methods based on sparse factor analysis
 In PLoS Genetics, 6:e1001117, 2010
Abstract

Cited by 34 (4 self)
We consider the statistical analysis of population structure using genetic data. We show how the two most widely used approaches to modeling population structure, admixture-based models and principal components analysis (PCA), can be viewed within a single unifying framework of matrix factorization. Specifically, they can both be interpreted as approximating an observed genotype matrix by a product of two lower-rank matrices, but with different constraints or prior distributions on these lower-rank matrices. This opens the door to a large range of possible approaches to analyzing population structure, by considering other constraints or priors. In this paper, we introduce one such novel approach, based on sparse factor analysis (SFA). We investigate the effects of the different types of constraint in several real and simulated data sets. We find that SFA produces similar results to admixture-based models when the samples are descended from a few well-differentiated ancestral populations, and can recapitulate the results of PCA when the population structure is more "continuous," as in isolation-by-distance models.
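The "product of two lower-rank matrices" view can be made concrete with a toy example. This NumPy sketch is illustrative only: the planted data (admixture proportions times ancestral allele frequencies) and the function name `best_rank_k` are assumptions, and truncated SVD stands in for the unconstrained (PCA-style) end of the framework, not for SFA itself.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "genotype" matrix: 50 individuals x 200 markers, generated from
# K = 3 planted ancestral populations plus a little noise.
K_true = 3
L = rng.dirichlet(np.ones(K_true), size=50)   # admixture proportions (rows on the simplex)
F = rng.uniform(0, 1, size=(K_true, 200))     # ancestral allele frequencies
G = L @ F + 0.01 * rng.normal(size=(50, 200))

def best_rank_k(G, k):
    # PCA-style factorization: truncated SVD is the best rank-k
    # approximation in Frobenius norm, with no constraints on the factors.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

err1 = np.linalg.norm(G - best_rank_k(G, 1))
err3 = np.linalg.norm(G - best_rank_k(G, 3))
print(err1, err3)  # rank 3 captures the planted structure far better
```

Admixture models fit the same factorization but constrain the rows of the left factor to the probability simplex; SFA instead places sparsity-inducing priors on the factors.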
Dynamical Binary Latent Variable Models for 3D Human Pose Tracking – Supplementary Material
Latent Force Models
Abstract

Cited by 31 (6 self)
Purely data-driven approaches to machine learning present difficulties when data is scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave open the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modelling with a physical model of the system. We show how different, physically inspired kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from computational biology, motion capture and geostatistics.
Beta-negative binomial process and Poisson factor analysis
 In AISTATS, 2012
Abstract

Cited by 28 (15 self)
A beta-negative binomial (BNB) process is proposed, leading to a beta-gamma-Poisson process, which may be viewed as a “multi-scoop” generalization of the beta-Bernoulli process. The BNB process is augmented into a beta-gamma-gamma-Poisson hierarchical structure, and applied as a nonparametric Bayesian prior for an infinite Poisson factor analysis model. A finite approximation for the beta process Lévy random measure is constructed for convenient implementation. Efficient MCMC computations are performed with data augmentation and marginalization techniques. Encouraging results are shown on document count matrix factorization.
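The underlying likelihood, a count matrix modelled as Poisson(W H), can be illustrated without the paper's nonparametric prior or MCMC machinery. The NumPy sketch below uses the classic multiplicative updates for the Poisson/KL objective as a simple maximum-likelihood stand-in; the function name `poisson_nmf` and the toy data are assumptions, and this is not the BNB inference scheme.

```python
import numpy as np

def poisson_nmf(V, k, iters=200, seed=0):
    # Point estimate for Poisson factor analysis: V ~ Poisson(W @ H).
    # Multiplicative updates for the KL (Poisson) objective; a simple
    # non-Bayesian stand-in for the paper's MCMC inference.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.5, 1.5, (n, k))
    H = rng.uniform(0.5, 1.5, (k, m))
    for _ in range(iters):
        R = V / (W @ H)
        W *= (R @ H.T) / H.sum(axis=1)
        R = V / (W @ H)
        H *= (W.T @ R) / W.sum(axis=0)[:, None]
    return W, H

rng = np.random.default_rng(3)
# Toy "document x term" count matrix with planted rank-1 rates.
V = rng.poisson(rng.uniform(1, 5, (30, 1)) * rng.uniform(0.2, 1.0, (1, 40)))
W, H = poisson_nmf(V.astype(float), k=2)
print(np.abs(V - W @ H).mean())
```

The BNB construction replaces the fixed `k` with a nonparametric prior that lets the number of active factors be inferred from the data.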
Learning GP-BayesFilters via Gaussian process latent variable models
 In Proceedings of Robotics: Science and Systems (RSS), 2009
Abstract

Cited by 26 (4 self)
GP-BayesFilters are a general framework for integrating Gaussian process prediction and observation models into Bayesian filtering techniques, including particle filters and extended and unscented Kalman filters. GP-BayesFilters learn non-parametric filter models from training data containing sequences of control inputs, observations, and ground truth states. The need for ground truth states limits the applicability of GP-BayesFilters to systems for which the ground truth can be estimated without prohibitive overhead. In this paper we introduce GPBF-LEARN, a framework for training GP-BayesFilters without any ground truth states. Our approach extends Gaussian process latent variable models to the setting of dynamical robotics systems. We show how weak labels for the ground truth states can be incorporated into the GPBF-LEARN framework. The approach is evaluated using a difficult tracking task, namely tracking a slot car based on IMU measurements only.