Results 1–10 of 17
Logistic Stick-Breaking Process
"... Editor: A logistic stickbreaking process (LSBP) is proposed for nonparametric clustering of general spatially or temporallydependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functi ..."
Abstract

Cited by 10 (5 self)
A logistic stick-breaking process (LSBP) is proposed for nonparametric clustering of general spatially or temporally dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries.
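The stick-breaking mechanism described in the abstract — logistic regression functions generating the sticks, so that mixture weights vary smoothly with location — can be sketched numerically. This is a minimal illustration of the weight construction only, not the paper's full model; the function and parameter names are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lsbp_weights(X, W, b):
    """Mixture weights at covariates X under a logistic stick-breaking
    construction (illustrative sketch, not the paper's full model).

    X : (n, d) covariates, e.g. spatial or temporal locations
    W : (K-1, d) logistic regression coefficients, one row per stick
    b : (K-1,) intercepts
    Returns an (n, K) array of weights summing to 1 across components.
    """
    n = X.shape[0]
    K = W.shape[0] + 1
    v = sigmoid(X @ W.T + b)          # (n, K-1) stick proportions in (0, 1)
    pi = np.empty((n, K))
    remaining = np.ones(n)
    for k in range(K - 1):
        pi[:, k] = v[:, k] * remaining
        remaining = remaining * (1.0 - v[:, k])
    pi[:, K - 1] = remaining          # last component absorbs the rest
    return pi
```

Because each stick depends on the covariates through a logistic regression, nearby locations receive similar weight vectors, which is what encourages contiguous segments.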
Convergence of Latent Mixing Measures in Finite and Infinite Mixture Models
, 2013
"... This paper studies convergence behavior of latent mixing measures that arise in finite and infinite mixture models, using transportation distances (i.e., Wasserstein metrics). The relationship between Wasserstein distances on the space of mixing measures and fdivergence functionals such as Hellinge ..."
Abstract

Cited by 8 (0 self)
This paper studies convergence behavior of latent mixing measures that arise in finite and infinite mixture models, using transportation distances (i.e., Wasserstein metrics). The relationship between Wasserstein distances on the space of mixing measures and f-divergence functionals such as Hellinger and Kullback–Leibler distances on the space of mixture distributions is investigated in detail using various identifiability conditions. Convergence in Wasserstein metrics for discrete measures implies convergence of individual atoms that provide support for the measures, thereby providing a natural interpretation of convergence of clusters in clustering applications where mixture models are typically employed. Convergence rates of posterior distributions for latent mixing measures are established, for both finite mixtures of multivariate distributions and infinite mixtures based on the Dirichlet process.
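For discrete mixing measures on the real line, the first-order Wasserstein distance the abstract refers to reduces to an integral of the absolute difference of the two CDFs. A minimal sketch of that one-dimensional special case (function and argument names are illustrative):

```python
import numpy as np

def w1_discrete(atoms_p, w_p, atoms_q, w_q):
    """W1 distance between two discrete probability measures on R,
    computed as the integral of |F_P - F_Q| over the merged support
    (a sketch of the 1-D case only)."""
    pts = np.sort(np.concatenate([atoms_p, atoms_q]))
    deltas = np.diff(pts)

    def cdf(atoms, w, t):
        # CDF of a discrete measure evaluated at each point of t
        return np.array([w[atoms <= ti].sum() for ti in t])

    Fp = cdf(np.asarray(atoms_p), np.asarray(w_p), pts[:-1])
    Fq = cdf(np.asarray(atoms_q), np.asarray(w_q), pts[:-1])
    return float(np.sum(np.abs(Fp - Fq) * deltas))
```

For example, the distance between a point mass at 0 and a point mass at 1 is exactly 1, reflecting that W1 tracks how far atoms must move — the property that makes atom convergence interpretable as cluster convergence.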
Dimension adaptability of Gaussian process models with variable selection and projection
, 2011
"... ar ..."
(Show Context)
Multivariate kernel partition process mixtures
 Statistica Sinica
, 2010
"... Abstract: Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate ..."
Abstract

Cited by 3 (1 self)
Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors.
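The standard (unordered, single-allocation) Chinese restaurant process that the KPP generalises can be sampled in a few lines. This sketch shows only that textbook construction, as background for the abstract; the ordered, kernel-weighted version proposed in the paper is more involved.

```python
import random

def crp(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process
    with concentration alpha (standard construction, not the paper's KPP).
    Returns a list of cluster labels, numbered in order of appearance."""
    rng = random.Random(seed)
    counts = []            # customers seated at each table
    labels = []
    for i in range(n):
        # new table with probability alpha / (i + alpha),
        # else an existing table with probability proportional to its size
        u = rng.random() * (i + alpha)
        if u < alpha:
            labels.append(len(counts))
            counts.append(1)
        else:
            u -= alpha
            for t, c in enumerate(counts):
                if u < c:
                    labels.append(t)
                    counts[t] += 1
                    break
                u -= c
    return labels
```

In the multivariate setting of the paper, each parameter gets its own allocation sequence, coupled through the kernels, rather than the single shared allocation sampled here.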
Adaptive Gaussian Predictive Process Approximation
"... We address the issue of knots selection for Gaussian predictive process methodology. Predictive process approximation provides an effective solution to the cubic order computational complexity of Gaussian process models. This approximation crucially depends on a set of points, called knots, at which ..."
Abstract

Cited by 3 (2 self)
We address the issue of knot selection for Gaussian predictive process methodology. Predictive process approximation provides an effective solution to the cubic-order computational complexity of Gaussian process models. This approximation crucially depends on a set of points, called knots, at which the original process is retained, while the rest is approximated via a deterministic extrapolation. Knots should be few in number to keep the computational complexity low, but should provide good coverage of the process domain to limit approximation error. We present theoretical calculations to show that coverage must be judged by the canonical metric of the Gaussian process. This necessitates a knot selection algorithm that automatically adapts to changes in the canonical metric effected by changes in the parameter values controlling the Gaussian process covariance function. We present such an algorithm, employing an incomplete Cholesky factorization with pivoting and dynamic stopping. Although these concepts already exist in the literature, our contribution lies in unifying them into a fast algorithm and in using computable error bounds to finesse implementation of the predictive process approximation. The resulting adaptive predictive process offers substantial automation of Gaussian process model fitting, especially for Bayesian applications where thousands of values of the covariance parameters are to be explored.
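The incomplete Cholesky factorization with pivoting and dynamic stopping mentioned in the abstract is a standard building block: each pivot picks the location with the largest residual variance, and the residual trace gives a computable error bound. A minimal sketch (not the authors' implementation; the tolerance rule and names are illustrative):

```python
import numpy as np

def pivoted_ichol(K, tol=1e-8, max_rank=None):
    """Incomplete Cholesky with pivoting and a dynamic stopping rule.

    K : (n, n) covariance matrix of the GP at candidate locations.
    Returns (L, pivots) with K ~= L @ L.T; the selected pivots play the
    role of knots.  Stops once the largest residual variance drops
    below tol, since the PSD residual's entries are bounded by it.
    """
    K = np.asarray(K, dtype=float)
    n = K.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.diag(K).copy()                 # residual diagonal (variances)
    L = np.zeros((n, max_rank))
    pivots = []
    for m in range(max_rank):
        j = int(np.argmax(d))             # greedy pivot: worst-covered point
        if d[j] <= tol:                   # dynamic stop on the error bound
            break
        L[:, m] = (K[:, j] - L[:, :m] @ L[j, :m]) / np.sqrt(d[j])
        d -= L[:, m] ** 2                 # update residual variances
        d[j] = 0.0
        pivots.append(j)
    return L[:, :len(pivots)], pivots
```

Because the pivot choice depends on the residual variances, re-running the factorization under new covariance parameters automatically moves the knots with the canonical metric, which is the adaptivity the abstract describes.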
Multivariate Spatial Nonparametric Modelling via Kernel Processes Mixing
 Mimeo Series #2622, Statistics Department, NCSU. http://www.stat.ncsu.edu/library/mimeo.html
, 2008
"... In this paper we develop a nonparametric multivariate spatial model that avoids specifying a Gaussian distribution for spatial random effects. Our nonparametric model extends the stickbreaking (SB) prior of Sethuraman (1994), which is frequently used in Bayesian modelling to capture uncertainty in ..."
Abstract

Cited by 1 (1 self)
In this paper we develop a nonparametric multivariate spatial model that avoids specifying a Gaussian distribution for spatial random effects. Our nonparametric model extends the stick-breaking (SB) prior of Sethuraman (1994), which is frequently used in Bayesian modelling to capture uncertainty in the parametric form of an outcome. The stick-breaking prior is extended here to the spatial setting by assigning each location a different, unknown distribution, and smoothing the distributions in space with a series of space-dependent kernel functions that have a space-varying bandwidth parameter. This results in a flexible nonstationary spatial model, as different kernel functions lead to different relationships between the distributions at nearby locations.
Marginally Specified Priors for Nonparametric Bayesian Estimation
, 2012
"... Prior specification for nonparametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. Realistically, a statistician is unlikely to have informed opinions about all aspects of such a parameter, but may have real infor ..."
Abstract

Cited by 1 (1 self)
Prior specification for nonparametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. Realistically, a statistician is unlikely to have informed opinions about all aspects of such a parameter, but may have real information about functionals of the parameter, such as the population mean or variance. This article proposes a new framework for nonparametric Bayes inference in which the prior distribution for a possibly infinite-dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a nonparametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard nonparametric prior distributions in common use, and inherit the large support of the standard priors upon which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard nonparametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modeling of high-dimensional sparse contingency tables. Key Words: contingency tables; density estimation; Dirichlet process mixture model; multivariate
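The decomposition idea — an informative prior on a functional, with the rest left nonparametric — can be illustrated by drawing a truncated stick-breaking measure and recentring its atoms so that its mean functional equals a specified value. This is a toy sketch under invented names and a truncated Dirichlet process, not the paper's construction.

```python
import numpy as np

def mean_pinned_dp(mu, alpha=1.0, K=50, seed=0):
    """Draw a truncated stick-breaking (DP-like) random measure on R whose
    mean functional is pinned to mu (illustrative sketch only).

    Returns (atoms, weights) with weights summing to 1 and
    sum(atoms * weights) == mu.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)                      # stick proportions
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    w = w / w.sum()                                       # renormalise truncation
    atoms = rng.normal(size=K)                            # base-measure draws
    atoms = atoms - atoms @ w + mu                        # recentre mean to mu
    return atoms, w
```

In the framework the abstract describes, mu would itself carry an informative prior, while the conditional draw of the measure given mu remains nonparametric.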
Dynamic density estimation with diffusive Dirichlet mixtures
, 2014
"... We introduce a new class of nonparametric prior distributions on the space of continuously varying densities, induced by Dirichlet process mixtures which diffuse in time. These select timeindexed random functions without jumps, whose sections are continuous or discrete distributions depending on th ..."
Abstract

Cited by 1 (1 self)
We introduce a new class of nonparametric prior distributions on the space of continuously varying densities, induced by Dirichlet process mixtures which diffuse in time. These select time-indexed random functions without jumps, whose sections are continuous or discrete distributions depending on the choice of kernel. The construction exploits the widely used stick-breaking representation of the Dirichlet process and induces the time dependence by replacing the stick-breaking components with one-dimensional Wright–Fisher diffusions. These features combine appealing properties of the model, inherited from the Wright–Fisher diffusions and the Dirichlet mixture structure, with great flexibility and tractability for posterior computation. The construction can be easily extended to multi-parameter GEM marginal states, which include for example the Pitman–Yor process. A full inferential strategy is detailed and illustrated on simulated and real data.
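A single Wright–Fisher stick of the kind used above can be simulated with a simple Euler–Maruyama scheme, since the diffusion stays in [0, 1]. This is a numerical sketch under assumed mutation parameters and a naive boundary clamp, not the paper's inference machinery.

```python
import numpy as np

def wright_fisher_path(x0, theta1, theta2, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama path of a one-dimensional Wright-Fisher diffusion
        dX = 0.5 * (theta1 * (1 - X) - theta2 * X) dt + sqrt(X * (1 - X)) dW,
    the kind of process that replaces a Beta stick in a time-dependent
    stick-breaking construction (numerical sketch; names illustrative)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = 0.5 * (theta1 * (1.0 - x[i]) - theta2 * x[i])
        diff = np.sqrt(max(x[i] * (1.0 - x[i]), 0.0))
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = min(max(x[i + 1], 0.0), 1.0)   # clamp to [0, 1]
    return x
```

Running one such path per stick and applying the usual stick-breaking product at each time gives a density that evolves continuously, which is the "diffusing mixture" the abstract describes.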
Reduced rank regression in Bayesian FDA
, 2010
"... In functional data analysis (FDA) it is of interest to generalize techniques of multivariate analysis like canonical correlation analysis or regression to functions which are often observed with noise. In the proposed Bayesian approach to FDA two tools are combined: (i) a special DemmlerReinsch lik ..."
Abstract
In functional data analysis (FDA) it is of interest to generalize techniques of multivariate analysis, like canonical correlation analysis or regression, to functions which are often observed with noise. In the proposed Bayesian approach to FDA two tools are combined: (i) a special Demmler–Reinsch-like basis of interpolation splines to represent functions parsimoniously and flexibly; (ii) latent variable models for probabilistic principal components analysis or canonical correlation analysis of the corresponding coefficients. In this way partial curves and non-Gaussian measurement error schemes can be handled. Bayesian inference is based on a variational algorithm, so that computations are straightforward and fast, corresponding to an idea of FDA as a toolbox for exploratory data analysis. The performance of the approach is illustrated with synthetic and real data sets. As detailed in the table of contents, the paper has a “vertical” structure corresponding to topics in data analysis and defining the sequence of chapters, and a “horizontal” structure referring to the most important special cases of the proposed model: FCCA, functional regression, scalar prediction, classification. Within chapters the special cases are addressed in turn, so that a reader interested only in a special application of the model may skip the other sections.
Multivariate Spatial Nonparametric Modelling via Kernel Processes Mixing
 doi:http://dx.doi.org/10.5705/ss.2011.172
"... Abstract: In this paper we develop a nonparametric multivariate spatial model that avoids specifying a Gaussian distribution for spatial random effects. Our nonparametric model extends the stickbreaking (SB) prior of Sethuraman (1994), that is frequently used in Bayesian modelling to capture uncer ..."
Abstract
In this paper we develop a nonparametric multivariate spatial model that avoids specifying a Gaussian distribution for spatial random effects. Our nonparametric model extends the stick-breaking (SB) prior of Sethuraman (1994), which is frequently used in Bayesian modelling to capture uncertainty in the parametric form of an outcome. The stick-breaking prior is extended here to the spatial setting by assigning each location a different, unknown distribution, and smoothing the distributions in space with a series of space-dependent kernel functions that have a space-varying bandwidth parameter. This results in a flexible nonstationary spatial model, as different kernel functions lead to different relationships between the distributions at nearby locations. This approach is the first to allow both the probabilities and the point-mass values of the SB prior to depend on space. Thus there is no need for replications, and we obtain a continuous process in the limit. We extend the model to the multivariate setting by having, for each process, a different kernel function, but sharing the locations of the kernel knots across the different processes. The resulting covariance for the multivariate process is in general nonstationary and nonseparable. The modelling framework proposed here is also computationally efficient because it avoids inverting large matrices and calculating determinants. We study the theoretical properties of the proposed multivariate spatial process. The methods are illustrated using simulated examples and an air pollution application to model components of fine particulate matter.
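The kernel-weighted stick-breaking probabilities described above can be sketched as follows: each Beta stick is scaled by a kernel centred at its knot, so locations near a knot allocate more mass to that component. The Gaussian-shaped kernel and all names here are illustrative choices, not necessarily the paper's specification.

```python
import numpy as np

def spatial_sb_weights(s, knots, bandwidths, v):
    """Stick-breaking probabilities at spatial location s, with each stick
    attenuated by a Gaussian-shaped kernel at its knot (illustrative sketch).

    s : (d,) location;  knots : (K, d) kernel centres;
    bandwidths : (K,) space-varying bandwidths;  v : (K,) Beta stick draws.
    Returns (pi, leftover): component weights at s and the unallocated mass.
    """
    s = np.asarray(s, dtype=float)
    dist2 = np.sum((knots - s) ** 2, axis=1)
    kern = np.exp(-dist2 / (2.0 * bandwidths ** 2))   # values in (0, 1]
    p = v * kern                                      # location-specific sticks
    pi = p * np.concatenate([[1.0], np.cumprod(1.0 - p[:-1])])
    return pi, 1.0 - pi.sum()
```

Because the kernels decay with distance, two nearby locations share similar weight vectors while distant ones need not, which is how the construction yields a nonstationary process without replications.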