Results 1–10 of 49
Online Variational Inference for the Hierarchical Dirichlet Process
Abstract

Cited by 62 (7 self)
The hierarchical Dirichlet process (HDP) is a Bayesian nonparametric model that can be used to model mixed-membership data with a potentially infinite number of components. It has been applied widely in probabilistic topic modeling, where the data are documents and the components are distributions of terms that reflect recurring patterns (or “topics”) in the collection. Given a document collection, posterior inference is used to determine the number of topics needed and to characterize their distributions. One limitation of HDP analysis is that existing posterior inference algorithms require multiple passes through all the data; these algorithms are intractable for very large-scale applications. We propose an online variational inference algorithm for the HDP, an algorithm that is easily applicable to massive and streaming data. Our algorithm is significantly faster than traditional inference algorithms for the HDP, and lets us analyze much larger data sets. We illustrate the approach on two large collections of text, showing improved performance over online LDA, the finite counterpart to the HDP topic model.
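The key ingredient of online variational inference is a Robbins-Monro step that interpolates between the current variational parameters and a noisy per-document estimate of the batch update. A minimal sketch of that step, with illustrative (not the paper's) parameter names:

```python
import numpy as np

def online_natural_gradient_step(lam, doc_stats, t, num_docs,
                                 tau0=1.0, kappa=0.6, eta=0.01):
    """One stochastic variational update (hypothetical parameter names).

    lam       : current variational parameter vector for one topic
    doc_stats : sufficient statistics estimated from a single document
    num_docs  : corpus size, used to rescale the single-document estimate
    """
    # Robbins-Monro step size: decays so that sum(rho) diverges while
    # sum(rho**2) converges, as stochastic approximation requires.
    rho = (tau0 + t) ** (-kappa)
    # Noisy estimate of what the batch update would set lam to if the
    # whole corpus looked like this one document.
    lam_hat = eta + num_docs * doc_stats
    # Interpolate toward the noisy target (a natural-gradient step).
    return (1.0 - rho) * lam + rho * lam_hat
```

In the actual HDP algorithm, updates of this shape are applied jointly to the variational parameters of the topic distributions and of the stick-breaking proportions; this sketch shows only the generic step-size structure.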
Tree-Structured Stick Breaking for Hierarchical Data
Abstract

Cited by 50 (8 self)
Many data are naturally modeled by an unobserved hierarchical structure. In this paper we propose a flexible nonparametric prior over unknown data hierarchies. The approach uses nested stick-breaking processes to allow for trees of unbounded width and depth, where data can live at any node and are infinitely exchangeable. One can view our model as providing infinite mixtures where the components have a dependency structure corresponding to an evolutionary diffusion down a tree. By using a stick-breaking approach, we can apply Markov chain Monte Carlo methods based on slice sampling to perform Bayesian inference and simulate from the posterior distribution on trees. We apply our method to hierarchical clustering of images and topic modeling of text data.
The infinite tree
In Association for Computational Linguistics (ACL)
, 2007
Abstract

Cited by 45 (0 self)
Historically, unsupervised learning techniques have lacked a principled technique for selecting the number of unseen components. Research into nonparametric priors, such as the Dirichlet process, has instead enabled the use of infinite models, in which the number of hidden categories is not fixed, but can grow with the amount of training data. Here we develop the infinite tree, a new infinite model capable of representing recursive branching structure over an arbitrarily large set of hidden categories. Specifically, we develop three infinite tree models, each of which enforces different independence assumptions, and for each model we define a simple direct assignment sampling inference procedure. We demonstrate the utility of our models by doing unsupervised learning of part-of-speech tags from treebank dependency skeleton structure, achieving an accuracy of 75.34%, and by doing unsupervised splitting of part-of-speech tags, which increases the accuracy of a generative dependency parser from 85.11% to 87.35%.
Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process
Abstract

Cited by 32 (2 self)
We present a nonparametric hierarchical Bayesian model of document collections that decouples sparsity and smoothness in the component distributions (i.e., the “topics”). In the sparse topic model (sparseTM), each topic is represented by a bank of selector variables that determine which terms appear in the topic. Thus each topic is associated with a subset of the vocabulary, and topic smoothness is modeled on this subset. We develop an efficient Gibbs sampler for the sparseTM that includes a general-purpose method for sampling from a Dirichlet mixture with a combinatorial number of components. We demonstrate the sparseTM on four real-world datasets. Compared to traditional approaches, the empirical results show that sparseTMs give better predictive performance with simpler inferred models.
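The decoupling is easiest to see on the generative side: Bernoulli selector variables pick the topic's support, and Dirichlet smoothing applies only on that subset. A sketch with assumed parameter names (this is the generative draw, not the paper's Gibbs sampler):

```python
import numpy as np

def draw_sparse_topic(vocab_size, selector_prob, smoothing, rng):
    """Draw one topic: Bernoulli selectors choose which terms may
    appear; a Dirichlet on the selected subset controls smoothness."""
    selectors = rng.random(vocab_size) < selector_prob
    if not selectors.any():
        # Guarantee a nonempty support so the Dirichlet is well defined.
        selectors[rng.integers(vocab_size)] = True
    topic = np.zeros(vocab_size)
    topic[selectors] = rng.dirichlet(np.full(selectors.sum(), smoothing))
    return selectors, topic

rng = np.random.default_rng(1)
sel, topic = draw_sparse_topic(vocab_size=20, selector_prob=0.3,
                               smoothing=0.5, rng=rng)
```

Sparsity (which terms get any mass at all) is governed by `selector_prob`, while smoothness of the nonzero entries is governed by `smoothing`, independently.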
The branching process with logistic growth
 Ann. Appl. Probab.
, 2005
Abstract

Cited by 32 (8 self)
In order to model random density dependence in population dynamics, we construct the random analogue of the well-known logistic process in the branching-process framework. This density dependence corresponds to intraspecific competition pressure, which is ubiquitous in ecology, and translates mathematically into a quadratic death rate. The logistic branching process, or LB-process, can thus be seen as (the mass of) a fragmentation process (corresponding to the branching mechanism) combined with constant coagulation rate (the death rate is proportional to the number of possible coalescing pairs). In the continuous state-space setting, the LB-process is a time-changed (in Lamperti’s fashion) Ornstein–Uhlenbeck type process. We obtain similar results for both constructions: when natural deaths do not occur, the LB-process converges to a specified distribution; otherwise, it goes extinct a.s. In the latter case, we provide the expectation and the Laplace transform of the absorption time,
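A discrete-state version of this process, with linear birth rate and a quadratic competition term in the death rate, can be simulated directly with a Gillespie loop; the parameter names below are illustrative:

```python
import random

def simulate_logistic_branching(n0, birth, death, competition, t_max, rng):
    """Gillespie simulation of a birth-death chain with birth rate
    birth*n and death rate death*n + competition*n*(n-1)/2 (the
    quadratic term models pairwise competition). Returns the
    population size at time t_max, or 0 on extinction."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        up = birth * n
        down = death * n + competition * n * (n - 1) / 2.0
        # Time to the next event is exponential with the total rate.
        t += rng.expovariate(up + down)
        if rng.random() < up / (up + down):
            n += 1
        else:
            n -= 1
    return n
```

Consistent with the a.s. extinction result quoted above, runs with a positive natural death rate hit 0 quickly, since the quadratic competition term caps how large the population can get.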
A phase transition in the random transposition random walk
 Pages 17–26 in Banderier and Krattenthaler (2003)
, 2003
Abstract

Cited by 22 (9 self)
Our work is motivated by Bourque and Pevzner’s (2002) simulation study of the effectiveness of the parsimony method in studying genome rearrangement, and leads to a surprising result about the random transposition walk on the group of permutations on n elements. Consider this walk in continuous time starting at the identity and let D_t be the minimum number of transpositions needed to go back to the identity from the location at time t. D_t undergoes a phase transition: the distance D_{cn/2} ∼ u(c)n, where u is an explicit function satisfying u(c) = c/2 for c ≤ 1 and u(c) < c/2 for c > 1. In addition, we describe the fluctuations of D_{cn/2} about its mean in each of the three regimes (subcritical, critical and supercritical). The techniques used involve viewing the cycles in the random permutation as a coagulation-fragmentation process and relating the behavior to the Erdős-Rényi random graph model.
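The distance D_t can be computed from cycle structure: the minimum number of transpositions taking a permutation back to the identity is n minus its number of cycles. A minimal sketch of the walk and the distance:

```python
import random

def transposition_distance(perm):
    """Minimum transpositions needed to sort perm: n minus the number
    of cycles in its cycle decomposition."""
    n, seen, cycles = len(perm), [False] * len(perm), 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return n - cycles

def walk_distance(n, steps, rng):
    """Apply `steps` uniformly random transpositions to the identity
    and return the distance back to the identity."""
    perm = list(range(n))
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return transposition_distance(perm)
```

In the subcritical regime (c < 1) almost every transposition touches fresh elements, so nearly every step increases the distance by 1 and D_{cn/2} ≈ cn/2, matching u(c) = c/2.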
The Poisson-Dirichlet law is the unique invariant distribution for uniform Split-Merge Transformations
 Ann. Probab.
, 2004
Abstract

Cited by 15 (0 self)
We consider a Markov chain on the space of (countable) partitions of the interval [0, 1], obtained first by size-biased sampling twice (allowing repetitions) and then merging the parts (if the sampled parts are distinct) or splitting the part uniformly (if the same part was sampled twice). We prove a conjecture of Vershik stating that the Poisson-Dirichlet law with parameter θ = 1 is the unique invariant distribution for this Markov chain. Our proof uses a combination of probabilistic, combinatorial, and representation-theoretic arguments.
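The chain is easy to simulate on the list of part sizes; a minimal sketch of one split-merge move:

```python
import random

def split_merge_step(parts, rng):
    """One split-merge move on a partition of [0, 1] stored as a list
    of part sizes. Sample two parts size-biasedly (with replacement):
    merge them if distinct, otherwise split the sampled part at a
    uniform point."""
    # Size-biased sampling: pick a part with probability equal to its
    # size (the sizes sum to 1).
    i = rng.choices(range(len(parts)), weights=parts)[0]
    j = rng.choices(range(len(parts)), weights=parts)[0]
    if i != j:
        merged = parts[i] + parts[j]
        parts = [p for k, p in enumerate(parts) if k not in (i, j)]
        parts.append(merged)
    else:
        u = rng.random()
        x = parts.pop(i)
        parts.extend([u * x, (1.0 - u) * x])
    return parts
```

Started from the trivial partition `[1.0]`, the theorem says the sorted part sizes converge in distribution to the Poisson-Dirichlet law with θ = 1; each move preserves the total mass of 1.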
Exchangeable fragmentation-coalescence processes and their equilibrium distribution
 Electron. J. Probab.
, 2004
Abstract

Cited by 13 (1 self)
We define and study a family of Markov processes, with state space the compact set of all partitions of N, that we call exchangeable fragmentation-coalescence processes. They can be viewed as a combination of exchangeable fragmentation as defined by Bertoin and of homogeneous coalescence as defined by Pitman and Schweinsberg or by Möhle and Sagitov. We show that they admit a unique invariant probability measure and we study some properties of their paths and of their equilibrium measure. Key words: fragmentation, coalescence, invariant distribution. AMS Classification: 60J25, 60G09.
Asymptotics of certain coagulation-fragmentation processes and invariant Poisson-Dirichlet measures
, 2001
Abstract

Cited by 11 (2 self)
We consider Markov chains on the space of (countable) partitions of the interval [0, 1], obtained first by size-biased sampling twice (allowing repetitions) and then merging the parts with probability βm (if the sampled parts are distinct) or splitting the part with probability βs according to a law σ (if the same part was sampled twice). We characterize invariant probability measures for such chains. In particular, if σ is the uniform measure then the Poisson-Dirichlet law is an invariant probability measure, and it is unique within a suitably defined class of “analytic” invariant measures. We also derive transience and recurrence criteria for these chains.