Limits of normalized quadrangulations. The Brownian map
 Ann. Probab.
, 2004
"... Consider qn a random pointed quadrangulation chosen equally likely among the pointed quadrangulations with n faces. In this paper, we show that, when n goes to +∞, qn suitably normalized converges weakly in a certain sense to a random limit object, which is continuous and compact, and that we name t ..."
Abstract

Cited by 35 (0 self)
Consider q_n, a pointed quadrangulation chosen uniformly at random among the pointed quadrangulations with n faces. In this paper, we show that, as n goes to +∞, q_n suitably normalized converges weakly in a certain sense to a random limit object, which is continuous and compact, and which we name the Brownian map. The same result is shown for a model of rooted quadrangulations and for some models of rooted quadrangulations with random edge lengths. A metric space of rooted (resp. pointed) abstract maps that contains the model of discrete rooted (resp. pointed) quadrangulations and the Brownian map is defined. The weak convergences hold in these metric spaces.
Invariance principles for random bipartite planar maps
 Ann. Probab.
, 2007
"... Random planar maps are considered in the physics literature as the discrete counterpart of random surfaces. It is conjectured that properly rescaled random planar maps, when conditioned to have a large number of faces, should converge to a limiting surface whose law does not depend, up to scaling fa ..."
Abstract

Cited by 34 (7 self)
Random planar maps are considered in the physics literature as the discrete counterpart of random surfaces. It is conjectured that properly rescaled random planar maps, when conditioned to have a large number of faces, should converge to a limiting surface whose law does not depend, up to scaling factors, on details of the class of maps that are sampled. Previous works on the topic, starting with Chassaing and Schaeffer, have shown that the radius of a random quadrangulation with n faces, that is, the maximal graph distance on such a quadrangulation to a fixed reference point, converges in distribution once rescaled by n^{1/4} to the diameter of the Brownian snake, up to a scaling constant. Using a bijection due to Bouttier, Di Francesco and Guitter between bipartite planar maps and a family of labeled trees, we show the corresponding invariance principle for a class of random maps that follow a Boltzmann distribution putting weight q_k on faces of degree 2k: the radius of such maps, conditioned to have n faces (or n vertices) and under a criticality assumption, converges in distribution once rescaled by n^{1/4} to a scaled version of the diameter of the Brownian snake. Convergence results for the so-called profile of maps are also provided. The convergence of rescaled bipartite maps to the Brownian map, in the sense introduced by Marckert and Mokkadem, is also shown. The proofs of these results rely on a new invariance principle for two-type spatial Galton–Watson trees.
Bayesian nonparametric estimator derived from conditional Gibbs structures
 Ann. Appl. Probab.
, 2008
"... We consider discrete nonparametric priors which induce Gibbstype exchangeable random partitions and investigate their posterior behavior in detail. In particular, we deduce conditional distributions and the corresponding Bayesian nonparametric estimators, which can be readily exploited for predictin ..."
Abstract

Cited by 34 (8 self)
We consider discrete nonparametric priors which induce Gibbs-type exchangeable random partitions and investigate their posterior behavior in detail. In particular, we deduce conditional distributions and the corresponding Bayesian nonparametric estimators, which can be readily exploited for predicting various features of additional samples. The results provide useful tools for genomic applications where prediction of future outcomes is required.
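The prediction problem described in this abstract can be made concrete in its simplest case. The sketch below assumes the two-parameter Poisson-Dirichlet special case of a Gibbs-type prior; the function name and default parameter values are illustrative, not the paper's notation:

```python
def prob_new_species(n, k, alpha=0.5, theta=1.0):
    """Predictive probability that the (n+1)-st observation is a new species
    under a two-parameter Poisson-Dirichlet PD(alpha, theta) prior, a special
    case of the Gibbs-type priors studied in the paper.  Here n is the sample
    size and k the number of distinct species observed so far; the default
    parameter values are illustrative.  The remaining mass is shared among the
    observed species: species j with n_j representatives is rediscovered with
    probability (n_j - alpha) / (theta + n)."""
    return (theta + k * alpha) / (theta + n)
```

For example, after n = 10 draws containing k = 3 distinct species, the next draw reveals a new species with probability (θ + 3α)/(θ + 10).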
A Bayesian interpretation of interpolated Kneser-Ney
, 2006
"... Interpolated KneserNey is one of the best smoothing methods for ngram language models. Previous explanations for its superiority have been based on intuitive and empirical justifications of specific properties of the method. We propose a novel interpretation of interpolated KneserNey as approxima ..."
Abstract

Cited by 33 (3 self)
Interpolated Kneser-Ney is one of the best smoothing methods for n-gram language models. Previous explanations for its superiority have been based on intuitive and empirical justifications of specific properties of the method. We propose a novel interpretation of interpolated Kneser-Ney as approximate inference in a hierarchical Bayesian model consisting of Pitman-Yor processes. As opposed to past explanations, our interpretation can recover exactly the formulation of interpolated Kneser-Ney, and performs better than interpolated Kneser-Ney when a better inference procedure is used.
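For reference, the classical interpolated Kneser-Ney recursion that this paper reinterprets can be sketched for bigrams with a single fixed discount d (the toy corpus and the discount value in the usage below are illustrative assumptions):

```python
from collections import Counter

def interpolated_kneser_ney(bigrams, d=0.75):
    """Interpolated Kneser-Ney bigram model with one absolute discount d.
    Returns a function p(word, context) giving P(word | context), where the
    lower-order distribution is the type-based continuation probability."""
    bigram_counts = Counter(bigrams)
    context_counts = Counter(w1 for w1, w2 in bigrams)
    bigram_types = set(bigrams)
    continuation = Counter(w2 for w1, w2 in bigram_types)  # N1+(. w)
    followers = Counter(w1 for w1, w2 in bigram_types)     # N1+(w .)
    total_types = len(bigram_types)                        # N1+(. .)

    def p(word, context):
        p_cont = continuation[word] / total_types  # continuation probability
        c = context_counts[context]
        if c == 0:
            return p_cont  # unseen context: back off entirely
        lam = d * followers[context] / c           # interpolation weight
        return max(bigram_counts[(context, word)] - d, 0) / c + lam * p_cont
    return p
```

For instance, on the bigrams of "the cat sat on the mat", p("cat", "the") combines the discounted bigram count with the continuation weight, and the distribution over seen continuations of "the" sums to one.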
The structure of the allelic partition of the total population for Galton-Watson processes with neutral mutations
"... We consider a (sub)critical Galton–Watson process with neutral mutations (infinite alleles model), and decompose the entire population into clusters of individuals carrying the same allele. We specify the law of this allelic partition in terms of the distribution of the number of clonechildren and ..."
Abstract

Cited by 30 (4 self)
We consider a (sub)critical Galton–Watson process with neutral mutations (infinite alleles model), and decompose the entire population into clusters of individuals carrying the same allele. We specify the law of this allelic partition in terms of the distribution of the number of clone-children and the number of mutant-children of a typical individual. The approach combines an extension of the Harris representation of Galton–Watson processes and a version of the ballot theorem. Some limit theorems related to the distribution of the allelic partition are also given.
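The model in this abstract is easy to simulate. The sketch below assumes Poisson offspring and illustrative parameter values (neither is specified by the paper): each child is independently a mutant carrying a brand-new allele with probability mutation_prob, and the allelic partition is the multiset of cluster sizes.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's simple Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def allelic_partition(offspring_mean=0.9, mutation_prob=0.2, seed=1):
    """Simulate a subcritical Galton-Watson tree under the infinite alleles
    model and return the allelic cluster sizes in decreasing order.  The
    Poisson offspring law and the parameter values are illustrative
    assumptions, not the paper's."""
    rng = random.Random(seed)
    next_allele = 0
    cluster_sizes = {0: 1}   # the founder carries allele 0
    pending = [0]            # alleles of individuals whose children are not yet drawn
    while pending:
        allele = pending.pop()
        for _ in range(poisson(rng, offspring_mean)):
            if rng.random() < mutation_prob:
                next_allele += 1
                child = next_allele   # mutant-child starts a new cluster
            else:
                child = allele        # clone-child joins its parent's cluster
            cluster_sizes[child] = cluster_sizes.get(child, 0) + 1
            pending.append(child)
    return sorted(cluster_sizes.values(), reverse=True)
```

Subcriticality (offspring mean below 1) guarantees the total population, and hence the simulation, is finite almost surely.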
Asymptotic laws for compositions derived from transformed subordinators
 Ann. Probab.
, 2006
"... A random composition of n appears when the points of a random closed set ˜ R ⊂ [0, 1] are used to separate into blocks n points sampled from the uniform distribution. We study the number of parts Kn of this composition and other related functionals under the assumption that ˜ R = φ(S•) where (St, t ..."
Abstract

Cited by 30 (12 self)
A random composition of n appears when the points of a random closed set R̃ ⊂ [0, 1] are used to separate into blocks n points sampled from the uniform distribution. We study the number of parts K_n of this composition and other related functionals under the assumption that R̃ = φ(S•), where (S_t, t ≥ 0) is a subordinator and φ: [0, ∞] → [0, 1] is a diffeomorphism. We derive the asymptotics of K_n when the Lévy measure of the subordinator is regularly varying at 0 with positive index. Specialising to the case of the exponential function φ(x) = 1 − e^{−x}, we establish a connection between the asymptotics of K_n and the exponential functional of the subordinator.
Clustering Using Objective Functions and Stochastic Search
, 2007
"... Summary. A new approach to clustering multivariate data, based on a multilevel linear mixed model, is proposed. A key feature of the model is that observations from the same cluster are correlated, because they share clusterspecific random effects. The inclusion of clusterspecific random effects a ..."
Abstract

Cited by 29 (3 self)
Summary. A new approach to clustering multivariate data, based on a multilevel linear mixed model, is proposed. A key feature of the model is that observations from the same cluster are correlated, because they share cluster-specific random effects. The inclusion of cluster-specific random effects allows parsimonious departure from an assumed base model for cluster mean profiles. This departure is captured statistically via the posterior expectation, or best linear unbiased predictor. One of the parameters in the model is the true underlying partition of the data, and the posterior distribution of this parameter, which is known up to a normalizing constant, is used to cluster the data. The problem of finding partitions with high posterior probability is not amenable to deterministic methods such as the EM algorithm. Thus, we propose a stochastic search algorithm driven by a Markov chain that is a mixture of two Metropolis–Hastings algorithms: one that makes small-scale changes to individual objects and another that performs large-scale moves involving entire clusters. The methodology proposed is fundamentally different from the well-known finite mixture model approach to clustering, which does not explicitly include the partition as a parameter and involves an independent and identically distributed structure.
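The stochastic-search idea can be illustrated with a toy sketch. The score below (negative within-cluster sum of squares minus a per-cluster penalty, known only up to a normalizing constant) is an illustrative stand-in for the paper's mixed-model posterior, and only the small-scale single-object move is implemented; the large-scale whole-cluster moves are omitted for brevity:

```python
import math
import random

def mh_partition_search(data, n_iter=4000, penalty=2.0, seed=0):
    """Metropolis-Hastings search over partitions of 1-d data points,
    returning the highest-scoring partition (as a label vector) visited."""
    rng = random.Random(seed)
    labels = [0] * len(data)   # start with all objects in one cluster

    def log_score(lbls):
        clusters = {}
        for x, l in zip(data, lbls):
            clusters.setdefault(l, []).append(x)
        total = -penalty * len(clusters)           # per-cluster penalty
        for xs in clusters.values():
            m = sum(xs) / len(xs)
            total -= sum((x - m) ** 2 for x in xs)  # within-cluster SSE
        return total

    cur_score = log_score(labels)
    best, best_score = labels[:], cur_score
    for _ in range(n_iter):
        i = rng.randrange(len(data))            # pick one object ...
        proposal = labels[:]
        proposal[i] = rng.randrange(len(data))  # ... and move it to any cluster
        prop_score = log_score(proposal)
        # symmetric proposal, so accept with min(1, exp(score difference))
        if rng.random() < math.exp(min(0.0, prop_score - cur_score)):
            labels, cur_score = proposal, prop_score
            if cur_score > best_score:
                best, best_score = labels[:], cur_score
    return best
```

On well-separated data such as [0.0, 0.1, 0.2, 5.0, 5.1, 5.2], the chain quickly finds the two-cluster partition, since separating the groups improves the score by a large margin.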
Spinal partitions and invariance under re-rooting of continuum random trees
, 2009
"... We develop some theory of spinal decompositions of discrete and continuous fragmentation trees. Specifically, we consider a coarse and a fine spinal integer partition derived from spinal tree decompositions. We prove that for a twoparameter Poisson–Dirichlet family of continuous fragmentation trees ..."
Abstract

Cited by 27 (13 self)
We develop some theory of spinal decompositions of discrete and continuous fragmentation trees. Specifically, we consider a coarse and a fine spinal integer partition derived from spinal tree decompositions. We prove that for a two-parameter Poisson–Dirichlet family of continuous fragmentation trees, including the stable trees of Duquesne and Le Gall, the fine partition is obtained from the coarse one by shattering each of its parts independently, according to the same law. As a second application of spinal decompositions, we prove that among the continuous fragmentation trees, stable trees are the only ones whose distribution is invariant under uniform re-rooting.
Poisson-Kingman Partitions
 Lecture Notes–Monograph Series
, 2002
"... This paper presents some general formulas for random partitions of a finite set derived by Kingman's model of random sampling from an interval partition generated by subintervals whose lengths are the points of a Poisson point process. These lengths can be also interpreted as the jumps of a sub ..."
Abstract

Cited by 27 (3 self)
This paper presents some general formulas for random partitions of a finite set derived by Kingman's model of random sampling from an interval partition generated by subintervals whose lengths are the points of a Poisson point process. These lengths can also be interpreted as the jumps of a subordinator, that is, an increasing process with stationary independent increments. Examples include the two-parameter family of Poisson-Dirichlet models derived from the Poisson process of jumps of a stable subordinator. Applications are made to the random partition generated by the lengths of excursions of a Brownian motion or Brownian bridge conditioned on its local time at zero.
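The two-parameter Poisson-Dirichlet partitions mentioned above have a well-known sequential construction, the (α, θ) Chinese restaurant process, which the sketch below samples from (the parameter values and seed are illustrative):

```python
import random

def crp_partition(n, alpha=0.5, theta=1.0, seed=42):
    """Sample a partition of n items from the two-parameter (alpha, theta)
    Chinese restaurant process, whose block frequencies follow the
    two-parameter Poisson-Dirichlet distribution.  Requires 0 <= alpha < 1
    and theta > -alpha; returns block sizes in decreasing order."""
    rng = random.Random(seed)
    blocks = []                # current block sizes
    for i in range(n):         # i items already seated
        # a new block opens with probability (theta + alpha*k) / (theta + i)
        if rng.random() * (theta + i) < theta + alpha * len(blocks):
            blocks.append(1)
        else:
            # join block j with probability (size_j - alpha) / (theta + i)
            r = rng.random() * sum(s - alpha for s in blocks)
            acc = 0.0
            for j, s in enumerate(blocks):
                acc += s - alpha
                if r < acc:
                    blocks[j] += 1
                    break
            else:
                blocks[-1] += 1  # guard against floating-point rounding
    return sorted(blocks, reverse=True)
```

Averaged over many samples, the number of blocks grows like n^α for α > 0, in contrast with the logarithmic growth of the one-parameter (α = 0) Dirichlet process case.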
An Infinite Latent Attribute Model for Network Data
 In Proceedings of the International Conference on Machine Learning (ICML)
, 2012
"... Latent variable models for network data extract a summary of the relational structure underlying an observed network. The simplest possible models subdivide nodes of the network into clusters; the probability of a link between any two nodes then depends only on their cluster assignment. Currently av ..."
Abstract

Cited by 26 (7 self)
Latent variable models for network data extract a summary of the relational structure underlying an observed network. The simplest possible models subdivide nodes of the network into clusters; the probability of a link between any two nodes then depends only on their cluster assignment. Currently available models can be classified by whether clusters are disjoint or are allowed to overlap. These models can explain a “flat” clustering structure. Hierarchical Bayesian models provide a natural approach to capture more complex dependencies. We propose a model in which objects are characterised by a latent feature vector. Each feature is itself partitioned into disjoint groups (subclusters), corresponding to a second layer of hierarchy. In experimental comparisons, the model achieves significantly improved predictive performance on social and biological link prediction tasks. The results indicate that models with a single-layer hierarchy oversimplify real networks.
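The two-layer structure described in the abstract can be sketched generatively. Everything below is an illustrative simplification: finite feature and subcluster counts, independent feature activations, and a logistic link, rather than the paper's nonparametric model.

```python
import math
import random

def sample_network(n_nodes=20, n_features=3, n_subclusters=2, seed=0):
    """Tiny generative sketch: each node has a binary latent feature vector,
    each active feature assigns the node to one of that feature's subclusters,
    and the link probability between two nodes depends on the subcluster pairs
    of their shared active features.  Returns a directed 0/1 adjacency matrix
    with an empty diagonal.  All sizes and distributions are illustrative."""
    rng = random.Random(seed)
    # per-feature affinity weight for each pair of subclusters
    w = [[[rng.gauss(0, 1) for _ in range(n_subclusters)]
          for _ in range(n_subclusters)] for _ in range(n_features)]
    # binary feature activations and per-feature subcluster assignments
    z = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(n_nodes)]
    c = [[rng.randrange(n_subclusters) for _ in range(n_features)] for _ in range(n_nodes)]

    def link_prob(i, j, bias=-1.0):
        s = bias
        for f in range(n_features):
            if z[i][f] and z[j][f]:          # feature active for both nodes
                s += w[f][c[i][f]][c[j][f]]  # affinity of their subcluster pair
        return 1.0 / (1.0 + math.exp(-s))    # logistic link

    return [[1 if i != j and rng.random() < link_prob(i, j) else 0
             for j in range(n_nodes)] for i in range(n_nodes)]
```

Setting n_features = 1 recovers a flat (sub)clustering model, which is what the paper argues oversimplifies real networks.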