Results 1 - 4 of 4
Latent Dirichlet allocation
Journal of Machine Learning Research, 2003
Abstract

Cited by 2350 (63 self)
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
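The generative process the abstract describes can be sketched as follows. This is a minimal illustration of the smoothed variant (topics themselves drawn from a Dirichlet); the corpus sizes and hyperparameter values are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

def dirichlet(alphas):
    """Draw one Dirichlet sample via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def categorical(probs):
    """Draw one index with the given probabilities."""
    return random.choices(range(len(probs)), weights=probs)[0]

# Illustrative sizes and hyperparameters (assumptions, not from the paper).
n_topics, vocab_size, n_docs, doc_len = 3, 20, 5, 30
alpha = [0.5] * n_topics     # prior over per-document topic mixtures
eta = [0.1] * vocab_size     # prior over per-topic word distributions

# One word distribution per topic.
phi = [dirichlet(eta) for _ in range(n_topics)]

docs = []
for _ in range(n_docs):
    theta = dirichlet(alpha)              # this document's topic mixture
    words = []
    for _ in range(doc_len):
        z = categorical(theta)            # choose a topic for this word
        words.append(categorical(phi[z])) # choose a word from that topic
    docs.append(words)
```

Inference in the paper runs this process in reverse: given only `docs`, the variational EM procedure estimates the topic mixtures and word distributions.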
Stochastic Plans for Robotic Manipulation
1990
Abstract

Cited by 35 (7 self)
Geometric uncertainty is unavoidable when programming robots for physical applications. We propose a stochastic framework for manipulation planning where plans are ranked on the basis of expected cost. That is, we express the desirability of states and actions with a cost function and describe uncertainty with probability distributions. We illustrate the approach with a new design for a programmable parts feeder, a mechanism that orients two-dimensional parts using a sequence of open-loop mechanical motions. We present a planning algorithm that accepts an n-sided polygonal part as input and, in time O(n²), generates a stochastically optimal plan for orienting the part.
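The core ranking step of the framework can be sketched in a few lines: each candidate plan induces a probability distribution over outcomes, each outcome has a cost, and plans are ordered by expected cost. The plan names, outcome probabilities, and costs below are hypothetical examples, not from the paper.

```python
def expected_cost(outcomes):
    """outcomes: list of (probability, cost) pairs for one plan."""
    return sum(p * c for p, c in outcomes)

# Hypothetical manipulation plans with (probability, cost) outcomes.
plans = {
    "tilt-then-push": [(0.9, 1.0), (0.1, 10.0)],  # usually cheap, rare retry
    "grasp-directly": [(0.6, 0.5), (0.4, 8.0)],   # cheaper but fails often
}

# A stochastically optimal planner prefers the lowest expected cost.
ranked = sorted(plans, key=lambda name: expected_cost(plans[name]))
```

Here "tilt-then-push" wins (expected cost 1.9 versus 3.5) even though its best-case cost is higher, which is exactly the kind of trade-off expected-cost ranking captures.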
Means of a Dirichlet process and multiple hypergeometric functions
Ann. Probab., 2004
Abstract

Cited by 15 (6 self)
The Lauricella theory of multiple hypergeometric functions is used to shed some light on certain distributional properties of the mean of a Dirichlet process. This approach leads to several results, which are illustrated here. Among these are a new and more direct procedure for determining the exact form of the distribution of the mean, a correspondence between the distribution of the mean and the parameter of a Dirichlet process, a characterization of the family of Cauchy distributions as the set of the fixed points of this correspondence, and an extension of the Markov–Krein identity. Moreover, an expression of the characteristic function of the mean of a Dirichlet process is obtained by resorting to an integral representation of a confluent form of the fourth Lauricella function. This expression is then employed to prove that the distribution of the mean of a Dirichlet process is symmetric if and only if the parameter of the process is symmetric, and to provide a new expression of the moment generating function of the variance of a Dirichlet process.
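For orientation, the object studied here is the random mean functional of a Dirichlet process; in standard notation (a sketch — the symbols are ours, not necessarily the paper's):

```latex
M_{\alpha} \;=\; \int_{\mathbb{R}} x \, P(\mathrm{d}x),
\qquad P \sim \mathrm{DP}(\alpha),
```

and the symmetry result quoted above says that the law of \(M_{\alpha}\) is symmetric if and only if the base parameter \(\alpha\) is symmetric.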
Eliciting Information from a Large Population
2011
Abstract
An uninformed welfare-maximizing decision maker communicates with a random sample of agents from a large population who have heterogeneous preferences. Every agent's preference is private information to himself. The population distribution of preferences is unknown and messages from the sampled agents are cheap talk. This paper offers a tractable model of communication where the decision maker estimates the distribution of preferences through the messages, and chooses the policy to maximize welfare of the entire population. Information does not aggregate efficiently even if the sample size becomes arbitrarily large, since the sampled agents have an incentive to "exaggerate" their preferences, especially as the sample size becomes larger and each sampled agent has weaker influence on the decision. The quality of communication with each sampled agent may improve as the sample size becomes smaller, and thus we identify the tradeoff between the quality and quantity of communication. We show that, given the same expected prior distribution of population preferences, the decision maker may prefer to sample a smaller number of agents when the prior is weaker.