Results 1 - 10 of 570
Bayesian entropy estimation for countable discrete distributions
- CoRR
, 2013
"... We consider the problem of estimating Shannon’s entropy H from discrete data, in cases where the number of possible symbols is unknown or even countably infinite. The Pitman-Yor process, a generalization of Dirichlet process, provides a tractable prior distribution over the space of countably infini ..."
Cited by 3 (0 self)
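As context for the estimation problem this abstract describes, here is a minimal Python sketch of the closed-form posterior-mean entropy under a symmetric Dirichlet prior on a fixed, finite alphabet (the classical Wolpert-Wolf formula). It is a simpler stand-in, not the paper's Pitman-Yor construction, which is what extends this idea to unknown or countably infinite symbol sets; the function name and the choice alpha=0.5 are illustrative.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_entropy_posterior_mean(counts, alpha=1.0):
    # Posterior-mean Shannon entropy (nats) under a symmetric Dirichlet(alpha)
    # prior on a fixed, finite alphabet (Wolpert-Wolf closed form). This is a
    # simplified stand-in: the paper replaces the Dirichlet with a Pitman-Yor
    # prior to handle an unknown or countably infinite symbol set.
    a = np.asarray(counts, dtype=float) + alpha   # posterior Dirichlet params
    A = a.sum()
    # E[H | data] = psi(A + 1) - sum_i (a_i / A) * psi(a_i + 1)
    return digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))

# Example: posterior-mean entropy for counts over a 4-symbol alphabet.
print(dirichlet_entropy_posterior_mean([12, 7, 3, 1], alpha=0.5))
```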
Stochastic Tracking of 3D Human Figures Using 2D Image Motion
- In European Conference on Computer Vision
, 2000
"... . A probabilistic method for tracking 3D articulated human gures in monocular image sequences is presented. Within a Bayesian framework, we de ne a generative model of image appearance, a robust likelihood function based on image graylevel dierences, and a prior probability distribution over pose an ..."
Abstract
-
Cited by 383 (33 self)
- Add to MetaCart
and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle ltering. The approach extends previous work on parameterized optical ow estimation to exploit a complex 3D
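The posterior propagation described here is the standard bootstrap particle filter. Below is a generic one-step sketch in Python; the `propagate` and `likelihood` callables are placeholders for the paper's motion prior over pose and joint angles and its robust graylevel-difference likelihood, neither of which is reproduced.

```python
import numpy as np

def bootstrap_pf_step(particles, propagate, likelihood, rng):
    # One predict/weight/resample step of a generic bootstrap particle filter:
    # the sample-based posterior propagation the abstract describes. Both
    # callables are placeholders, not the paper's actual model.
    particles = propagate(particles)                 # sample the dynamics prior
    w = likelihood(particles)                        # score against observation
    w = w / w.sum()                                  # normalize importance weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                            # resampled posterior sample set

# Toy 1-D run: particles drift with small noise, observation near x = 1.
rng = np.random.default_rng(0)
parts = rng.normal(size=(500, 1))
drift = lambda x: x + rng.normal(scale=0.1, size=x.shape)
obs_lik = lambda x: np.exp(-0.5 * ((x[:, 0] - 1.0) / 0.3) ** 2)
parts = bootstrap_pf_step(parts, drift, obs_lik, rng)
```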
Bayesian inference, entropy, and the multinomial distribution
, 2007
"... Instead of maximum-likelihood or MAP, Bayesian inference encourages the use of predictive densities and evidence scores. This is illustrated in the context of the multinomial distribution, where predictive estimates are often used but rarely described as Bayesian. By using an entropy approximation t ..."
Cited by 22 (1 self)
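For the multinomial case the abstract refers to, the predictive density and the evidence both have closed forms under a conjugate Dirichlet prior. A small sketch, assuming a symmetric Dirichlet(alpha) prior (the function names are illustrative):

```python
import numpy as np
from scipy.special import gammaln

def predictive_probs(counts, alpha=1.0):
    # Dirichlet-multinomial posterior predictive p(next = i | data):
    # additive smoothing (n_i + alpha) / (N + K * alpha), which is Bayesian
    # averaging over the parameter rather than a MAP/ML plug-in.
    n = np.asarray(counts, dtype=float)
    return (n + alpha) / (n.sum() + alpha * len(n))

def log_evidence(counts, alpha=1.0):
    # Log marginal likelihood of the observed counts under a symmetric
    # Dirichlet(alpha) prior, up to the multinomial coefficient.
    n = np.asarray(counts, dtype=float)
    K = len(n)
    return (gammaln(K * alpha) - gammaln(n.sum() + K * alpha)
            + np.sum(gammaln(n + alpha)) - K * gammaln(alpha))

print(predictive_probs([3, 0, 1]))   # -> [0.571..., 0.143..., 0.286...]
```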
Bayesian and Quasi-Bayesian Estimators for Mutual Information from Discrete Data
, 2013
"... entropy ..."
Estimating Renyi Entropy of Discrete Distributions
"... It was recently shown that estimating the Shannon entropy H(p) of a discrete k-symbol distribution p requires Θ(k / log k) samples, a number that grows near-linearly in the support size. In many applications H(p) can be replaced by the more general Rényi entropy of order α, Hα(p). We determine the ..."
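For reference, the Rényi entropy of order alpha is H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), recovering Shannon entropy as alpha -> 1. A small Python sketch of the quantity and of the naive plug-in estimate whose sample cost the paper analyzes (names are illustrative):

```python
import numpy as np

def renyi_entropy(p, order):
    # H_alpha(p) = log(sum_i p_i ** alpha) / (1 - alpha), in nats;
    # the alpha -> 1 limit recovers Shannon entropy, handled separately.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(order, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** order)) / (1.0 - order)

def plug_in_renyi(samples, order):
    # Naive plug-in estimate: apply the formula to empirical frequencies.
    # The paper's subject is how many samples this estimation problem
    # requires as a function of support size k and order alpha.
    _, counts = np.unique(samples, return_counts=True)
    return renyi_entropy(counts / counts.sum(), order)

print(plug_in_renyi([0, 1, 1, 2, 0, 0, 3], order=2))
```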
Convergence properties of functional estimates for discrete distributions
- Random Struct. Algorithms
, 2001
"... ABSTRACT: Suppose P is an arbitrary discrete distribution on a countable alphabet . Given an i.i.d. sample X1 Xn drawn from P, we consider the problem of estimat-ing the entropy HP or some other functional F = FP of the unknown distribution P. We show that, for additive functionals satisfy ..."
Cited by 59 (3 self)
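The estimators in question are plug-in (empirical) estimates of additive functionals F(P) = sum_x f(p_x); entropy is the case f(p) = -p log p. A minimal sketch of the estimator itself (the paper's contribution is the convergence analysis, which is not reproduced here):

```python
import numpy as np

def plug_in_additive(samples, f):
    # Plug-in estimate of an additive functional F(P) = sum_x f(p_x):
    # evaluate f at the empirical frequencies of the sample. The paper
    # analyzes when such estimates converge for countable alphabets.
    _, counts = np.unique(samples, return_counts=True)
    p_hat = counts / counts.sum()
    return np.sum(f(p_hat))

# Entropy is the additive functional with f(p) = -p * log(p).
print(plug_in_additive([0, 1, 1, 2, 0, 0, 3], lambda p: -p * np.log(p)))
```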
Consistency Theorems for Discrete Bayesian Learning
"... Bayes ’ rule specifies how to obtain a posterior from a class of hypotheses endowed with a prior and the observed data. There are three fundamental ways to use this posterior for predicting the future: marginalization (integration over the hypotheses w.r.t. the posterior), MAP (taking the a posterio ..."
Abstract
- Add to MetaCart
posteriori most probable hypothesis), and stochastic model selection (selecting a hypothesis at random according to the posterior distribution). If the hypothesis class is countable and contains the data generating distribution (this is termed the “realizable case”), strong consistency theorems are known
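For a finite hypothesis class the three prediction rules are easy to state side by side. A minimal Python sketch over candidate discrete distributions (all names are illustrative):

```python
import numpy as np

def three_predictors(hyps, prior, data, rng):
    # `hyps` is an (M, K) array: M candidate distributions over K symbols;
    # `data` is a sequence of observed symbol indices. Compute the posterior
    # and the three predictive rules named in the abstract.
    hyps = np.asarray(hyps, dtype=float)
    log_post = np.log(prior) + np.sum(np.log(hyps[:, data]), axis=1)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    marginal = post @ hyps                           # marginalization (mixture)
    map_pred = hyps[np.argmax(post)]                 # MAP hypothesis
    sampled = hyps[rng.choice(len(hyps), p=post)]    # stochastic model selection
    return marginal, map_pred, sampled

rng = np.random.default_rng(0)
hyps = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
print(three_predictors(hyps, prior=[1/3, 1/3, 1/3], data=[0, 0, 1, 0], rng=rng))
```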
A Bayesian framework for reinforcement learning
- In Proceedings of the Seventeenth International Conference on Machine Learning
, 2000
"... The reinforcement learning problem can be decomposed into two parallel types of inference: (i) estimating the parameters of a model for the underlying process; (ii) determining behavior which maximizes return under the estimated model. Following Dearden, Friedman and Andre (1999), it is proposed tha ..."
Abstract
-
Cited by 109 (1 self)
- Add to MetaCart
that the learning process estimates online the full posterior distribution over models. To determine behavior, a hypothesis is sampled from this distribution and the greedy policy with respect to the hypothesis is obtained by dynamic programming. By using a different hypothesis for each trial appropriate
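The sample-then-optimize scheme described here is in the spirit of what is now commonly called posterior (Thompson) sampling. A minimal tabular sketch, assuming known rewards and independent Dirichlet(1 + counts) posteriors over transitions; these assumptions and all names are illustrative rather than the paper's exact construction.

```python
import numpy as np

def sample_greedy_policy(trans_counts, rewards, gamma=0.95, iters=500, rng=None):
    # One round of posterior sampling for a tabular MDP with known rewards:
    # draw a transition model from independent Dirichlet(1 + counts)
    # posteriors, then obtain the greedy policy for the sampled model by
    # value iteration (the dynamic-programming step).
    rng = rng or np.random.default_rng()
    S, A, _ = trans_counts.shape
    P = np.apply_along_axis(rng.dirichlet, 2, 1.0 + trans_counts)  # P(s'|s,a)
    V = np.zeros(S)
    for _ in range(iters):                       # value iteration on the sample
        Q = rewards + gamma * (P @ V)            # Q(s, a) under the sampled model
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                      # greedy action per state

rng = np.random.default_rng(0)
counts = rng.integers(0, 5, size=(3, 2, 3))      # visit counts: 3 states, 2 actions
R = rng.random((3, 2))
print(sample_greedy_policy(counts, R, rng=rng))
```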
Parameter estimation for text analysis
, 2004
"... Abstract. Presents parameter estimation methods common with discrete probability distributions, which is of particular interest in text modeling. Starting with maximum likelihood, a posteriori and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. ..."
Cited by 119 (0 self)
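As a taste of the estimators the tutorial contrasts, the Dirichlet-multinomial conjugate pair gives all three point estimates in closed form. A brief sketch (the value of alpha and all names are illustrative):

```python
import numpy as np

def multinomial_estimates(counts, alpha=2.0):
    # Three point estimates for a multinomial with a conjugate symmetric
    # Dirichlet(alpha) prior (alpha >= 1 assumed so the MAP formula is valid):
    #   ML:                   n_i / N
    #   MAP:                  (n_i + alpha - 1) / (N + K * (alpha - 1))
    #   Bayes posterior mean: (n_i + alpha) / (N + K * alpha)
    n = np.asarray(counts, dtype=float)
    N, K = n.sum(), len(n)
    ml = n / N
    map_ = (n + alpha - 1.0) / (N + K * (alpha - 1.0))
    mean = (n + alpha) / (N + K * alpha)
    return ml, map_, mean

print(multinomial_estimates([3, 0, 1], alpha=2.0))
```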
Efficient Bayesian Parameter Estimation in Large Discrete Domains
- Advances in Neural Information Processing Systems
, 1999
"... In this paper we examine the problem of estimating the parameters of a multinomial distribution over a large number of discrete outcomes, most of which do not appear in the training data. We analyze this problem from a Bayesian perspective and develop a hierarchical prior that incorporates the assum ..."
Cited by 38 (1 self)
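A toy sketch of why this setting is hard: with naive additive smoothing over a huge fixed domain, almost all predictive mass leaks to unseen outcomes as the domain grows, which is the failure mode the paper's hierarchical prior is designed to address. This stand-in does not reproduce that prior; all names are illustrative.

```python
import numpy as np

def predictive_with_unseen(counts, total_outcomes, alpha=1.0):
    # Plain additive smoothing over a domain of `total_outcomes` symbols,
    # only len(counts) of which were observed. Returns the predictive
    # probabilities of the observed symbols and the total mass assigned
    # to the unobserved remainder.
    n = np.asarray(counts, dtype=float)
    denom = n.sum() + alpha * total_outcomes
    p_seen = (n + alpha) / denom                  # observed symbols
    mass_unseen = (alpha / denom) * (total_outcomes - len(n))
    return p_seen, mass_unseen

p_seen, unseen = predictive_with_unseen([50, 30, 20], total_outcomes=10**6)
print(unseen)   # ~0.9999: nearly all predictive mass goes to unseen outcomes
```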