Results 1–10 of 140,329
Bayesian entropy estimation for countable discrete distributions
CoRR, 2013
"... We consider the problem of estimating Shannon’s entropy H from discrete data, in cases where the number of possible symbols is unknown or even countably infinite. The Pitman-Yor process, a generalization of the Dirichlet process, provides a tractable prior distribution over the space of countably infinite ..."
Cited by 3 (0 self)
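The entry above concerns Bayesian estimation of Shannon entropy from counts. A minimal sketch of the posterior-mean idea, using a finite symmetric Dirichlet prior in place of the paper's Pitman-Yor prior (all parameter values here are illustrative):

```python
import numpy as np

def bayes_entropy_mc(counts, alpha=1.0, n_samples=2000, seed=0):
    """Posterior-mean Shannon entropy (in nats) under a symmetric
    Dirichlet(alpha) prior, estimated by Monte Carlo: draw
    distributions from the posterior and average their entropies."""
    rng = np.random.default_rng(seed)
    posterior = np.asarray(counts, dtype=float) + alpha  # conjugate update
    samples = rng.dirichlet(posterior, size=n_samples)
    entropies = -np.sum(samples * np.log(samples), axis=1)
    return float(entropies.mean())

print(bayes_entropy_mc([10, 5, 3, 1, 1]))
```

Averaging entropies over posterior draws mitigates the downward bias of the naive plug-in estimate on sparse counts.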
Stochastic Tracking of 3D Human Figures Using 2D Image Motion
In European Conference on Computer Vision, 2000
"... A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image gray-level differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D ..."
Cited by 383 (33 self)
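The tracker described above propagates a sampled posterior with particle filtering. A toy bootstrap particle filter for a 1-D random-walk state stands in for the paper's high-dimensional pose state (the noise levels and models here are assumptions, not the paper's):

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=1000, proc_std=0.2,
                              obs_std=1.0, seed=0):
    """Bootstrap particle filter: propagate particles through the motion
    model, weight by the observation likelihood, then resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 5.0, n_particles)  # diffuse initialization
    estimates = []
    for y in obs:
        # 1. propagate through the (random-walk) motion model
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # 2. weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        # 3. multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

true_x = 3.0
obs = true_x + np.random.default_rng(1).normal(0.0, 1.0, 50)
est = bootstrap_particle_filter(obs)
print(est[-1])
```

The posterior is represented purely by the particle set, mirroring the discrete-set-of-samples representation in the abstract.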
Bayesian and Quasi-Bayesian Estimators for Mutual Information from Discrete Data, 2013
"... entropy ..."
Bayesian entropy for . . .
"... We develop spatial statistical methodology to design large-scale air pollution monitoring networks with good predictive capabilities while minimizing the cost of monitoring. The underlying complexity of atmospheric processes and the urgent need to give credible assessments of environmental risk crea ..."
"... However, it is not uncommon to find spatial data that shows strong signs of nonstationary behavior. We build upon an existing approach that creates a nonstationary covariance by a mixture of a family of stationary processes, and propose a Bayesian method of estimating the associated parameters using ..."
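The mixture-of-stationary-processes construction mentioned above can be sketched for a 1-D domain. The squared-exponential kernels and Gaussian spatial weights below are illustrative choices, not the paper's:

```python
import numpy as np

def weight(x, center, width=1.0):
    """Gaussian spatial weight tying one stationary component to a region."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def mixture_cov(x1, x2, centers, length_scales, width=1.0):
    """Nonstationary covariance as a spatially weighted sum of stationary
    squared-exponential kernels; each term w(x1) k(x1, x2) w(x2) is
    positive semidefinite, so the sum is a valid covariance."""
    k = 0.0
    for c, ell in zip(centers, length_scales):
        k_stationary = np.exp(-0.5 * ((x1 - x2) / ell) ** 2)
        k += weight(x1, c, width) * weight(x2, c, width) * k_stationary
    return float(k)

centers, scales = [0.0, 2.0], [0.5, 1.5]  # short-range near 0, long-range near 2
print(mixture_cov(0.3, 1.2, centers, scales))
```

The effective length scale varies over the domain because each stationary component dominates only near its center.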
Bayesian inference, entropy, and the multinomial distribution, 2007
"... Instead of maximum-likelihood or MAP, Bayesian inference encourages the use of predictive densities and evidence scores. This is illustrated in the context of the multinomial distribution, where predictive estimates are often used but rarely described as Bayesian. By using an entropy approximation ..."
Cited by 22 (1 self)
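The predictive density for a multinomial under a symmetric Dirichlet prior, as alluded to above, has a closed form; a minimal sketch (alpha = 1 recovers Laplace's add-one smoothing):

```python
import numpy as np

def predictive_probs(counts, alpha=1.0):
    """Posterior predictive for the next symbol under a symmetric
    Dirichlet(alpha) prior on the multinomial parameters:
    p(next = k | data) = (n_k + alpha) / (N + K * alpha)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

print(predictive_probs([3, 0, 1]))  # → [4/7, 1/7, 2/7]
```

Unseen symbols receive nonzero predictive mass, which is exactly what distinguishes the predictive estimate from the maximum-likelihood one.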
Generalized entropies through Bayesian estimation, 1998
"... entropy H^(R)_q from observed data. The Bayesian estimator yields the smallest mean-squared deviation from the true parameter as compared with any other estimator. We compare the Bayesian entropy estimators with the frequency-count estimators of H^(T)_q and H^(R)_q. Numerical simulations reveal ..."
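The two generalized entropies compared above, Rényi H^(R)_q and Tsallis H^(T)_q, are straightforward to compute for a known distribution; a plain plug-in sketch (not the Bayesian estimators of the paper):

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy H^(R)_q in nats; reduces to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1.0:
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** q).sum()) / (1.0 - q))

def tsallis_entropy(p, q):
    """Tsallis entropy H^(T)_q; also recovers Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1.0:
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

uniform = [0.25] * 4
print(renyi_entropy(uniform, 2))  # → log(4) for the uniform distribution
```

Both families coincide with Shannon entropy in the q → 1 limit, which is why they are treated together as "generalized entropies".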
Consistency Theorems for Discrete Bayesian Learning
"... Bayes' rule specifies how to obtain a posterior from a class of hypotheses endowed with a prior and the observed data. There are three fundamental ways to use this posterior for predicting the future: marginalization (integration over the hypotheses w.r.t. the posterior), MAP (taking the a posteriori most probable hypothesis), and stochastic model selection (selecting a hypothesis at random according to the posterior distribution). If the hypothesis class is countable and contains the data generating distribution (this is termed the "realizable case"), strong consistency theorems are known ..."
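The three prediction rules named in the abstract can be contrasted directly; a toy sketch over a finite hypothesis class (the posterior and likelihood values are made up for illustration):

```python
import numpy as np

def predict(posterior, likelihoods, rule="marginalize", seed=0):
    """Probability of the next event under three uses of a posterior:
    marginalization, MAP, and stochastic model selection."""
    posterior = np.asarray(posterior, dtype=float)
    likelihoods = np.asarray(likelihoods, dtype=float)  # p(event | hypothesis)
    if rule == "marginalize":
        return float(posterior @ likelihoods)       # posterior-weighted average
    if rule == "map":
        return float(likelihoods[posterior.argmax()])  # most probable hypothesis
    if rule == "stochastic":
        rng = np.random.default_rng(seed)
        i = rng.choice(len(posterior), p=posterior)  # sample a hypothesis
        return float(likelihoods[i])
    raise ValueError(f"unknown rule: {rule}")

post = [0.5, 0.3, 0.2]
lik = [0.9, 0.1, 0.5]
print(predict(post, lik, "marginalize"))  # 0.5*0.9 + 0.3*0.1 + 0.2*0.5 = 0.58
```

Marginalization blends all hypotheses; MAP and stochastic selection each commit to a single one, which is where their consistency behavior diverges.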
Bayesian Invariant Measurements of Generalisation for Discrete Distributions, 1995
"... Neural network learning rules can be viewed as statistical estimators. They should be studied in a Bayesian framework even if they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are ..."
Cited by 3 (3 self)
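The divergence-based view of generalisation above is commonly instantiated with the KL divergence (the choice of KL among the information divergences is an assumption here, for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence D(p || q) in nats between discrete distributions;
    terms with p_k = 0 contribute zero by the usual convention."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# divergence of an estimated distribution q from the true distribution p
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))
```

D(p || q) is zero exactly when the estimate matches the truth, and grows as the estimate misallocates probability mass.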
The Countably Infinite Bayesian Gaussian Mixture Density Model, 1999
"... In a Bayesian mixture model, there is no need a priori to restrict the number of components to be finite. Infinite mixture models sidestep the problem of finding the "correct" number of components, and may be handled using a finite amount of computation. In this paper it is demonstrated how inference may be done in infinite mixture models using a Markov Chain whose implementation relies entirely on Gibbs sampling. An example is given of application to multivariate density estimation. ..."
Cited by 1 (0 self)
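Inference in such infinite mixtures typically rests on the Chinese restaurant process representation of the Dirichlet-process partition prior; a sketch of sampling component assignments from that prior (the concentration parameter alpha is illustrative):

```python
import numpy as np

def sample_crp(n, alpha=1.0, seed=0):
    """Draw component (table) assignments for n items from a Chinese
    restaurant process: item i joins an existing component with
    probability proportional to its size, or a new one with
    probability proportional to alpha."""
    rng = np.random.default_rng(seed)
    assignments = [0]
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(counts):
            counts.append(1)       # open a new component
        else:
            counts[k] += 1         # join an existing component
        assignments.append(k)
    return assignments, counts

assignments, counts = sample_crp(100, alpha=2.0)
print(len(counts))  # number of components actually instantiated
```

Only finitely many components are instantiated for finite data, which is what makes inference with a finite amount of computation possible.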
entropy, 2008
"... www.mdpi.org/entropy/ Relaxed plasma equilibria and entropy-related plasma self-organization principles ..."