Results 1 - 10 of 59

Bayesian estimation of discrete entropy with mixtures of stick-breaking priors

by Evan Archer, Il Memming Park, Jonathan W. Pillow - In Advances in Neural Information Processing Systems (NIPS) , 2012
"... Bayesian estimation of discrete entropy with mixtures of stick-breaking priors ..."
Abstract - Add to MetaCart

Bayesian entropy estimation for countable discrete distributions

by Evan Archer, Il Memming Park, Jonathan W. Pillow - CoRR , 2013
"... We consider the problem of estimating Shannon’s entropy H from discrete data, in cases where the number of possible symbols is unknown or even countably infinite. The Pitman-Yor process, a generalization of Dirichlet process, provides a tractable prior distribution over the space of countably infini ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
infinite discrete distributions, and has found major applications in Bayesian non-parametric statistics and machine learning. Here we show that it provides a natural family of priors for Bayesian entropy estimation, due to the fact that moments of the induced posterior distribution over H can be computed
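
As background for this entry: a minimal Python sketch, not taken from the paper, that draws the weights of a discrete distribution from a Pitman-Yor(d, alpha) prior via its stick-breaking construction and Monte-Carlos the induced prior distribution over the entropy H. The truncation level and the values d = 0.3, alpha = 10.0 are illustrative assumptions.

    import numpy as np

    def pitman_yor_weights(d, alpha, n_sticks=2000, rng=None):
        # Truncated stick-breaking construction of Pitman-Yor(d, alpha):
        # V_k ~ Beta(1 - d, alpha + k*d), w_k = V_k * prod_{j<k} (1 - V_j).
        if rng is None:
            rng = np.random.default_rng()
        k = np.arange(1, n_sticks + 1)
        v = rng.beta(1.0 - d, alpha + k * d)
        return v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

    def entropy(w):
        w = w[w > 0]
        return -np.sum(w * np.log(w))  # Shannon entropy in nats

    rng = np.random.default_rng(0)
    # Each draw from the prior is a distribution; its entropy is one sample
    # from the induced prior over H.
    H = [entropy(pitman_yor_weights(0.3, 10.0, rng=rng)) for _ in range(200)]
    print(np.mean(H), np.std(H))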

Bayesian entropy estimation for binary spike train data using parametric prior knowledge

by Evan Archer, Il Memming Park, Jonathan W. Pillow - In Advances in Neural Information Processing Systems (NIPS) , 2013
"... Shannon’s entropy is a basic quantity in information theory, and a fundamental building block for the analysis of neural codes. Estimating the entropy of a dis-crete distribution from samples is an important and difficult problem that has re-ceived considerable attention in statistics and theoretica ..."
Abstract - Cited by 2 (2 self) - Add to MetaCart
allocation of prior probability mass in cases where spikes are sparse. Here we develop Bayesian estimators for the entropy of binary spike trains using priors designed to flexibly exploit the statistical structure of simultaneously-recorded spike responses. We define two prior distributions over spike words
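
For context on "spike words": the sketch below is illustrative only (synthetic data, not the paper's Bayesian estimator). It codes each time bin of a binary spike raster as a word and computes the naive plug-in entropy of the word distribution, the quantity the paper's priors are designed to estimate better in the sparse, undersampled regime.

    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative data: 5000 time bins x 10 neurons, sparse Bernoulli firing.
    spikes = (rng.random((5000, 10)) < 0.05).astype(np.int64)

    # Encode each binary spike word as an integer and count occurrences.
    codes = spikes @ (1 << np.arange(10))
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()

    # Naive plug-in estimate of the word entropy, in bits; it is biased
    # downward when the 2^10 possible words are undersampled.
    print(-np.sum(p * np.log2(p)))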

Estimating Priors In Maximum Entropy Image Processing

by A. Mohammad-Djafari, G. Demoment - in Proceedings of IEEE ICASSP , 1990
"... this paper we first propose a brief description of the Maximum a postertori (MAP) Bayesian approach with Maximum Entropy (ME} priors to solve the linear system of equations which is obtained after the discretization of the integral equations which arises in various tomographlc image restoration and ..."
Abstract - Cited by 6 (6 self) - Add to MetaCart
this paper we first propose a brief description of the Maximum a postertori (MAP) Bayesian approach with Maximum Entropy (ME} priors to solve the linear system of equations which is obtained after the discretization of the integral equations which arises in various tomographlc image restoration
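
As a rough illustration of the approach this entry describes (the operator sizes, noise level, and regularization weight below are assumptions, not values from the paper): a MAP estimate for y = Ax + noise that combines a Gaussian data misfit with a maximum-entropy prior on the nonnegative image x.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    A = rng.random((20, 30))                 # discretized forward operator
    x_true = rng.dirichlet(np.ones(30))      # nonnegative "image", sums to 1
    y = A @ x_true + 0.01 * rng.normal(size=20)
    lam = 1e-2                               # prior weight (assumed)

    def neg_log_posterior(x):
        x = np.clip(x, 1e-12, None)
        misfit = 0.5 * np.sum((y - A @ x) ** 2)   # Gaussian likelihood term
        neg_entropy = np.sum(x * np.log(x))       # -H(x): the ME prior term
        return misfit + lam * neg_entropy

    # MAP estimate: minimize misfit minus lam * entropy, subject to x >= 0.
    res = minimize(neg_log_posterior, np.full(30, 1.0 / 30),
                   bounds=[(1e-12, None)] * 30)
    print(res.fun, float(res.x.sum()))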

BAYESIAN ANALYSIS OF BURR TYPE XI DISTRIBUTION UNDER SINGLE AND MIXTURE OF PRIORS

by Navid Feroze, Muhammad Aslam
"... In this paper, the Bayes estimation of the parameter of Burr type XI distribution has been considered. The posterior analysis is carried out under the assumption of eight priors (informative, non-informative, single and mixture of priors). The entropy and precautionary loss functions have been used ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart

Collapsed variational Dirichlet process mixture models

by Kenichi Kurihara - Twentieth International Joint Conference on Artificial Intelligence (IJCAI07) , 2007
"... Nonparametric Bayesian mixture models, in particular Dirichlet process (DP) mixture models, have shown great promise for density estimation and data clustering. Given the size of today’s datasets, computational efficiency becomes an essential ingredient in the applicability of these techniques to re ..."
Abstract - Cited by 49 (1 self) - Add to MetaCart
approximation where mixture weights are marginalized out. For both VB approximations we consider two different ways to approximate the DP, by truncating the stick-breaking construction, and by using a finite mixture model with a symmetric Dirichlet prior. 1
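
The last sentence of this snippet mentions two finite approximations to the Dirichlet process. The sketch below (with an illustrative concentration alpha and size K) draws mixture weights under both: a truncated stick-breaking construction and a finite symmetric Dirichlet prior.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, K = 2.0, 50  # DP concentration and truncation/mixture size (illustrative)

    # (a) Truncated stick-breaking: V_k ~ Beta(1, alpha), with the last stick
    # set to 1 so the K weights sum to one.
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0
    w_stick = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

    # (b) Finite symmetric Dirichlet: w ~ Dirichlet(alpha/K, ..., alpha/K),
    # which also converges to DP(alpha) as K grows.
    w_dir = rng.dirichlet(np.full(K, alpha / K))

    print(w_stick.sum(), w_dir.sum())  # both sum to 1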

L1-Consistency of Dirichlet mixtures in multivariate Bayesian density estimation (submitted)

by Yuefeng Wu, Subhashis Ghosal , 2009
"... Density estimation, especially multivariate density estimation, is a fundamental problem in nonparametric inference. Dirichlet mixture priors are often used in practice for such problem. However, asymptotic properties of such priors have only been studied in the univariate case. We extend L1-consist ..."
Abstract - Cited by 11 (1 self) - Add to MetaCart

Entropy and Inference, Revisited

by Ilya Nemenman, Fariel Shafee, William Bialek , 2002
"... We study properties of popular near--uniform (Dirichlet) priors for learning undersampled probability distributions on discrete nonmetric spaces and show that they lead to disastrous results. However, an Occam--style phase space argument expands the priors into their infinite mixture and resolves mo ..."
Abstract - Cited by 29 (1 self) - Add to MetaCart
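
To see the problem this entry describes: under a symmetric Dirichlet(beta) prior the posterior mean of H has a closed form, and in the undersampled regime it is dictated almost entirely by the choice of beta. The sketch below uses illustrative counts and beta values; it is not the paper's infinite-mixture estimator.

    import numpy as np
    from scipy.special import digamma

    def dirichlet_mean_entropy(counts, beta):
        # Posterior mean of H (nats) under a symmetric Dirichlet(beta) prior:
        # E[H | n] = psi(A + 1) - sum_i (a_i / A) psi(a_i + 1), a_i = n_i + beta.
        a = np.asarray(counts, float) + beta
        A = a.sum()
        return digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))

    # Undersampled: 10 observations spread over 1000 possible symbols.
    counts = np.zeros(1000)
    counts[:10] = 1
    for beta in (0.001, 0.02, 1.0):
        # The estimate tracks beta, not the data.
        print(beta, dirichlet_mean_entropy(counts, beta))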

A Bayesian approach for blind separation of sparse sources

by C. Févotte, S. J. Godsill - IEEE Transactions on Speech and Audio Processing , 2005
"... We present a Bayesian approach for blind separation of linear instantaneous mixtures of sources having a sparse representation in a given basis. The distributions of the coefficients of the sources in the basis are modeled by a Student t distribution, which can be expressed as a Scale Mixture of Gau ..."
Abstract - Cited by 66 (10 self) - Add to MetaCart
using a Modified Discrete Cosine Transform basis and compared with a finite mixture of Gaussians prior approach. These results show the improved sound quality obtained with the Student t prior and the better robustness of the Markov chain Monte Carlo approach to mixing matrices close to singularity.
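
On the "Scale Mixture of Gaussians" point: a Student t variable with nu degrees of freedom can be drawn by first sampling a Gamma(nu/2, rate nu/2) precision and then a Gaussian with that precision. A quick sanity check (nu = 4 is an arbitrary choice, not a value from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    nu, n = 4.0, 100_000  # degrees of freedom (illustrative) and sample size

    # Precision lam ~ Gamma(nu/2, rate=nu/2); numpy parameterizes by scale = 1/rate.
    lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
    x_smog = rng.normal(0.0, 1.0 / np.sqrt(lam))  # x | lam ~ N(0, 1/lam)
    x_t = rng.standard_t(nu, size=n)              # direct Student t draws

    # The two samples agree in distribution (compare a few quantiles).
    q = [0.05, 0.25, 0.5, 0.75, 0.95]
    print(np.quantile(x_smog, q))
    print(np.quantile(x_t, q))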

Single camera pose estimation using Bayesian filtering and Kinect motion priors

by Michael Burke, Joan Lasenby , 2014
"... Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints ..."
Abstract - Add to MetaCart
, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave
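
For reference on the transition model mentioned here: a single discrete-time Ornstein-Uhlenbeck process is a mean-reverting random walk, sketched below with illustrative parameters (the paper's model mixes several such processes over discrete states).

    import numpy as np

    rng = np.random.default_rng(0)
    theta, mu, sigma, dt = 2.0, 0.0, 0.5, 0.01  # illustrative parameters

    # Euler-discretized Ornstein-Uhlenbeck transition:
    # x[t+1] = x[t] + theta * (mu - x[t]) * dt + sigma * sqrt(dt) * eps
    x = np.empty(1000)
    x[0] = 1.0
    for t in range(len(x) - 1):
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()
    print(x[-1], x.std())  # the state decays toward mu, then fluctuates around it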