Results 1 - 5 of 5
Structure Learning in Conditional Probability Models via an Entropic Prior and Parameter Extinction
1998
Cited by 78 (0 self)
We introduce an entropic prior for multinomial parameter estimation problems and solve for its maximum...
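The entropic prior referred to in this entry takes the form P(θ) ∝ e^{-H(θ)} = ∏_i θ_i^{θ_i} over multinomial parameters, so low-entropy (sparse) parameter vectors receive higher prior density, which is what drives parameters toward extinction under MAP estimation. A minimal NumPy sketch of that density, assuming this exponential-of-negative-entropy form (the `alpha` sharpness exponent is an illustrative generalization, not taken from the abstract):

```python
import numpy as np

def entropic_prior(theta, alpha=1.0):
    """Unnormalized entropic prior density P(theta) ∝ exp(-alpha * H(theta)),
    i.e. prod_i theta_i^(alpha * theta_i), for a multinomial parameter vector.
    `alpha` is an illustrative sharpness parameter (alpha=1 is the basic form)."""
    theta = np.asarray(theta, dtype=float)
    # Convention: 0 * log(0) = 0, so zero entries contribute a factor of 1.
    nz = theta[theta > 0]
    entropy = -np.sum(nz * np.log(nz))
    return np.exp(-alpha * entropy)

# The prior favors sparse (low-entropy) distributions:
sparse = entropic_prior([1.0, 0.0, 0.0])    # H = 0     -> density 1.0
uniform = entropic_prior([1/3, 1/3, 1/3])   # H = log 3 -> density 1/3
assert sparse > uniform
```

Under this prior the MAP objective trades data likelihood against entropy, so weakly supported parameters are pushed to exactly zero rather than merely shrunk.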
Entropic Priors for Discrete Probabilistic Networks and for Mixtures of Gaussian Models
in Bayesian Inference and Maximum Entropy Methods, 2002
Cited by 12 (0 self)
The ongoing, unprecedented exponential explosion of available computing power has radically transformed the methods of statistical inference. What used to be the stance of a small minority of statisticians, advocating the use of priors and strict adherence to Bayes' theorem, is now becoming the norm across disciplines. The evolutionary direction is now clear: the trend is toward more realistic, flexible, and complex likelihoods characterized by an ever increasing number of parameters. This gives the old question, "What should the prior be?", a new central importance in the modern Bayesian theory of inference. Entropic priors provide one answer to the problem of prior selection. The general definition of an entropic prior has existed since 1988 [1], but it was not until 1998 [2] that they were found to provide a new notion of complete ignorance. This paper reintroduces the family of entropic priors as minimizers of mutual information between the data and the parameters, as in [2], but with a small change and a correction. The general formalism is then applied to two large classes of models: discrete probabilistic networks and univariate finite mixtures of Gaussians. It is also shown how to perform inference by efficiently sampling the corresponding posterior distributions.
Are We Cruising a Hypothesis Space
in the Proceedings of Maximum Entropy and Bayesian Methods, 1998
The Geometry and Dynamics of Data-Driven Modeling
The fundamental problem in science is that of using measurements and observations to draw conclusions that at first appear to be hidden from the observer. The task of turning raw measurements into useful conclusions is exactly the task underlying data-driven modeling. The full task involves the ability to turn various assumptions into useful constraints which allow the data to be converted into conclusions. I plan to use geometric methods and dynamical systems concepts to build pieces of a general data-driven modeling program and use these to solve scientific problems of current interest. [Note: throughout this document, 'geometric methods' is used in a wide sense to cover the methods of differential geometry, differential topology, topology (algebraic and point set), etc.]
Contents: 1 Preamble; 2 The Data-Driven Modeling Approach: Sources, Prior Information, and Viewpoint; 3 The Role of Assumptions and Constraints; 3.1 Assumptions and Constraints ...
unknown title
Structure learning in conditional probability models via an entropic prior and parameter extinction