Results 1–10 of 10
Probabilistic Inference Using Markov Chain Monte Carlo Methods
1993
Abstract

Cited by 594 (21 self)
Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The "Metropolis algorithm" has been used to solve difficult problems in statistical physics for over forty years, and, in the last few years, the related method of "Gibbs sampling" has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the "hybrid Monte Carlo" method. In computer science, Markov chain sampling is the basis of the heuristic optimization technique of "simulated annealing", and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilistic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks.
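The Metropolis algorithm the abstract refers to can be sketched in a few lines. This is a generic illustration, not code from the review: the random-walk proposal, its scale, and the standard-normal target are all arbitrary choices made here for demonstration.

```python
import math
import random

def metropolis(log_p, x0, step=1.0, n_samples=10_000, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density log_p.

    Proposes symmetric Gaussian moves and accepts each move with probability
    min(1, p(x') / p(x)), so the chain's stationary distribution is p.
    """
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_new = log_p(x_new)
        if math.log(rng.random()) < lp_new - lp:  # Metropolis acceptance test
            x, lp = x_new, lp_new
        samples.append(x)                          # rejected moves repeat x
    return samples

# Example target: a standard normal, via its unnormalized log-density.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=2.0, n_samples=20_000)
mean = sum(draws) / len(draws)
```

Note that a rejected proposal still contributes a sample (the current state is repeated); dropping rejections would bias the chain away from the target.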
The interplay of Bayesian and frequentist analysis
Statist. Sci.
2004
Abstract

Cited by 30 (0 self)
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Statistical inference after model selection
Journal of Quantitative Criminology
2010
Abstract

Cited by 7 (6 self)
Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken followed by statistical tests and confidence intervals computed for a “final” model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures
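The inflation of error rates that this abstract describes is easy to demonstrate with a small simulation (illustrative only, not from the paper): when the predictor to be tested is itself chosen from the data, a nominal 5% test rejects far more often than 5% even though nothing is associated with anything.

```python
import math
import random

def selection_inflates_size(n=50, reps=2000, seed=1):
    """Monte Carlo check of test size when the tested predictor is chosen
    from the data. y, x1, x2 are mutually independent noise, so a valid 5%
    test of association should reject about 5% of the time."""
    rng = random.Random(seed)
    t_crit = 2.0106  # two-sided 5% critical value, t distribution, 48 df
    rejections = 0
    for _ in range(reps):
        y = [rng.gauss(0, 1) for _ in range(n)]
        xs = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(2)]

        def corr(x):
            mx, my = sum(x) / n, sum(y) / n
            sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sxx = sum((a - mx) ** 2 for a in x)
            syy = sum((b - my) ** 2 for b in y)
            return sxy / math.sqrt(sxx * syy)

        # "Model selection": keep whichever predictor looks more correlated.
        r = max((corr(x) for x in xs), key=abs)
        # Naive t-test on the selected predictor, ignoring the selection step.
        t = r * math.sqrt((n - 2) / (1 - r * r))
        if abs(t) > t_crit:
            rejections += 1
    return rejections / reps

rate = selection_inflates_size()
```

With two candidate predictors the true rejection rate is roughly 1 − 0.95² ≈ 9.75%, nearly double the nominal 5%; with more candidates the gap widens further.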
A Bayesian Approach To Colour Term Semantics
 LINGSCENE
2001
Abstract

Cited by 6 (0 self)
A Bayesian computational model is described which accounts for the acquisition of the meanings of basic colour terms by children learning their first language. Examples of colours named by particular colour terms are stored in a conceptual colour space, and Bayesian inference is used to determine the extent of each colour term's extension based upon these examples. The method can be extended to create a fuzzy-set-based denotation for each colour term by calculating the probability that each point in the colour space falls within that term's extension. The learned categories show the prototype structure characteristic of colour terms: a single best example of the category, marginal members, and intermediate colours that belong to the category to a greater or lesser extent. This approach has the advantage over previous approaches to colour term semantics of being flexible enough to account for the full range of colour term systems seen in the world's languages while also providing a precise and explicit account of how children may accomplish the task of learning colour terms. Further experiments reveal that learning succeeds even when as many as fifty percent of the colour term examples presented to the model are erroneous, demonstrating that the theory is robust and can account for the acquisition of colour terms in realistic as well as idealised situations.
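The fuzzy-membership idea in this abstract can be sketched generically. The paper's conceptual colour space and inference scheme are not reproduced here; this sketch substitutes a one-dimensional hue axis, a fixed-width Gaussian likelihood around each term's example mean, and equal priors over terms, all of which are simplifying assumptions made for illustration.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a normal distribution with mean mu and s.d. sigma."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def term_membership(point, examples_by_term, sigma=10.0):
    """Fuzzy membership of a colour point in each colour term.

    Each term's extension is summarized by the mean of its stored examples
    (a stand-in for the paper's richer colour-space representation), and
    membership is the posterior probability of each term given the point,
    assuming equal priors over terms.
    """
    log_liks = {}
    for term, examples in examples_by_term.items():
        mu = sum(examples) / len(examples)
        log_liks[term] = gaussian_logpdf(point, mu, sigma)
    # Normalize in log space for numerical stability.
    z = max(log_liks.values())
    weights = {t: math.exp(l - z) for t, l in log_liks.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

# Illustrative hue values (degrees) for stored examples of two terms.
examples = {"red": [0.0, 5.0, 10.0], "yellow": [55.0, 60.0, 65.0]}
m = term_membership(30.0, examples)  # an intermediate, orange-ish hue
```

An intermediate hue like 30° gets substantial but not total membership in the nearer category, reproducing the graded prototype structure the abstract describes: prototypical points score near 1, marginal points score in between.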
Scientific method, statistical method, and the speed of light. Working Paper 2000-02
2000
Abstract

Cited by 5 (0 self)
What is “statistical method”? Is it the same as “scientific method”? This paper answers the first question by specifying the elements and procedures common to all statistical investigations and organizing these into a single structure. This structure is illustrated by careful examination of the first scientific study of the speed of light carried out by A. A. Michelson, in 1879. Our answer to the second question is negative. To explain this, a history of the speed of light up to the time of Michelson’s study is presented. The larger history and the details of a single study allow us to place the method of statistics within the larger context of science.
Colour Terms, Syntax and Bayes: Modelling Acquisition and Evolution
2004
Abstract

Cited by 2 (0 self)
This thesis investigates language acquisition and evolution, using the methodologies of Bayesian inference and expression-induction modelling, making specific reference to colour term typology and syntactic acquisition. In order to test Berlin and Kay's (1969) hypothesis that the typological patterns observed in basic colour term systems are produced by a process of cultural evolution under the influence of universal aspects of human neurophysiology, an expression-induction model was created. Ten artificial people were simulated, each of which was a computational agent. These people could learn colour term denotations by generalizing from examples using Bayesian inference, and the resulting denotations had the prototype properties characteristic of basic colour terms.
The p-value, the Bayes/Neyman-Pearson Compromise and the Teaching of Statistical Inference in Introductory Business Statistics
Abstract
Traditionally the Neyman-Pearson approach to hypothesis testing has been presented in introductory business statistics courses. However, many students as well as researchers find the decisions reached by this approach, i.e., reject/fail-to-reject, inconsistent with their understanding of the scientific process, namely accumulating evidence in support of a hypothesis. The proposed framework provides an easily understood rationale for introducing the student to I. J. Good's Bayes/Neyman-Pearson compromise as represented by Good's standardized p-values. Standardized p-values are a useful and practical tool for the evidentialist interpretation of data within the context of Neyman-Pearson hypothesis testing, something
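One common formulation of Good's standardized p-value rescales an observed p-value to a reference sample size of 100 and caps the result at 1/2; the reference size and the cap are parts of Good's proposal as usually stated, but readers should check the paper itself for the exact convention it adopts.

```python
import math

def standardized_p(p, n):
    """Good's standardized p-value (one common formulation): rescale an
    observed p-value from sample size n to a reference sample size of 100,
    capping the result at 1/2 so it never overstates evidence for the null."""
    return min(0.5, p * math.sqrt(n / 100.0))

# A p-value of 0.04 obtained from n = 400 standardizes to
# min(0.5, 0.04 * sqrt(4)) = 0.08: the same p-value counts as weaker
# evidence when it comes from a larger sample.
```

The rescaling reflects the Bayesian observation that a fixed p-value constitutes less evidence against the null hypothesis as the sample size grows, which is the evidentialist point the abstract raises.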