Results 1 – 7 of 7
Bayes Factors, 1995
Abstract

Cited by 990 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
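The definition above can be made concrete with a toy calculation. The following sketch is not from the paper: the binomial example, the uniform alternative, and the function names are illustrative assumptions. It computes the Bayes factor B01 for a point null against a uniform alternative, and the posterior probability of the null when its prior probability is one-half (in which case the posterior odds of the null equal B01 itself).

```python
from math import comb

def bayes_factor_binomial(x, n):
    """B01 for H0: theta = 0.5 vs. H1: theta ~ Uniform(0, 1),
    given x successes in n Bernoulli trials (illustrative example)."""
    # Marginal likelihood of the data under H0: Binomial(n, 1/2) at x.
    m0 = comb(n, x) * 0.5 ** n
    # Under H1, integrating the binomial likelihood over Uniform(0, 1)
    # gives exactly 1/(n + 1) for every x (rule of succession).
    m1 = 1.0 / (n + 1)
    return m0 / m1

def posterior_prob_null(b01):
    """Posterior probability of H0 when its prior probability is one-half,
    so that the posterior odds of H0 equal the Bayes factor b01."""
    return b01 / (1.0 + b01)
```

For example, 5 successes in 10 trials give B01 ≈ 2.7, mild evidence for the null, while 10 successes in 10 trials give B01 < 1, evidence against it.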
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models, 1993
Abstract

Cited by 98 (28 self)
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the ...
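The idea of an approximation that needs only the output of standard GLM software can be sketched with a BIC-style formula: the difference in deviances between two nested models, penalized by log(n) per extra parameter. This is a sketch of the general idea, not the paper's exact approximation; the function name and signature are assumptions.

```python
from math import log

def approx_twice_log_bf(dev0, dev1, extra_params, n):
    """BIC-style approximation to 2*log(B10) for nested GLMs, using only
    quantities a standard GLM program reports: the deviance of the smaller
    model (dev0), the deviance of the larger model (dev1), the number of
    extra parameters in the larger model, and the sample size n."""
    return (dev0 - dev1) - extra_params * log(n)
```

Positive values favor the larger model; the log(n) term penalizes complexity, so a deviance drop that a chi-squared test would call significant can still fail to earn a Bayes factor above 1 in large samples.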
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "non-standard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
Change Point and Change Curve Modeling in Stochastic Processes and Spatial Statistics
 Journal of Applied Statistical Science, 1993
Abstract

Cited by 9 (4 self)
In simple one-dimensional stochastic processes it is feasible to model change points explicitly and to make inference about them. I have found that the Bayesian approach produces results more easily than non-Bayesian approaches. It has the advantages of relative technical simplicity, theoretical optimality, and of allowing a formal comparison between abrupt and gradual descriptions of change. When it can be assumed that there is at most one change point, this is especially simple. This is illustrated in the context of Poisson point processes. A simple approximation is introduced that is applicable to a wide range of problems in which the change point model can be written as a regression or generalized linear model. When the number of change points is unknown, the Bayesian approach proceeds most naturally by state-space modeling or "hidden Markov chains". The general ideas of this are briefly reviewed, particularly the multiprocess Kalman filter. I then describe the application of these...
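For the at-most-one-change-point case mentioned above, the posterior over the change point has a closed form when the Poisson rates get conjugate Gamma priors, so it can be computed by simple enumeration. A minimal sketch, not taken from the paper: the discretized counts, the Gamma(1, 1) priors, and the uniform prior on the change point are all assumptions for illustration.

```python
from math import lgamma, log, exp

def log_gamma_marginal(s, m, a=1.0, b=1.0):
    """log of int lambda^s * exp(-m*lambda) * Gamma(lambda; a, b) d lambda,
    dropping the product of 1/y_t! terms (constant in the change point)."""
    return lgamma(a + s) - lgamma(a) + a * log(b) - (a + s) * log(b + m)

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over the change point tau (the rate shifts after the
    tau-th count), with a uniform prior on tau in 1..n-1 and independent
    Gamma(a, b) priors on the before/after Poisson rates."""
    n = len(counts)
    logps = []
    for tau in range(1, n):
        s1, s2 = sum(counts[:tau]), sum(counts[tau:])
        logps.append(log_gamma_marginal(s1, tau, a, b) +
                     log_gamma_marginal(s2, n - tau, a, b))
    # Normalize in a numerically stable way (subtract the max first).
    mx = max(logps)
    ws = [exp(lp - mx) for lp in logps]
    z = sum(ws)
    return [w / z for w in ws]
```

With counts such as [0, 1, 0, 1, 5, 6, 5, 6], the posterior concentrates on tau = 4, where the rate visibly shifts.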
Bayesian Inference for Non-Markovian Point Processes, 2010
Abstract

Cited by 1 (0 self)
Statistical inference for point processes originates, as pointed out by Daley and Vere-Jones (2005), in two sources: life tables, and counting phenomena. Among early sources of inferential work are Graunt, Halley and Newton in the 18th century on the life table side, and Newcomb, Abbé and Seidel in the second half of the 19th century on the counting side (for ...
ARE OZONE EXCEEDANCE RATES DECREASING? COMMENT ON "EXTREME VALUE ANALYSIS OF ENVIRONMENTAL TIME-SERIES: AN APPLICATION TO TREND DETECTION IN GROUND-LEVEL OZONE" BY R.L. SMITH, 1989
unknown title
Abstract
The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967). However, as currently implemented, it does not allow the specification of which features (orientation, size and shape) are to be common to all clusters and which may differ between clusters. Also, it is restricted to Gaussian distributions and it does not allow for noise. We propose ways of overcoming these limitations. A reparameterization of the covariance matrix allows us to specify that some features, but not all, be the same for all clusters. A practical framework for non-Gaussian clustering is outlined, and a means of incorporating noise in the form of a Poisson process is described. An approximate Bayesian method for choosing the number of clusters is given. The performance of the proposed methods is studied by simulation, with encouraging results. The methods are applied to the analysis of a data set arising in the study of diabetes, and the results seem better than those of previous analyses.
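The covariance reparameterization described above decomposes each cluster covariance as volume times orientation times shape, so that each of the three features can be constrained equal across clusters or left free. A 2-D sketch of the decomposition (the function name and argument conventions are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def make_covariance(volume, shape, angle):
    """Build a 2-D covariance Sigma = volume * D @ A @ D.T, where A is
    diagonal with det(A) = 1 (shape) and D is the rotation by `angle`
    (orientation).  Sharing any of the three arguments across clusters
    yields the shared-feature models described above."""
    a = np.asarray(shape, dtype=float)
    a = a / np.sqrt(np.prod(a))              # rescale so det(A) = 1
    d = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return volume * d @ np.diag(a) @ d.T
```

Because det(A) = 1 and D is a rotation, the determinant of the resulting covariance depends only on the volume parameter, which is what makes the size/shape/orientation split well defined.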