Results 1–7 of 7
Bayes Factors
, 1995
Abstract

Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
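As a concrete illustration of the definition above (not taken from the paper), the Bayes factor for a point null has a closed form for binomial data when the alternative puts a uniform prior on the success probability; the data values here are hypothetical:

```python
from math import comb

def bayes_factor_binomial(k, n):
    """BF01 for H0: p = 1/2 versus H1: p ~ Uniform(0, 1),
    given k successes in n Bernoulli trials.
    Under H0 the marginal likelihood is C(n,k) * (1/2)^n;
    under H1 it integrates exactly to 1/(n+1)."""
    m0 = comb(n, k) * 0.5 ** n   # marginal likelihood under H0
    m1 = 1.0 / (n + 1)           # marginal likelihood under H1
    return m0 / m1

# With prior probability one-half on the null, the posterior odds
# of H0 equal the Bayes factor itself.
bf = bayes_factor_binomial(60, 100)  # ~1.1: essentially no evidence either way
```

A value near 1 means the data barely discriminate between the hypotheses, even though a frequentist test of 60/100 would come close to rejecting at the 5% level.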
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models
, 1993
Abstract

Cited by 96 (28 self)
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the …
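A widely used asymptotic shortcut in this spirit is the Schwarz (BIC) approximation, which needs only the maximized log-likelihoods, parameter counts, and sample size that any GLM-fitting program reports. This sketch is illustrative and is not the paper's exact Laplace formula; the fit values are hypothetical:

```python
from math import log

def schwarz_log_bayes_factor(loglik1, loglik0, d1, d0, n):
    """Approximate log Bayes factor B10 for model 1 over model 0:
    log B10 ~ (l1 - l0) - ((d1 - d0) / 2) * log(n),
    where l_i is the maximized log-likelihood, d_i the number of
    free parameters, and n the sample size."""
    return (loglik1 - loglik0) - 0.5 * (d1 - d0) * log(n)

# Hypothetical GLM fits: one extra covariate raises loglik by 5 on n = 100.
log_b10 = schwarz_log_bayes_factor(-120.0, -125.0, 3, 2, 100)  # ~2.70
```

Exponentiating gives B10 of roughly 15, i.e. moderate evidence for the larger model despite the dimensionality penalty.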
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract

Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance …
Software Reliability (Update)
, 1996
Abstract
This article concentrates on developments since 1985. Various techniques are employed to detect and to correct the bugs in software, the quality of which is therefore expected to improve. We continue testing until the software reliability achieved is at a level that meets specifications, or subject to time or cost constraints. Various stochastic models have been used to monitor the software quality changes due to debugging. The objectives of using these models include predicting the mean time between failures, estimating the number of residual faults, and assessing the software reliability. We roughly classify these models into two types: dynamic models and static models. Dynamic models, also called software reliability growth models, follow the changes of the software throughout the entire testing period. Most of the models employed in software reliability are dynamic models. Software reliability is defined to be the probability of failure-free operation of a computer program in a specified environment for a specified period of time. Most models assume that when a bug is detected in the software, it is immediately fixed and the time for fixing it is negligible. Perhaps it is believed that the down time provides little information about the software reliability. Dynamic models can be further divided into time domain models and counting process models. The former are probability models for the sequence of interfailure times that are caused by faults. Many of the time domain models can be related to their dual, the counting process that counts the number of failures found in testing. Nonhomogeneous Poisson point processes (NHPP) have been used extensively to model the counting process.
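To make the counting-process view concrete, failure times under an NHPP can be simulated by Lewis-Shedler thinning. The Goel-Okumoto intensity used below is one standard choice of reliability growth model, and the parameter values are illustrative, not from the article:

```python
import random
from math import exp

def simulate_goel_okumoto(a, b, t_end, seed=1):
    """Simulate NHPP failure times with intensity lam(t) = a*b*exp(-b*t)
    (mean value function m(t) = a*(1 - exp(-b*t))) by thinning:
    draw candidate events from a homogeneous process at rate
    lam_max = a*b, keep each with probability lam(t)/lam_max."""
    rng = random.Random(seed)
    lam_max = a * b
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            return times
        if rng.random() <= exp(-b * t):   # acceptance prob. lam(t)/lam_max
            times.append(t)

failures = simulate_goel_okumoto(a=100, b=0.05, t_end=50.0)
```

Here a is the expected total number of faults and b the per-fault detection rate, so the expected count by t_end = 50 is about a*(1 - e^(-2.5)), roughly 92.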
Bayesian Inference for S-Shaped Software Reliability Growth Models
, 1996
Abstract
This paper presents a Bayesian methodology for the Ohba-Yamada model. The Ohba-Yamada model assumes that the number of errors M(t) discovered in software testing follows a nonhomogeneous Poisson process (NHPP) with the mean value function m(t) being a multiple of a gamma distribution function with shape parameter equal to 2. From now on, we will call this process NHPP-gamma-2. The mean value function of the NHPP-gamma-2 is S-shaped, reflecting that it is usually difficult to find faults in the software at the beginning of testing. After a learning period, the faults are found rapidly, and then more and more slowly as debugging proceeds. In addition to the NHPP-gamma-2, we also consider a more general class of NHPPs with S-shaped mean value functions with an arbitrary shape parameter k.
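Written out for shape parameter 2, the mean value function is m(t) = a(1 - (1 + bt)e^(-bt)), i.e. a times the CDF of a gamma(shape 2, rate b) distribution. A short sketch (the parameter values in the comments are illustrative, not from the paper):

```python
from math import exp

def ohba_yamada_mean(t, a, b):
    """Expected number of faults found by time t under the
    NHPP-gamma-2 (Ohba-Yamada delayed S-shaped) model:
    m(t) = a * (1 - (1 + b*t) * exp(-b*t)),
    where a is the eventual fault count and b the detection rate."""
    return a * (1.0 - (1.0 + b * t) * exp(-b * t))

# The intensity m'(t) = a*b*b*t*exp(-b*t) peaks at t = 1/b, so m(t)
# is convex before 1/b (slow early detection, the learning period)
# and concave after it: the S shape described in the abstract.
```

Replacing the exponent 2 by an arbitrary k gives the more general S-shaped family the abstract mentions, at the cost of an incomplete-gamma-function mean value function.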
unknown title
Abstract
The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967). However, as currently implemented, it does not allow the specification of which features (orientation, size and shape) are to be common to all clusters and which may differ between clusters. Also, it is restricted to Gaussian distributions and it does not allow for noise. We propose ways of overcoming these limitations. A reparameterization of the covariance matrix allows us to specify that some features, but not all, be the same for all clusters. A practical framework for non-Gaussian clustering is outlined, and a means of incorporating noise in the form of a Poisson process is described. An approximate Bayesian method for choosing the number of clusters is given. The performance of the proposed methods is studied by simulation, with encouraging results. The methods are applied to the analysis of a data set arising in the study of diabetes, and the results seem better than those of previous analyses.
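The covariance reparameterization referred to here factors each cluster's covariance as Sigma = lambda * D * A * D^T, separating volume (lambda), shape (A, diagonal with determinant 1) and orientation (D). A minimal 2x2 sketch of that factorization, using the standard closed-form eigendecomposition rather than any code from the paper:

```python
from math import atan2, sqrt

def decompose_cov_2x2(s11, s12, s22):
    """Factor a 2x2 covariance [[s11, s12], [s12, s22]] into
    volume * shape * orientation: returns (lam, (a1, a2), theta) with
    lam = |Sigma|^(1/2), diagonal shape (a1, a2) of determinant 1,
    and theta the angle of the principal axis."""
    tr = s11 + s22
    det = s11 * s22 - s12 * s12
    disc = sqrt(max(tr * tr / 4.0 - det, 0.0))
    e1, e2 = tr / 2.0 + disc, tr / 2.0 - disc   # eigenvalues, e1 >= e2
    lam = sqrt(e1 * e2)                          # volume factor
    shape = (e1 / lam, e2 / lam)                 # det(shape) = 1
    theta = atan2(e1 - s11, s12)                 # principal-axis angle
    return lam, shape, theta

lam, shape, theta = decompose_cov_2x2(2.5, 1.5, 2.5)  # elongated, tilted 45 deg
```

Constraining lam, the shape pair, or theta to be equal across clusters is what lets some features, but not all, be shared.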
An EM-Based Scheme for Record Value Statistics Models in Software Reliability Estimation
Abstract
This paper considers an EM (expectation-maximization) based scheme for record value statistics (RVS) models in software reliability estimation. The RVS model provides a generalized modeling framework that unifies several existing software reliability models described as nonhomogeneous Poisson processes (NHPPs). The proposed EM algorithm gives a numerically stable procedure for computing the maximum likelihood estimates of RVS models. In particular, this paper focuses on an RVS model based on a mixture of exponential distributions. As an illustrative example, we also derive a concrete EM algorithm for the well-known Musa-Okumoto logarithmic Poisson execution time model by applying our result, and discuss the effectiveness of the EM-based scheme for RVS models with a simple numerical example.
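For reference, the Musa-Okumoto model mentioned here has intensity lam(t) = lam0 / (lam0*theta*t + 1) and mean value function m(t) = (1/theta) * ln(lam0*theta*t + 1), so the exact NHPP log-likelihood that any estimation scheme, EM-based or otherwise, ultimately targets is short enough to state directly. This is an illustrative sketch, not the paper's EM algorithm:

```python
from math import log

def musa_okumoto_loglik(times, t_end, lam0, theta):
    """Exact NHPP log-likelihood for failure times observed on [0, t_end]
    under the Musa-Okumoto logarithmic Poisson model:
    sum_i log lam(t_i) - m(t_end), with
    lam(t) = lam0 / (lam0*theta*t + 1),
    m(t)   = log(lam0*theta*t + 1) / theta."""
    ll = sum(log(lam0 / (lam0 * theta * t + 1.0)) for t in times)
    return ll - log(lam0 * theta * t_end + 1.0) / theta
```

Maximizing this in (lam0, theta) yields the maximum likelihood estimates; the paper's contribution is a numerically stable EM iteration for obtaining them within the unifying RVS framework.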