Results 1–10 of 79
Bayes Factors
, 1995
Abstract
Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
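The arithmetic behind this definition can be sketched for a simple case. Everything below (the binomial experiment, the uniform prior under the alternative) is an illustrative assumption, not an example from the paper:

```python
from math import comb

def bayes_factor_binomial(k, n):
    """Bayes factor B01 for H0: p = 0.5 against H1: p ~ Uniform(0, 1),
    given k successes in n binomial trials: the ratio of the two
    marginal likelihoods of the observed data."""
    m0 = comb(n, k) * 0.5 ** n   # P(data | H0)
    m1 = 1.0 / (n + 1)           # P(data | H1) = integral of C(n,k) p^k (1-p)^(n-k) dp
    return m0 / m1

b01 = bayes_factor_binomial(7, 10)
# When the prior probability on the null is one-half (prior odds 1),
# the posterior odds of H0 equal B01, so:
posterior_h0 = b01 / (1 + b01)
```

With equal prior probabilities the Bayes factor and the posterior odds coincide, which is the sense in which Jeffreys's number summarizes the evidence on its own.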
Bayes factors and model uncertainty
 DEPARTMENT OF STATISTICS, UNIVERSITY OF WASHINGTON
, 1993
Abstract
Cited by 89 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
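One of the easy asymptotic approximations alluded to is the Schwarz (BIC) approximation, computable from maximized log-likelihoods alone. The function below is a generic sketch of that approximation; the numbers in the usage lines are hypothetical fits, not results from the paper:

```python
from math import exp, log

def bic_approx_bayes_factor(loglik1, loglik0, k1, k0, n):
    """Schwarz (BIC) approximation to the Bayes factor B10 for
    model 1 (k1 free parameters, maximized log-likelihood loglik1)
    against model 0 (k0 parameters, loglik0), with sample size n.
    The models need not be nested."""
    log_b10 = (loglik1 - loglik0) - 0.5 * (k1 - k0) * log(n)
    return exp(log_b10)

# Hypothetical fits: model 1 gains 5 log-likelihood units at the
# cost of one extra parameter, with n = 100 observations.
b10 = bic_approx_bayes_factor(-100.0, -105.0, 3, 2, 100)
```

Only the maximized log-likelihoods, parameter counts, and sample size are needed, which is exactly the output standard maximum-likelihood software already provides.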
Decision Theory in Expert Systems and Artificial Intelligence
 International Journal of Approximate Reasoning
, 1988
Abstract
Cited by 89 (18 self)
Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems sett...
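The decision-theoretic core these representations build on is expected-utility maximization over a probability distribution on states. A minimal sketch; the diagnostic actions, states, probabilities, and utilities are invented for illustration:

```python
def best_action(actions, states, prob, utility):
    """Return the action maximizing expected utility
    EU(a) = sum over states s of P(s) * U(a, s)."""
    return max(actions, key=lambda a: sum(prob[s] * utility[(a, s)] for s in states))

# Hypothetical two-action diagnostic decision.
prob = {"sick": 0.3, "healthy": 0.7}
utility = {("treat", "sick"): 90, ("treat", "healthy"): 60,
           ("wait", "sick"): 10, ("wait", "healthy"): 100}
choice = best_action(["treat", "wait"], ["sick", "healthy"], prob, utility)
# EU(treat) = 0.3*90 + 0.7*60 = 69; EU(wait) = 0.3*10 + 0.7*100 = 73
```

Evaluating an influence diagram ultimately reduces to this computation, with the belief network supplying the state probabilities.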
Inductive and Bayesian learning in medical diagnosis
 Applied Artificial Intelligence
, 1993
Abstract
Cited by 65 (11 self)
Abstract. Although successful in medical diagnostic problems, inductive learning systems were not widely accepted in medical practice. In this paper two different approaches to machine learning in medical applications are compared: the system for inductive learning of decision trees Assistant, and the naive Bayesian classifier. Both methodologies were tested in four medical diagnostic problems: localization of primary tumor, prognostics of recurrence of breast cancer, diagnosis of thyroid diseases, and rheumatology. The accuracy of automatically acquired diagnostic knowledge from stored data records is compared and the interpretation of the knowledge and the explanation ability of the classification process of each system is discussed. Surprisingly, the naive Bayesian classifier is superior to Assistant in classification accuracy and explanation ability, while the interpretation of the acquired knowledge seems to be equally valuable. In addition, two extensions to the naive Bayesian classifier are briefly described: dealing with continuous attributes, and discovering the dependencies among attributes.
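In its generic form, the naive Bayesian classifier compared here is straightforward to implement. The sketch below uses add-one smoothing and a toy two-attribute example, both our own choices rather than details from the paper:

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Naive Bayesian classifier over discrete attributes: picks the
    class maximizing P(class) * product over attributes of
    P(attribute value | class), with add-one (Laplace) smoothing."""

    def fit(self, rows, labels):
        self.class_counts = Counter(labels)
        self.value_counts = defaultdict(Counter)  # (class, attr index) -> value counts
        for row, label in zip(rows, labels):
            for i, value in enumerate(row):
                self.value_counts[(label, i)][value] += 1
        self.n = len(labels)
        return self

    def predict(self, row):
        def score(label):
            p = self.class_counts[label] / self.n
            for i, value in enumerate(row):
                counts = self.value_counts[(label, i)]
                # add-one smoothing; len(counts)+1 approximates the value-set size
                p *= (counts[value] + 1) / (sum(counts.values()) + len(counts) + 1)
            return p
        return max(self.class_counts, key=score)

# Toy example (invented data, not the medical datasets from the paper):
nb = NaiveBayes().fit(
    [("high", "yes"), ("high", "no"), ("low", "yes"), ("low", "no")],
    ["recur", "recur", "no-recur", "no-recur"])
```

The per-attribute conditional probabilities double as an explanation: each factor says how strongly one observed attribute value speaks for or against a diagnosis.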
A Theory of Term Weighting Based on Exploratory Data Analysis
 Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
, 1998
Abstract
Cited by 41 (1 self)
Techniques of exploratory data analysis are used to study the weight of evidence that the occurrence of a query term provides in support of the hypothesis that a document is relevant to an information need. In particular, the relationship between the document frequency and the weight of evidence is investigated. A correlation between document frequency normalized by collection size and the mutual information between relevance and term occurrence is uncovered. This correlation is found to be robust across a variety of query sets and document collections. Based on this relationship, a theoretical explanation of the efficacy of inverse document frequency for term weighting is developed which differs in both style and content from theories previously put forth. The theory predicts that a "flattening" of idf at both low and high frequency should result in improved retrieval performance. This altered idf formulation is tested on all TREC query sets. Retrieval results corroborate the predicti...
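The predicted "flattening" amounts to clamping the document frequency before taking the usual idf logarithm. A sketch, where the clamp points (an absolute floor of 5 and a ceiling at 20% of the collection) are arbitrary illustrative values, not the paper's fitted ones:

```python
from math import log

def flattened_idf(df, n_docs, low=5, high_frac=0.2):
    """Inverse document frequency with the predicted flattening at both
    extremes: document frequencies below `low` and above
    `high_frac * n_docs` are clamped before taking the log, so very
    rare and very common terms stop gaining or losing weight."""
    df = max(low, min(df, high_frac * n_docs))
    return log(n_docs / df)
```

Between the clamp points this is ordinary idf; outside them the weight is held constant rather than continuing to rise or fall with document frequency.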
Toward evidence-based medical statistics. 2: The Bayes factor
 Annals of Internal Medicine
, 1999
Abstract
Cited by 22 (0 self)
Bayesian inference is usually presented as a method for determining how scientific belief should be modified by data. Although Bayesian methodology has been one of the most active areas of statistical development in the past 20 years, medical researchers have been reluctant to embrace what they perceive as a subjective approach to data analysis. It is little understood that Bayesian methods have a data-based core, which can be used as a calculus of evidence. This core is the Bayes factor, which in its simplest form is also called a likelihood ratio. The minimum Bayes factor is objective and can be used in lieu of the P value as a measure of the evidential strength. Unlike P values, Bayes factors have a sound theoretical foundation and an interpretation that allows their use in both inference and decision making. Bayes factors show that P values greatly overstate the evidence against the null hypothesis. Most important, Bayes factors require the addition of background knowledge to be transformed into inferences: probabilities that a given conclusion is right or wrong. They make the distinction clear between experimental evidence and inferential conclusions while providing a framework in which to combine prior with current evidence. This paper is also available at
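For a Gaussian test statistic, the simplest minimum Bayes factor has a closed form, exp(-z²/2). The sketch below converts a two-sided P value to it, using the standard library's `NormalDist` for the inverse normal CDF:

```python
from math import exp
from statistics import NormalDist

def min_bayes_factor(p_value):
    """Minimum Bayes factor exp(-z^2 / 2) for a two-sided P value from a
    Gaussian test statistic: the smallest possible ratio of the
    likelihood of H0 to that of any alternative, i.e. the most the
    data can possibly say against the null."""
    z = NormalDist().inv_cdf(1 - p_value / 2)  # two-sided z-score
    return exp(-z * z / 2)
```

For example, P = 0.05 gives a minimum Bayes factor of about 0.15: even the best-supported alternative makes the null only about 7 times less likely, far weaker evidence than the "1 in 20" reading of the P value suggests.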
Bayesian hypothesis testing: A reference approach
 Internat. Statist. Rev
, 2002
Abstract
Cited by 17 (5 self)
For any probability model M ≡ {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 ≡ {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 ≡ {p(x | θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, θ, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, ω) π(θ, ω | x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M, and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫ δ π(δ | x) dδ, of the amount of information δ(θ0, θ, ω) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is ...
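For what we take to be the simplest case (a normal mean with known variance, under the reference prior), the intrinsic statistic reduces to a function of the ordinary z-statistic, d = (1 + z²)/2. The sketch below assumes that reduction; check it against the paper before relying on it:

```python
from math import sqrt

def intrinsic_statistic(xbar, theta0, sigma, n):
    """Intrinsic statistic d(theta0, x) for testing H0: theta = theta0
    on the mean of a normal model with known sigma, under the
    reference prior: the reference posterior expectation of the
    information lost by using the null model, which in this case
    reduces to (1 + z^2) / 2."""
    z = sqrt(n) * (xbar - theta0) / sigma
    return (1 + z * z) / 2

# d grows with the z-statistic: z = 2 gives d = 2.5, z = 3 gives d = 5,
# so "reject when d is sufficiently large" recovers familiar cutoffs.
```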
Information, relevance, and social decision-making: Some principles and results of decision-theoretic semantics
 Logic, Language, and Computation
, 1999
Abstract
Cited by 16 (0 self)
Abstract. I propose to treat natural language semantics as a branch of pragmatics, identified in the way of C.S. Peirce, F.P. Ramsey, and R. Carnap as decision theory. The notion of relevance plays a key role. It is explicated traditionally, distinguished from a recent homophone, and applied in its natural framework of issue-based communication. Empirical emphasis is on implicature and presupposition. Several theorems are stated and made use of. Items analyzed include ‘or’, ‘not’, ‘but’, ‘even’, and ‘also’. I conclude on parts of mind. This paper submits an approach to meaning, with a focus on broadly non-truth-conditional aspects of natural language. Semantics is treated as a branch of pragmatics, identified as decision theory in the way of C.S. Peirce, F.P. Ramsey, and of Rudolf Carnap in his later work. A key theoretical notion, distinguishable from, but intelligibly related to, information is the positive or negative relevance of a proposition or sentence to another. It is explicated in the probabilistic way familiar from Carnap and traditional in the philosophies of science and rational action. This makes it a representation of local epistemic context-change potential that is directional in a precisely specifiable sense and naturally related to utterers’ instrumental intentions. Relevance so defined is proposed as an explicans for Oswald Ducrot’s insightful ‘valeur argumentative’. In view of possible confusion among some students of language, it is contrasted with a more recent and idiosyncratic pretender to the appellation, due to Dan Sperber and Deirdre Wilson. The latter proposal turns out, at best, to paraphrase H.P. Grice’s non-directional concepts of ‘informativeness’ and ‘perspicuity’. (More informative designations are suggested for it, and for the eponymous linguistic doctrine emanating from parts of CNRS Paris and of UC London.)
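The traditional probabilistic explication of relevance referred to here can be sketched as a signed difference of probabilities; the formulation and numbers below are ours, for illustration only:

```python
def relevance(p_h_and_e, p_e, p_h):
    """Directional relevance of evidence E to hypothesis H, explicated
    probabilistically as P(H | E) - P(H): positive when E raises the
    probability of H, negative when it lowers it, zero when E is
    irrelevant to H."""
    return p_h_and_e / p_e - p_h

# E.g. P(H) = 0.3, P(E) = 0.5, P(H and E) = 0.25:
# P(H | E) = 0.5, so E is positively relevant to H with degree 0.2.
```

The sign carries the directionality: relevance of E to H need not equal relevance of H to E, which is what makes the notion suitable for modelling argumentative force.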
The Promise of Bayesian Inference for Astrophysics
, 1992
Abstract
Cited by 15 (0 self)
The ‘frequentist’ approach to statistics, currently dominating statistical practice in astrophysics, is compared to the historically older Bayesian approach, which is now growing in popularity in other scientific disciplines, and which provides unique, optimal solutions to well-posed problems. The two approaches address the same questions with very different calculations, but in simple cases often give the same final results, confusing the issue of whether one is superior to the other. Here frequentist and Bayesian methods are applied to problems where such a mathematical coincidence does not occur, allowing assessment of their relative merits based on their performance, rather than on philosophical argument. Emphasis is placed on a key distinction between the two approaches: Bayesian methods, based on comparisons among alternative hypotheses using the single observed data set, consider averages over hypotheses; frequentist methods, in contrast, average over hypothetical alternative...