Results 1–10 of 26
Belief Functions: The Disjunctive Rule of Combination and the Generalized Bayesian Theorem
Cited by 121 (6 self)
Abstract:
We generalize Bayes' theorem within the transferable belief model framework. The Generalized Bayesian Theorem (GBT) allows us to compute the belief over a space Θ given an observation x ⊆ X when one knows only the beliefs over X for every θi ∈ Θ. We also discuss the Disjunctive Rule of Combination (DRC) for distinct pieces of evidence. This rule allows us to compute the belief over X from the beliefs induced by two distinct pieces of evidence when one knows only that one of the pieces of evidence holds. The properties of the DRC and GBT and their uses for belief propagation in directed belief networks are analysed. The use of discounting factors is justified. The application of these rules is illustrated by an example of medical diagnosis.
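The DRC has a simple mass-function form: the combined mass of a set A is the total product mass over pairs of focal sets whose union is A. A minimal sketch in Python (the frame and mass values are illustrative, not from the paper's medical example):

```python
from itertools import product

def drc(m1, m2):
    """Disjunctive Rule of Combination: m(A) = sum of m1(B)*m2(C) over B ∪ C = A.
    Focal sets are frozensets; masses are floats summing to 1."""
    out = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b | c  # union -- a conjunctive rule would intersect here
        out[a] = out.get(a, 0.0) + mb * mc
    return out

# Two illustrative mass functions on the frame {'a', 'b'}
m1 = {frozenset({'a'}): 0.7, frozenset({'a', 'b'}): 0.3}
m2 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
combined = drc(m1, m2)  # {'a'}: 0.42, {'a','b'}: 0.58
```

Because the rule takes unions of focal sets, it never produces conflicting mass, which matches the "only one of the two pieces of evidence is known to hold" reading above.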
Toward evidence-based medical statistics. 2: The Bayes factor
Annals of Internal Medicine, 1999
Cited by 22 (0 self)
Abstract:
Bayesian inference is usually presented as a method for determining how scientific belief should be modified by data. Although Bayesian methodology has been one of the most active areas of statistical development in the past 20 years, medical researchers have been reluctant to embrace what they perceive as a subjective approach to data analysis. It is little understood that Bayesian methods have a data-based core, which can be used as a calculus of evidence. This core is the Bayes factor, which in its simplest form is also called a likelihood ratio. The minimum Bayes factor is objective and can be used in lieu of the P value as a measure of evidential strength. Unlike P values, Bayes factors have a sound theoretical foundation and an interpretation that allows their use in both inference and decision making. Bayes factors show that P values greatly overstate the evidence against the null hypothesis. Most important, Bayes factors require the addition of background knowledge to be transformed into inferences: probabilities that a given conclusion is right or wrong. They make the distinction clear between experimental evidence and inferential conclusions while providing a framework in which to combine prior with current evidence.
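For a test statistic that is approximately normal, the minimum Bayes factor mentioned above has the closed form exp(-z²/2) (Goodman's formula); a small sketch:

```python
from math import exp
from statistics import NormalDist

def min_bayes_factor(p_value):
    """Minimum Bayes factor exp(-z^2/2) for a two-sided p-value
    from an approximately normal test statistic (Goodman's formula)."""
    z = NormalDist().inv_cdf(1 - p_value / 2)
    return exp(-z * z / 2)

# p = 0.05 gives a minimum Bayes factor of about 0.15: the data are at best
# about 1/7 as probable under the null as under the best-supported alternative,
# far weaker evidence than the "1 in 20" reading of the p-value suggests.
```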
Bayesian Analysis For Simulation Input And Output
1997
Cited by 20 (8 self)
Abstract:
The paper summarizes some important results at the intersection of the fields of Bayesian statistics and stochastic simulation. Two statistical analysis issues for stochastic simulation are discussed in further detail from a Bayesian perspective. First, a review of recent work in input distribution selection is presented. Then, a new Bayesian formulation for the problem of output analysis for a single system is presented. A key feature is analyzing simulation output as a random variable whose parameters are an unknown function of the simulation's inputs. The distribution of those parameters is inferred from simulation output via Bayesian response-surface methods. A brief summary of Bayesian inference and decision making is included for reference.
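As a toy version of treating simulation output as data about an unknown parameter, here is the standard conjugate update for a normal mean with known output variance — a background ingredient of such analyses, not the paper's response-surface method:

```python
def posterior_normal_mean(prior_mean, prior_var, outputs, noise_var):
    """Conjugate normal-normal update: posterior over the unknown mean
    of simulation outputs, assuming a known output (noise) variance."""
    n = len(outputs)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(outputs) / noise_var)
    return post_mean, post_var

# A unit-variance prior at 0 pulled toward two replications with mean 1.0:
mean, var = posterior_normal_mean(0.0, 1.0, [1.0, 1.0], 1.0)  # -> (2/3, 1/3)
```

Each additional replication shrinks the posterior variance, which is how simulation effort translates into sharper inference in this framing.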
The Design Argument
2004
Cited by 10 (4 self)
Abstract:
The design argument is one of three main arguments for the existence of God; the others are the ontological argument and the cosmological argument. Unlike the ontological argument, the design argument and the cosmological argument are a posteriori. And whereas the cosmological argument could focus on any present event to get the ball rolling (arguing that it must trace back to a first cause, namely God), design theorists are usually more selective. Design arguments have typically been of two types – organismic and cosmic. Organismic design arguments start with the observation that organisms have features that adapt them to the environments in which they live and that exhibit a kind of delicacy. Consider, for example, the vertebrate eye. This organ helps organisms survive by permitting them to perceive objects in their environment. And were the parts of the eye even slightly different in their shape and assembly, the resulting organ would not allow us to see. Cosmic design arguments begin with an observation concerning features of the entire cosmos – the universe obeys simple laws, it has a kind of stability, its physical features permit life and intelligent life to exist. However, not all design arguments fit into these two neat compartments. Kepler, for example, thought that the face we see when we look at the moon requires explanation in terms of intelligent design. Still, the common thread is that design theorists
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
2010
Cited by 6 (4 self)
Abstract:
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio using the parameter value consistent with each hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally only known up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
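For a concrete case, the weight of evidence for one composite hypothesis over another is the ratio of maximized likelihoods; a grid-search sketch for a binomial parameter (the hypotheses and data are illustrative, and no nuisance parameter is involved here):

```python
from math import exp, log

def binom_loglik(theta, x, n):
    """Binomial log-likelihood, up to an additive constant."""
    return x * log(theta) + (n - x) * log(1 - theta)

def weight_of_evidence(x, n, grid=10000):
    """Max-likelihood ratio for H1: theta > 0.5 versus H0: theta <= 0.5."""
    thetas = [i / grid for i in range(1, grid)]  # open interval (0, 1)
    l1 = max(binom_loglik(t, x, n) for t in thetas if t > 0.5)
    l0 = max(binom_loglik(t, x, n) for t in thetas if t <= 0.5)
    return exp(l1 - l0)

# 8 successes in 10 trials: H1's best theta is 0.8, H0's is the boundary 0.5,
# so the evidence favors H1 by a factor of about 6.9.
w = weight_of_evidence(8, 10)
```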
Upper Probabilities Based Only on the Likelihood Function
Journal of the Royal Statistical Society, Series B, 1997
Cited by 6 (1 self)
Abstract:
In the problem of parametric statistical inference with a finite parameter space, we study some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior probabilities. The rules satisfy the likelihood principle and a basic consistency principle ("avoiding sure loss"), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. The rules can be used to eliminate nuisance parameters, and to interpret the likelihood function and use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile two general approaches to statistical inference: likelihood inference and coherent inference. Keywords: coherence, foundations of statistics, imprecise probabilities, likelihood function, ...
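One simple rule of this kind takes the upper probability of a set of parameter values to be its maximum relative likelihood, and the lower probability to be one minus the upper probability of the complement; a sketch for a finite parameter space (whether this matches the paper's preferred rule is an assumption):

```python
def likelihood_bounds(lik, hypothesis):
    """Lower/upper posterior probabilities of a set of parameter values,
    built from the relative (normalized) likelihood alone."""
    peak = max(lik.values())
    complement = set(lik) - set(hypothesis)
    upper = max(lik[t] for t in hypothesis) / peak
    lower = 1.0 - max(lik[t] for t in complement) / peak if complement else 1.0
    return lower, upper

# A constant likelihood gives the vacuous interval [0, 1],
# the behavior the abstract requires of such rules:
lo, up = likelihood_bounds({'t1': 1.0, 't2': 1.0}, {'t1'})  # -> (0.0, 1.0)
```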
Finding the Maximum Likelihood Tree is Hard
J. ACM, 2005
Cited by 6 (0 self)
Abstract:
Maximum likelihood (ML) is an increasingly popular optimality criterion for selecting evolutionary trees (Felsenstein, 1981). Finding optimal ML trees appears to be a very hard computational task, but for tractable cases, ML is the method of choice. In particular, algorithms and heuristics for ML take longer to run than algorithms and heuristics for the second major character-based criterion, maximum parsimony (MP). However, while MP has been known to be NP-complete for over 20 years (Graham and Foulds, 1982; Day, Johnson, and Sankoff, 1986), such a hardness result for ML has so far eluded researchers in the field. An important work by Tuffley and Steel (1997) proves quantitative relations between the parsimony values of given sequences and the corresponding log likelihood values. However, a direct application of their work would only give an exponential time reduction from MP to ML. Another step in this direction has recently been made by Addario-Berry et al. (2004), who proved that ancestral maximum likelihood (AML) is NP-complete. AML “lies in between” the two problems, having some properties of MP and some properties of ML. Still, the AML proof is not directly applicable to the ML problem. We resolve the question, showing that “regular” ML on phylogenetic trees is indeed intractable. Our reduction follows the vertex cover reductions for MP (Day et al.) and AML (Addario-Berry et al.), but its starting point is an approximation version of vertex cover, known as gap vc. The crux of our work is not the reduction, but its correctness proof. The proof goes through a series of tree modifications, while controlling the likelihood losses at each step, using the bounds of Tuffley and Steel. The proof can be viewed as correlating the value of any ML solution to an arbitrarily close approximation to vertex cover.
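While finding the optimal MP tree is NP-complete, scoring a fixed tree is easy: Fitch's small-parsimony algorithm counts the minimum number of state changes in linear time. A sketch for one character on a rooted binary tree (the tree and states are illustrative):

```python
def fitch(tree):
    """Fitch's small-parsimony algorithm for one character.
    A leaf is a state string; an internal node is a (left, right) pair.
    Returns (candidate state set, minimum number of changes)."""
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    shared = ls & rs
    if shared:  # children agree on some state: no extra change needed
        return shared, lc + rc
    return ls | rs, lc + rc + 1  # disagreement costs one change

# Taxa with states A, A, G, G on the tree ((A,A),(G,G)): one change suffices.
states, changes = fitch((("A", "A"), ("G", "G")))  # changes == 1
```

The hardness result above concerns searching over all topologies, not this per-tree scoring step.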
The Contest Between Parsimony and Likelihood
Cited by 3 (0 self)
Abstract:
In a “classic” phylogenetic inference problem, the observed taxa are assumed to be the leaves of a bifurcating tree and the goal is to infer just the “topology” of the tree (i.e., the formal tree structure linking the extant taxa at the tips), not the amount of time between branching events, the amount of evolution that has taken place on branches, or the character states of interior vertices. Two of the main methods that biologists now use to solve such problems are maximum likelihood (ML) and maximum parsimony (MP); distance methods constitute a third approach, which will not be discussed here. ML seeks to find the tree topology that confers the highest probability on the observed characteristics of tip species. MP seeks to find the tree topology that requires the fewest changes in character state to produce the characteristics of those tip species. Besides saying what the “best” tree is for a given data set, both methods also provide an ordering of trees, from best to worst. The two methods sometimes disagree about this ordering—most vividly, when they disagree about which tree is best supported by the evidence. For this reason, biologists have had to address this methodological dispute head on, rather than setting it aside as a merely “philosophical” dispute of dubious relevance to scientists “in the trenches.” The main objection that has been made against ML is that it requires the adoption of a model of the evolutionary process that one has scant reason to think is true. ML requires a process model because hypotheses that specify a tree topology (and nothing more) do not, by themselves, confer probabilities on the observations. The situation here is familiar to philosophers as an instance of “Duhem’s Thesis.” Pierre Duhem was a French philosopher of science who contended that physical theories do not entail claims about observations unless they are supplemented with auxiliary assumptions (Duhem,