Results 1–10 of 44
Making the most of statistical analyses: Improving interpretation and presentation
American Journal of Political Science, 2000
"... Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and express the appropriate degree of certainty about these quantities. ..."
Abstract

Cited by 164 (18 self)
Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and express the appropriate degree of certainty about these quantities. In this article, we offer an approach, built on the technique of statistical simulation, to extract the currently overlooked information from any statistical method and to interpret and present it in a reader-friendly manner. Using this technique requires some expertise, ...
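The simulation technique the abstract refers to is easy to sketch: draw many coefficient vectors from the estimated sampling distribution of the fitted parameters, push each draw through the model, and summarise the resulting distribution of the quantity of interest. A minimal sketch on synthetic data (the logistic model, covariate profile, and all numbers below are invented for illustration):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(int)

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)

# Simulate 1000 coefficient vectors ~ N(beta_hat, Cov(beta_hat)).
sims = rng.multivariate_normal(fit.params, fit.cov_params(), size=1000)

# Quantity of interest: Pr(y = 1) when x is one SD above its mean.
profile = np.array([1.0, x.mean() + x.std()])
probs = 1 / (1 + np.exp(-sims @ profile))
print(f"Pr(y=1) = {probs.mean():.3f}, 95% interval "
      f"[{np.percentile(probs, 2.5):.3f}, {np.percentile(probs, 97.5):.3f}]")

Reporting the predicted probability with its 95% interval expresses the uncertainty on the scale readers care about, rather than on the scale of raw coefficients.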
Decision-Theoretic Deliberation Scheduling for Problem Solving in . . .
ARTIFICIAL INTELLIGENCE, 1994
"... We are interested in the problem faced byanagent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problemsolving tasks in which the time spent in ..."
Abstract

Cited by 157 (3 self)
We are interested in the problem faced by an agent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problem-solving tasks in which the time spent in decision making affects the quality of the responses generated by a system.
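As a rough illustration of what managing computational resources can mean, here is a toy greedy scheduler (not the paper's algorithm) that gives time slices to whichever anytime computation currently promises the largest marginal gain in expected solution quality; the performance profiles are hypothetical:

import math

# Expected quality of each anytime solver as a function of time spent.
profiles = {
    "route_planning": lambda t: 1 - math.exp(-0.5 * t),
    "sensor_fusion":  lambda t: 1 - math.exp(-0.2 * t),
}

spent = {task: 0.0 for task in profiles}
slice_ = 0.1
for _ in range(50):  # 5.0 time units of total deliberation budget
    # Marginal gain of one more slice for each task.
    gains = {task: f(spent[task] + slice_) - f(spent[task])
             for task, f in profiles.items()}
    best = max(gains, key=gains.get)
    spent[best] += slice_

print(spent)  # most time goes to the task with the steeper profile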
Fuzzy sets and probability: Misunderstandings, bridges and gaps
In Proceedings of the Second IEEE Conference on Fuzzy Systems, 1993
"... This paper is meant to survey the literature pertaining to this debate, and to try to overcome misunderstandings and to supply access to many basic references that have addressed the "probability versus fuzzy set" challenge. This problem has not a single facet, as will be claimed here. Moreover it s ..."
Abstract

Cited by 39 (5 self)
This paper is meant to survey the literature pertaining to this debate, to overcome misunderstandings, and to supply access to many basic references that have addressed the "probability versus fuzzy set" challenge. This problem does not have a single facet, as will be claimed here. Moreover, it seems that many controversies might have been avoided if the protagonists had been patient enough to build a common language and to share their scientific backgrounds. The main points made here are as follows. i) Fuzzy set theory is a consistent body of mathematical tools. ii) Although fuzzy sets and probability measures are distinct, several bridges relating them have been proposed that should reconcile opposite points of view; especially, possibility theory stands at the crossroads between fuzzy sets and probability theory. iii) Mathematical objects that behave like fuzzy sets exist in probability theory; this does not mean that fuzziness is reducible to randomness. Indeed, iv) there are ways of approaching fuzzy sets and possibility theory that owe nothing to probability theory. Interpretations of probability theory are multiple, especially frequentist versus subjectivist views (Fine [31]); several interpretations of fuzzy sets also exist. Some interpretations of fuzzy sets are in agreement with probability calculus and some are not. The paper is structured as follows: first we address some classical misunderstandings between fuzzy sets and probabilities, which must be resolved before any discussion can take place. Then we consider probabilistic interpretations of membership functions, which may help in membership function assessment. We also point out non-probabilistic interpretations of fuzzy sets. The next section examines the literature on possibility-probability transformati...
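Point ii) is easy to make concrete: a probability measure is additive, while a possibility measure is maxitive, and the two calculi coexist without one reducing to the other. A small sketch with invented distributions:

# Probability is additive; possibility is maxitive. Values are invented.
prob = {"a": 0.5, "b": 0.25, "c": 0.25}   # sums to 1
poss = {"a": 1.0, "b": 0.75, "c": 0.5}    # max is 1 (normalisation)

def P(event):   # additive: P(A) is the sum over atoms of A
    return sum(prob[w] for w in event)

def Pi(event):  # maxitive: Pi(A) is the max over atoms of A
    return max(poss[w] for w in event)

A, B = {"a"}, {"b", "c"}
assert P(A | B) == P(A) + P(B)          # additivity on disjoint events
assert Pi(A | B) == max(Pi(A), Pi(B))   # maxitivity holds for any events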
Could Fisher, Jeffreys, and Neyman Have Agreed on Testing?
2002
"... Ronald Fisher advocated testing using pvalues; Harold Jeffreys proposed use of objective posterior probabilities of hypotheses; and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. ..."
Abstract

Cited by 29 (2 self)
Ronald Fisher advocated testing using p-values; Harold Jeffreys proposed use of objective posterior probabilities of hypotheses; and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches.
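The three positions can be put side by side on one toy dataset. The sketch below uses synthetic data and assumes, purely for illustration, a N(0, 1) prior on the mean under the alternative; it reports Fisher's p-value, a Jeffreys-style Bayes factor, and a Neyman-style decision at a fixed 5% level:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.4, scale=1.0, size=30)
n, xbar, sigma = len(x), x.mean(), 1.0   # sigma treated as known
z = xbar / (sigma / np.sqrt(n))

# Fisher: report the p-value as a measure of evidence against H0: mu = 0.
p = 2 * stats.norm.sf(abs(z))

# Jeffreys: Bayes factor. The marginal of xbar is N(0, sigma^2/n) under
# H0 and N(0, tau^2 + sigma^2/n) under H1 with the assumed prior.
tau = 1.0
bf01 = (stats.norm.pdf(xbar, 0, sigma / np.sqrt(n)) /
        stats.norm.pdf(xbar, 0, np.sqrt(tau**2 + sigma**2 / n)))

# Neyman: a fixed-error-rate decision at alpha = 0.05, no evidence measure.
reject = p < 0.05

print(f"p = {p:.4f}, BF01 = {bf01:.3f}, reject at 5%: {reject}")

The output makes the disagreement tangible: the three quantities answer different questions, which is why reconciling the schools is non-trivial.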
Variational Bayesian Inference for fMRI time series
 NeuroImage
"... We describe a Bayesian estimation and inference procedure for fMRI time series based on the use of General Linear Models with Autoregressive (AR) error processes. We make use of the Variational Bayesian (VB) framework which approximates the true posterior density with a factorised density. The fidel ..."
Abstract

Cited by 28 (11 self)
We describe a Bayesian estimation and inference procedure for fMRI time series based on the use of General Linear Models with Autoregressive (AR) error processes. We make use of the Variational Bayesian (VB) framework, which approximates the true posterior density with a factorised density. The fidelity of this approximation is verified via Gibbs sampling. The VB approach provides a natural extension to previous Bayesian analyses which have used Empirical Bayes. VB has the advantage of taking into account the variability of hyperparameter estimates with little additional computational effort. Further, VB allows for automatic selection of the order of the AR process. Results are shown on simulated data and on data from an event-related fMRI experiment.
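To see what a factorised approximation involves, here is a hedged sketch of VB on a much simpler model than the paper's GLM with AR errors: Bayesian linear regression with known noise precision beta, prior w ~ N(0, alpha^-1 I), and a Gamma(a0, b0) prior on alpha. The posterior p(w, alpha | y) is approximated by q(w) q(alpha) via the standard conjugate coordinate updates; all data are synthetic:

import numpy as np

rng = np.random.default_rng(2)
n, d, beta = 100, 5, 25.0                 # beta = 1/noise variance, known
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=beta**-0.5, size=n)

a0, b0 = 1e-2, 1e-2                       # broad Gamma hyperprior on alpha
E_alpha = 1.0                             # initial E_q[alpha]
for _ in range(50):
    # q(w) = N(m, S), a Gaussian, given the current E_q[alpha].
    S = np.linalg.inv(E_alpha * np.eye(d) + beta * X.T @ X)
    m = beta * S @ X.T @ y
    # q(alpha) = Gamma(a, b), given the current moments of q(w).
    a = a0 + d / 2
    b = b0 + 0.5 * (m @ m + np.trace(S))
    E_alpha = a / b

print(np.round(m, 2))       # posterior mean weights
print(np.round(w_true, 2))  # ground truth for comparison

The hyperparameter alpha is handled automatically by a few extra closed-form updates; this is the kind of advantage over Empirical Bayes that the abstract points to.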
Probabilistic logic and probabilistic networks
2008
"... While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches ..."
Abstract

Cited by 17 (13 self)
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences. Specifically, we show in Part I that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question: the standard probabilistic semantics (which takes probability functions as models), probabilistic argumentation (which considers the probability of a hypothesis being a logical consequence of the available evidence), evidential probability (which handles reference classes and frequency data), classical statistical inference ...
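The standard probabilistic semantics lends itself to a compact sketch in the style of Nilsson's probabilistic logic: probability functions over possible worlds are the models, and entailed bounds on a conclusion follow from linear programming. The premises below are invented for illustration: P(A) in [0.6, 0.8] and P(A -> B) >= 0.9, from which we bound P(B).

import numpy as np
from scipy.optimize import linprog

# Worlds over (A, B): (0,0), (0,1), (1,0), (1,1).
A_true   = np.array([0, 0, 1, 1])   # indicator of A in each world
A_impl_B = np.array([1, 1, 0, 1])   # indicator of A -> B
B_true   = np.array([0, 1, 0, 1])   # indicator of B

A_ub = np.array([-A_true, A_true, -A_impl_B])   # encode the premises
b_ub = np.array([-0.6, 0.8, -0.9])
A_eq, b_eq = np.ones((1, 4)), np.array([1.0])   # probabilities sum to 1

lo = linprog(B_true,  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
hi = -linprog(-B_true, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
print(f"P(B) is entailed to lie in [{lo:.2f}, {hi:.2f}]")  # [0.50, 1.00]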
Information, Divergence and Risk for Binary Experiments
JOURNAL OF MACHINE LEARNING RESEARCH, 2009
"... We unify fdivergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROCcurves and statistical information. We do this by systematically studying integral and variational representations of these various objects and in so doing identify their primitives which all ..."
Abstract

Cited by 17 (6 self)
We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC curves and statistical information. We do this by systematically studying integral and variational representations of these various objects, and in so doing identify their primitives, all of which are related to cost-sensitive binary classification. As well as developing relationships between generative and discriminative views of learning, the new machinery leads to tight and more general surrogate regret bounds and generalised Pinsker inequalities relating f-divergences to variational divergence. The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates Maximum Mean Discrepancy to Fisher Linear Discriminants.
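Two of the unified objects, and one inequality relating them, fit in a few lines. The sketch below treats KL divergence and total variation as instances of the generic f-divergence D_f(P||Q) = sum_i q_i f(p_i/q_i) and checks the classical Pinsker inequality TV <= sqrt(KL/2); the distributions are arbitrary examples:

import numpy as np

def f_divergence(p, q, f):
    # Generic f-divergence: D_f(P||Q) = sum_i q_i * f(p_i / q_i).
    return float(np.sum(q * f(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

kl = f_divergence(p, q, lambda t: t * np.log(t))        # f(t) = t log t
tv = f_divergence(p, q, lambda t: 0.5 * np.abs(t - 1))  # f(t) = |t-1|/2

print(f"KL = {kl:.4f} nats, TV = {tv:.4f}")
assert tv <= np.sqrt(kl / 2)   # Pinsker's inequality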
When did Bayesian inference become “Bayesian”?
BAYESIAN ANALYSIS, 2006
"... While Bayes’ theorem has a 250year history, and the method of inverse probability that flowed from it dominated statistical thinking into the twentieth century, the adjective “Bayesian” was not part of the statistical lexicon until relatively recently. This paper provides an overview of key Bayesi ..."
Abstract

Cited by 10 (1 self)
While Bayes’ theorem has a 250-year history, and the method of inverse probability that flowed from it dominated statistical thinking into the twentieth century, the adjective “Bayesian” was not part of the statistical lexicon until relatively recently. This paper provides an overview of key Bayesian developments, beginning with Bayes’ posthumously published 1763 paper and continuing up through approximately 1970, including the period of time when “Bayesian” emerged as the label of choice for those who advocated Bayesian methods.