Results 1–10 of 76
Bayes Factors
, 1995
Abstract

Cited by 1012 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
Bayes factors and model uncertainty
 Department of Statistics, University of Washington
, 1993
Abstract

Cited by 90 (6 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are: from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory; Bayes factors offer a way of evaluating evidence in favor of a null hypothesis; Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis; Bayes factors are very general, and do not require alternative models to be nested; several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods; in "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance ...
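The asymptotic approximation mentioned in the abstract is, in its best-known form, the Schwarz/BIC approximation: log BF01 is roughly minus half the difference in BIC between the two models, computable from the maximized log-likelihoods that any standard fitting package reports. A minimal sketch (the function names and example numbers are illustrative, not from the paper):

```python
import math

def bic(loglik, n_params, n_obs):
    """Schwarz's Bayesian information criterion for a fitted model."""
    return -2.0 * loglik + n_params * math.log(n_obs)

def approx_bayes_factor(loglik0, d0, loglik1, d1, n):
    """Approximate Bayes factor BF01 = P(data | H0) / P(data | H1)
    via the BIC approximation: log BF01 ~ -(BIC0 - BIC1) / 2."""
    return math.exp(-(bic(loglik0, d0, n) - bic(loglik1, d1, n)) / 2.0)

# Hypothetical fit: the alternative gains 2 log-likelihood units
# at the cost of one extra parameter, with n = 100 observations.
bf01 = approx_bayes_factor(loglik0=-100.0, d0=1,
                           loglik1=-98.0, d1=2, n=100)
```

With these numbers the extra parameter's penalty slightly outweighs its fit improvement, so BF01 comes out a little above 1, i.e. mild evidence for the null.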
Learning Bayesian Belief Networks Based on the Minimum Description Length Principle: Basic Properties
, 1996
Abstract

Cited by 52 (0 self)
This paper was partially presented at the 9th conference on Uncertainty in Artificial Intelligence, July 1993.
Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models
 Journal of Econometrics
, 2001
Cited by 31 (0 self)
The Impact of Bootstrap Methods on Time Series Analysis
 Statistical Science
, 2003
Abstract

Cited by 24 (5 self)
Sparked by Efron's seminal paper, the decade of the 1980s was a period of active research on bootstrap methods for independent data, mainly i.i.d. or regression setups. By contrast, in the 1990s much research was directed towards resampling dependent data, for example, time series and random fields. Consequently, the availability of valid nonparametric inference procedures based on resampling and/or subsampling has freed practitioners from the necessity of resorting to simplifying assumptions such as normality or linearity that may be misleading.
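One standard resampling scheme for dependent data of the kind surveyed here is the moving-block bootstrap: instead of resampling individual observations (which destroys serial dependence), one resamples contiguous blocks. A minimal sketch, with an illustrative fixed block length:

```python
import random

def block_bootstrap(series, block_len, rng=None):
    """One moving-block bootstrap replicate of a time series.

    Resampling whole contiguous blocks (rather than single points)
    preserves the short-range dependence structure of the data.
    Assumes block_len <= len(series).
    """
    rng = rng or random.Random()
    n = len(series)
    n_blocks = -(-n // block_len)  # ceiling division
    out = []
    for _ in range(n_blocks):
        start = rng.randrange(n - block_len + 1)  # random block start
        out.extend(series[start:start + block_len])
    return out[:n]  # trim to the original length
```

In practice the block length is a tuning parameter that must grow with the sample size for the method to be valid; the sketch above leaves that choice to the user.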
A Discussion of Parameter and Model Uncertainty in Insurance
 Insurance: Mathematics and Economics
, 2000
Abstract

Cited by 24 (7 self)
In this paper we consider the process of modelling uncertainty. In particular we are concerned with making inferences about some quantity of interest which, at present, has been unobserved. Examples of such a quantity include the probability of ruin of a surplus process, the accumulation of an investment, the level of surplus or deficit in a pension fund and the future volume of new business in an insurance company. Uncertainty in this quantity of interest, y, arises from three sources: uncertainty due to the stochastic nature of a given model; uncertainty in the values of the parameters in a given model; and uncertainty in the model underlying what we are able to observe and determining the quantity of interest. It is common in actuarial science to find that the first source of uncertainty is the only one which receives rigorous attention. A limited amount of research in recent years has considered the effect of parameter uncertainty, while there is still considerable scope ...
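The three sources of uncertainty listed in the abstract can be propagated jointly by nested Monte Carlo sampling: draw a model, then parameters for that model, then an outcome. A minimal sketch under that interpretation (all names and the two-model example are illustrative, not from the paper):

```python
import random

def predictive_draws(models, n_draws, rng):
    """Draw from the predictive distribution of y, propagating model,
    parameter, and process uncertainty in turn.

    `models` is a list of (weight, sample_params, sample_y) triples:
    sample_params(rng) draws parameters, sample_y(params, rng) draws y.
    """
    weights = [w for w, _, _ in models]
    draws = []
    for _ in range(n_draws):
        # 1. model uncertainty: pick a model by its posterior weight
        _, sample_params, sample_y = rng.choices(models, weights=weights)[0]
        # 2. parameter uncertainty: draw parameters for that model
        params = sample_params(rng)
        # 3. process uncertainty: draw the outcome itself
        draws.append(sample_y(params, rng))
    return draws
```

Using only the first model with fixed parameters would reproduce the common actuarial practice the abstract criticizes, where only process uncertainty is quantified.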
The estimation of the model order in exponential families
 IEEE Trans. Inform. Theory
, 1989
Abstract

Cited by 23 (5 self)
The estimation of the model order in exponential families is studied. Estimators are sought that achieve a high exponential rate of decrease in the underestimation probability while keeping the overestimation probability exponent at a certain prescribed level. It is assumed that a ...
Application of change detection to dynamic contact sensing
 The International Journal of Robotics Research
, 1994
Abstract

Cited by 21 (1 self)
The forces of contact during manipulation convey substantial information about the state of the manipulation.
Model selection and prediction: Normal regression
 Ann. Inst. Statist. Math
, 1994
Abstract

Cited by 15 (3 self)
This paper discusses the topic of model selection for finite-dimensional normal regression models. We compare model selection criteria according to prediction errors based upon prediction with refitting and prediction without refitting. We provide a new lower bound for prediction without refitting, while a lower bound for prediction with refitting was given by Rissanen. Moreover, we specify a set of sufficient conditions for a model selection criterion to achieve these bounds. The achievability of the two bounds by the following selection rules is then addressed: Rissanen's accumulated prediction error criterion (APE), his stochastic complexity criterion, AIC, BIC and the FPE criteria. In particular, we provide upper bounds on the overfitting and underfitting probabilities needed for achievability. Finally, we offer a brief discussion of the issue of finite-dimensional vs. infinite-dimensional model assumptions. Key words and phrases: Model selection, prediction lower bound, accumulated ...
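For normal regression, two of the criteria compared in this abstract, AIC and BIC, reduce to simple functions of each candidate model's residual sum of squares. A minimal sketch of criterion-based order selection (the example RSS values are made up for illustration):

```python
import math

def aic(rss, k, n):
    """AIC for a normal regression model: n observations, k parameters,
    residual sum of squares rss (up to an additive constant)."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, k, n):
    """BIC replaces AIC's 2k penalty with the heavier k * log(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def select_order(rss_by_order, n, criterion=aic):
    """Pick the model order minimizing the chosen criterion.

    rss_by_order maps each candidate order k to its fitted RSS."""
    return min(rss_by_order, key=lambda k: criterion(rss_by_order[k], k, n))

# Hypothetical nested fits on n = 50 points: order 2 buys a large
# drop in RSS, order 3 only a marginal one.
rss = {1: 100.0, 2: 40.0, 3: 39.5}
```

Here both criteria pick order 2; with a more marginal improvement at order 2 the heavier log(n) penalty of BIC could make the two criteria disagree.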
Adaptive estimation in autoregression or βmixing regression via model selection
 Ann. Statist
, 2001
Abstract

Cited by 13 (4 self)
We study the problem of estimating some unknown regression function in a β-mixing dependent framework. To this end, we consider some collection of models which are finite-dimensional spaces. A penalized least-squares estimator (PLSE) is built on a data-driven selected model among this collection. We state non-asymptotic risk bounds for this PLSE and give several examples where the procedure can be applied (autoregression, regression with arithmetically β-mixing design points, regression with mixing errors, estimation in additive frameworks, estimation of the order of the autoregression, ...). In addition we show that under a weak moment condition on the errors, our estimator is adaptive in the minimax sense simultaneously over some family of Besov balls.