Results 1–10 of 23
Bayes Factors, 1995
Abstract

Cited by 1176 (71 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology.
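The definition in this abstract can be illustrated with a minimal sketch: for a coin flipped n times with k heads, compare H0: p = 1/2 against an alternative H1. The uniform prior on p under H1 is our own illustrative assumption, not a choice from the paper; under it the marginal likelihood integrates to 1/(n+1), giving a closed form for the Bayes factor.

```python
from math import comb

def bayes_factor_binomial(k, n):
    """Bayes factor BF01 for H0: p = 1/2 vs H1: p ~ Uniform(0, 1),
    given k successes in n Bernoulli trials (illustrative priors).

    Under H0 the marginal likelihood of the data is C(n,k) * 0.5**n;
    under the uniform prior it is C(n,k) * B(k+1, n-k+1) = 1/(n+1).
    Hence BF01 = C(n,k) * 0.5**n * (n + 1).
    """
    return comb(n, k) * 0.5 ** n * (n + 1)

# 60 heads in 100 flips: BF01 is close to 1, so the data barely
# discriminate between the two hypotheses.
bf = bayes_factor_binomial(60, 100)
```

Because the prior probability on the null is taken to be one-half, this number is also the posterior odds of H0, matching the definition quoted above.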
The variable selection problem
Journal of the American Statistical Association, 2000
Abstract

Cited by 44 (3 self)
The problem of variable selection is one of the most pervasive model selection problems in statistical applications. Often referred to as the problem of subset selection, it arises when one wants to model the relationship between a variable of interest and a subset of potential explanatory variables or predictors, but there is uncertainty about which subset to use. This vignette reviews some of the key developments which have led to the wide variety of approaches for this problem.
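The subset selection problem described above can be sketched with the simplest possible approach: fit ordinary least squares on every subset of the candidate predictors and score each fit with a criterion such as BIC. The function name, the BIC scoring rule, and the synthetic data are our own illustrative choices, not taken from the article.

```python
import itertools
import numpy as np

def best_subset_bic(X, y):
    """Exhaustive subset selection: fit OLS (with intercept) on every
    subset of the columns of X and return the subset minimizing
    BIC = n * log(RSS / n) + k * log(n), where k counts parameters."""
    n, p = X.shape
    best_bic, best_subset = np.inf, ()
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            bic = n * np.log(rss / n) + (k + 1) * np.log(n)
            if bic < best_bic:
                best_bic, best_subset = bic, subset
    return best_subset

# Synthetic example: only columns 0 and 2 truly enter the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)
selected = best_subset_bic(X, y)
```

Exhaustive search is only feasible for a handful of predictors (2^p subsets); the approaches surveyed in the article exist precisely because realistic problems need something smarter.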
Performance Prediction for Exponential Language Models
Abstract

Cited by 11 (3 self)
We investigate the task of performance prediction for language models belonging to the exponential family. First, we attempt to empirically discover a formula for predicting test set cross-entropy for n-gram language models. We build models over varying domains, data set sizes, and n-gram orders, and perform linear regression to see whether we can model test set performance as a simple function of training set performance and various model statistics. Remarkably, we find a simple relationship that predicts test set performance with a correlation of 0.9997. We analyze why this relationship holds and show that it holds for other exponential language models as well, including class-based models and minimum discrimination information models. Finally, we discuss how this relationship can be applied to improve language model performance.
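The regression step the abstract describes can be sketched as follows. All of the data here is synthetic and the linear relationship (slope 1.0 on training cross-entropy, 0.9 on a model-size statistic) is invented for illustration; it is not the formula or the coefficients reported in the paper.

```python
import numpy as np

# Hypothetical observations for 50 models: training cross-entropy and a
# model-size statistic, with test cross-entropy generated from an
# assumed linear relationship plus small noise.
rng = np.random.default_rng(1)
h_train = rng.uniform(6.0, 9.0, size=50)      # synthetic, per-token
size_stat = rng.uniform(0.1, 1.0, size=50)    # synthetic model statistic
h_test = h_train + 0.9 * size_stat + rng.normal(scale=0.01, size=50)

# Linear regression: h_test ~ a + b * h_train + c * size_stat
A = np.column_stack([np.ones(50), h_train, size_stat])
coef, *_ = np.linalg.lstsq(A, h_test, rcond=None)

# How well does the fitted line predict held-out performance?
pred = A @ coef
corr = np.corrcoef(pred, h_test)[0, 1]
```

If test performance really is a simple function of training performance and model statistics, the fitted correlation will be near 1, which is the kind of evidence (0.9997 in the paper) the authors report.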
Factorized asymptotic Bayesian inference for mixture models
In AISTATS, 2012
Abstract

Cited by 2 (1 self)
This paper proposes a novel Bayesian approximate inference method for mixture modeling. Our key idea is to factorize the marginal log-likelihood using a variational distribution over latent variables. An asymptotic approximation, a factorized information criterion (FIC), is obtained by applying the Laplace method to each of the factorized components. In order to evaluate FIC, we propose factorized asymptotic Bayesian inference (FAB), which maximizes an asymptotically consistent lower bound of FIC. FIC and FAB have several desirable properties: 1) asymptotic consistency with the marginal log-likelihood, 2) automatic component selection on the basis of an intrinsic shrinkage mechanism, and 3) parameter identifiability in mixture modeling. Experimental results show that FAB outperforms state-of-the-art VB methods.
Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection
, 2000
Abstract

Cited by 1 (0 self)
Investigators interested in model order estimation have tended to divide themselves into widely separated camps; this survey of the contributions of Schwarz, Wallace, Rissanen, and their coworkers attempts to build bridges between the various viewpoints, illuminating connections which may have previously gone unnoticed and clarifying misconceptions which seem to have propagated in the applied literature. Our tour begins with Schwarz's approximation of Bayesian integrals via Laplace's method. We then introduce the concepts underlying Rissanen's minimum description length principle via a Bayesian scenario with a known prior; this provides the groundwork for understanding his more complex non-Bayesian MDL which employs a "universal" encoding of the integers. Rissanen's method of parameter truncation is contrasted with that employed in various versions of Wallace's minimum message length criteria.
A Bayes factor with reasonable model selection consistency for ANOVA model
, 906
Abstract
For the balanced ANOVA setup, we propose a new closed-form Bayes factor without an integral representation, which is nevertheless based on a fully Bayes method, with reasonable model selection consistency for two asymptotic situations (either the number of levels of the factor or the number of replications in each level goes to infinity). Exact analytical calculation of the marginal density under a special choice of the priors enables such a Bayes factor.
unknown title
Abstract
Mycobacterium bovis shedding patterns from experimentally infected calves and the effect of concurrent infection with bovine viral diarrhoea virus
Interpretable Projection Pursuit, 1989
Abstract
The goal of this thesis is to modify projection pursuit by trading accuracy for interpretability. The modification produces a more parsimonious and understandable model without sacrificing the structure which projection pursuit seeks. The method retains the nonlinear versatility of projection pursuit while clarifying the results. Following an introduction which outlines the dissertation, the first and second chapters contain the technique as applied to exploratory projection pursuit and projection pursuit regression respectively. The interpretability of a description is measured as the simplicity of the coefficients which define its linear projections. Several interpretability indices for a set of vectors are defined based on the ideas of rotation in factor analysis and entropy. The two methods require slightly different indices due to their contrary goals. A roughness penalty weighting approach is used to search for a more parsimonious
The Dividend and Share Repurchase Policies of Canadian Firms: Empirical Evidence Based on a New Research Design (ISSN 0924-7815), 2000
Abstract
Ronald van Dijk is a senior research analyst at ING Investment Management in The Hague.