Results 1-10 of 17
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity
Bayesian Statistics, 2007
Abstract

Cited by 26 (2 self)
The integrated likelihood (also called the marginal likelihood or the normalizing constant) is a central quantity in Bayesian model selection and model averaging. It is defined as the integral over the parameter space of the likelihood times the prior density. The Bayes factor for model comparison and Bayesian testing is a ratio of integrated likelihoods, and the model weights in Bayesian model averaging are proportional to the integrated likelihoods. We consider the estimation of the integrated likelihood from posterior simulation output, aiming at a generic method that uses only the likelihoods from the posterior simulation iterations. The key is the harmonic mean identity, which says that the reciprocal of the integrated likelihood is equal to the posterior harmonic mean of the likelihood. The simplest estimator based on the identity is thus the harmonic mean of the likelihoods. While this is an unbiased and simulation-consistent estimator, its reciprocal can have infinite variance and so it is unstable in general. We describe two methods for stabilizing the harmonic mean estimator. In the first one, the parameter space is reduced in such a way that the modified estimator involves a harmonic mean of heavier-tailed densities, thus resulting in a finite variance estimator. The resulting ...
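The harmonic mean identity can be checked numerically in a toy conjugate model where the integrated likelihood has a closed form. The sketch below (hypothetical values, standard library only) computes the simple harmonic mean estimator that the abstract describes; in this one-observation normal model the posterior can be sampled directly.

```python
import math
import random

random.seed(1)

# Toy model: one observation y ~ N(theta, 1), prior theta ~ N(0, 1).
# The integrated likelihood is then the N(0, 2) density at y, and the
# posterior is N(y/2, 1/2), so we can sample from it exactly.
y = 1.3

def likelihood(theta):
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

draws = [random.gauss(y / 2, math.sqrt(0.5)) for _ in range(200_000)]

# Harmonic mean identity: 1 / m(y) = E_posterior[ 1 / L(theta) ],
# so the simplest estimator is the harmonic mean of the likelihoods.
harmonic_mean = len(draws) / sum(1.0 / likelihood(t) for t in draws)

exact = math.exp(-y ** 2 / 4) / math.sqrt(4 * math.pi)
print(f"harmonic mean estimate: {harmonic_mean:.4f}  exact: {exact:.4f}")
```

Even in this well-behaved example the reciprocal likelihoods are heavy-tailed, which is the instability the paper's stabilized estimators are designed to address.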
Improved likelihood inference for discrete data
J. R. Statist. Soc. B, 2006
Abstract

Cited by 18 (7 self)
Summary. Discrete data, particularly count and contingency table data, are typically analyzed using methods that are accurate to first order, such as normal approximations for maximum likelihood estimators. By contrast, continuous data can quite generally be analyzed using third-order procedures, with major improvements in accuracy and with intrinsic separation of information concerning parameter components. This paper extends these higher-order results to discrete data, yielding a methodology that is widely applicable and accurate to second order. The extension can be described in terms of an approximating exponential model expressed in terms of a score variable. The development is outlined and the flexibility of the approach is illustrated by examples.
Likelihood Inference in the Presence of Nuisance Parameters
Abstract

Cited by 13 (3 self)
We describe some recent approaches to likelihood-based inference in the presence of nuisance parameters. Our approach is based on plotting the likelihood function and the p-value function, using recently developed third-order approximations. Orthogonal parameters and adjustments to profile likelihood are also discussed. Connections to classical approaches of conditional and marginal inference are outlined.
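The profile likelihood and p-value function mentioned here can be sketched in the simplest nuisance-parameter setting: a normal sample with mean of interest and variance as nuisance. The code below (hypothetical data) maximizes the nuisance parameter out analytically and computes the first-order p-value function from the signed likelihood root; the paper's third-order refinements are not attempted.

```python
import math

data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]  # hypothetical sample
n = len(data)
xbar = sum(data) / n                    # MLE of the interest parameter mu

def profile_loglik(mu):
    # Nuisance sigma^2 maximized out in closed form:
    # sigma_hat^2(mu) = mean((x - mu)^2).
    s2 = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

def pvalue(mu0):
    # First-order p-value function p(mu0) = Phi(r), where r is the
    # signed square root of the profile likelihood ratio statistic.
    r2 = 2.0 * (profile_loglik(xbar) - profile_loglik(mu0))
    r = math.copysign(math.sqrt(max(r2, 0.0)), xbar - mu0)
    return 0.5 * math.erfc(-r / math.sqrt(2))  # Phi(r) via erfc

print(pvalue(xbar), pvalue(4.0), pvalue(5.3))
```

Plotting `pvalue` over a grid of `mu0` values gives the p-value function the abstract refers to; it passes through 0.5 at the maximum likelihood estimate.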
Default priors for Bayesian and frequentist inference
J. Royal Statist. Soc. B, 2010
Abstract

Cited by 12 (4 self)
We investigate the choice of default prior for use with likelihood to facilitate Bayesian and frequentist inference. Such a prior is a density or relative density that weights an observed likelihood function, leading to the elimination of parameters not of interest and accordingly providing a density-type assessment for a parameter of interest. For regular models with independent coordinates we develop a second-order prior for the full parameter based on an approximate location relation from near a parameter value to near the observed data point; this derives directly from the coordinate distribution functions and is closely linked to the original Bayes approach. We then develop a modified prior that is targeted on a component parameter of interest and avoids the marginalization paradoxes of Dawid, Stone and Zidek (1973); this uses some extensions of Welch-Peers theory that modify the Jeffreys prior and builds more generally on the approximate location property. A third type of prior is then developed that targets a vector interest parameter in the presence of a vector nuisance parameter and is based more directly on the original Jeffreys approach. Examples are given to clarify the computation of the priors and the flexibility of the approach.
Accurate Parametric Inference for Small Samples
2008
Abstract

Cited by 5 (2 self)
We outline how modern likelihood theory, which provides essentially exact inferences in a variety of parametric statistical problems, may routinely be applied in practice. Although the likelihood procedures are based on analytical asymptotic approximations, the focus of this paper is not on theory but on implementation and applications. Numerical illustrations are given for logistic regression, nonlinear models, and linear non-normal models, and we describe a sampling approach for the third of these classes. In the case of logistic regression, we argue that approximations are often more appropriate than ‘exact’ procedures, even when these exist.
Generalized inferential models
2011
Abstract

Cited by 3 (3 self)
This paper generalizes the authors' inferential model (IM) framework for prior-free, posterior probabilistic inference about unknown parameters. This generalization is accomplished by focusing on an association model determined by the sampling distribution of a function of the data and parameter. The advantage is that the new association model is generally easier to work with than that determined by the full sampling distribution of the data, and that the generalized IM retains the desirable frequency-calibration property of the basic IM. An important special case is when this function of data and parameters is the likelihood. Illustrative examples and further properties of this likelihood-based generalized IM are given, including extensions to handle marginal and conditional inference. The strengths of the proposed approach are showcased in two interesting marginal inference problems: the gamma mean model and a Gaussian variance components model.
Ancillary statistics: A review
Statistica Sinica, 2010
Abstract

Cited by 1 (0 self)
Ancillary statistics, one of R. A. Fisher’s most fundamental contributions to statistical inference, are statistics whose distributions do not depend on the model parameters. However, in conjunction with some other statistics, typically the maximum likelihood estimate, they provide valuable information about the parameters of interest. The present article is a review of some of the uses and limitations of ancillary statistics. Due to the vastness of the subject, the present account is by no means comprehensive. The topics selected reflect our interest, and clearly many important contributions to the subject are left out. We touch upon both exact and asymptotic inference based on ancillary statistics. The discussion includes Barndorff-Nielsen’s p* formula, the role of ancillary statistics in the elimination of nuisance parameters, and in finding optimal estimating functions. We also discuss some approximate ancillary statistics, Bayesian ancillarity and the ancillarity paradox.
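Fisher's definition can be illustrated concretely in a location model, where the sample range is a standard example of an ancillary statistic: shifting every observation by the location parameter leaves the range unchanged, so its distribution is parameter-free. A minimal sketch with arbitrary values:

```python
import random

def mean_range(theta, n=5, reps=10_000, seed=42):
    # Location model: x_i = theta + e_i with e_i ~ N(0, 1).
    # The range max(x) - min(x) equals max(e) - min(e): theta cancels,
    # so the range is ancillary (its distribution is free of theta).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [theta + rng.gauss(0.0, 1.0) for _ in range(n)]
        total += max(xs) - min(xs)
    return total / reps

# Same seed -> same errors e_i; theta drops out of the range exactly.
print(mean_range(0.0), mean_range(100.0))
```

The range alone tells us nothing about theta, yet conditioning on it (as in Fisher's recovery-of-information argument) sharpens inference based on the maximum likelihood estimate.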
Assessing a vector parameter
Abstract
The assessment of a vector parameter is central to statistical theory. The analysis of variance with tests and confidence regions for treatment effects is well established, and the related distribution theory is conveniently quite straightforward, particularly in the normal error case. In more general contexts, such as generalized linear models, the assessment is usually ...
Abstract
Higher-order approximations to p-values can be obtained from the log-likelihood function and a reparameterization that can be viewed as a canonical parameter in an exponential family approximation to the model. This approach clarifies the connection between Skovgaard (1996) and Fraser et al. (1999a), and shows that the Skovgaard approximation can be obtained directly using the mean log-likelihood function. Some key words: approximate pivot; Fraser information; Kullback–Leibler distance; p* approximation; tangent exponential model.
On the uniqueness of probability matching priors
Abstract
Probability matching priors are priors for which Bayesian and frequentist inference, in the form of posterior quantiles or confidence intervals, agree to some order of approximation. These priors are constructed by solving a first-order partial differential equation, which may be difficult to solve. However, Peers (1965) and Tibshirani (1989) showed that under parameter orthogonality a family of matching priors can be obtained. The present work shows that, when used in a third-order approximation to the posterior marginal density, the Peers-Tibshirani class of matching priors is essentially unique. Some key words: approximate Bayesian inference; Laplace approximation; orthogonal parameters; tail probability approximation.
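In the simplest case the matching is exact rather than approximate: with a flat prior for a normal mean with known variance (the Jeffreys prior here), the posterior quantile coincides with the usual confidence bound, so its frequentist coverage equals the posterior probability. A simulation sketch with hypothetical values:

```python
import math
import random

def upper_quantile_coverage(theta=2.0, sigma=1.0, n=4,
                            reps=50_000, seed=7):
    # Flat prior for a normal mean: posterior is N(xbar, sigma^2 / n),
    # so the posterior 0.95-quantile is xbar + z * sigma / sqrt(n).
    # We check how often this Bayesian bound covers the true theta
    # under repeated sampling.
    z = 1.6448536269514722  # Phi^{-1}(0.95)
    rng = random.Random(seed)
    se = sigma / math.sqrt(n)
    hits = sum(
        theta <= (theta + rng.gauss(0.0, se)) + z * se
        for _ in range(reps)
    )
    return hits / reps

print(upper_quantile_coverage())
```

The frequentist coverage matches the posterior probability 0.95 exactly in this model; for general models the matching only holds to an order of approximation, which is what the partial differential equation of Peers and Tibshirani characterizes.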