Results 1–10 of 42
Monte Carlo Statistical Methods, 1998
Abstract
Cited by 900 (23 self)
This paper is also the originator of the Markov Chain Monte Carlo methods developed in the following chapters. The potential of these two simultaneous innovations was discovered much later by statisticians (Hastings 1970; Geman and Geman 1984) than by physicists (see also Kirkpatrick et al. 1983).
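The random-walk Metropolis sampler at the core of these MCMC methods can be sketched in a few lines. This is a generic illustration, not code from the book; the target, step size, and function names are chosen for the example.

```python
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target.

    `log_density` is the log of an unnormalized target density and
    `step` is the proposal standard deviation; both are illustrative.
    """
    rng = random.Random(seed)
    x, logp = x0, log_density(x0)
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)              # symmetric proposal
        logq = log_density(y)
        # Accept with probability min(1, p(y)/p(x)).
        if rng.random() < math.exp(min(0.0, logq - logp)):
            x, logp = y, logq
        samples.append(x)
    return samples

# Example target: standard normal, log density up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
```

Because the proposal is symmetric, the Hastings correction ratio cancels and only the ratio of target densities enters the acceptance step.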
Ancillaries and Third Order Significance, 1995
Abstract
Cited by 38 (20 self)
In this paper we consider a general construction of an approximate ancillary statistic and the subsequent derivation of significance probabilities having third order accuracy for scalar or vector parameters. We use two types of recently constructed approximating models: tangent exponential models (Section 2) and tangent location models (Section 5). The tangent location model provides a preliminary reduction by conditioning, and the tangent exponential model then gives significance for scalar parameters. A general formula for calculating third order significance for a scalar parameter ...
Tail probabilities from observed likelihood
Biometrika, 1990
Abstract
Cited by 29 (15 self)
An exponential model not in standard form is fully characterized by an observed likelihood function and its first sample space derivative, up to one-one transformations of the observable variable. This property is used to modify the Lugannani and Rice (1980) tail probability approximation to make it parameterization invariant. Then, for general continuous models a version of the tangent exponential model is defined and used to derive a general tail probability approximation that uses only the observed likelihood and its first sample-space derivative. The analysis extends from density functions to distribution functions the tangent exponential model methods in Fraser (1988). A related tail probability approximation has been reported (Barndorff-Nielsen, 1988b) in the discussion to Reid (1988).
Understanding Bias in Nonlinear Panel Models: Some Recent Developments
Advances in Economics and Econometrics, Ninth World Congress, 2007
Abstract
Cited by 26 (6 self)
The purpose of this paper is to review recently developed bias-adjusted methods of estimation of nonlinear panel data models with fixed effects. For some models, like static linear and logit regressions, there exist fixed-T consistent estimators as n → ∞. Fixed-T consistency is a desirable property because for many panels T is much smaller than n.
Regression Analysis, Nonlinear or Nonnormal: Simple and Accurate p-values from Likelihood Analysis
J. Amer. Statist. Assoc., 1999
Abstract
Cited by 19 (12 self)
We develop simple approximations for the p-values to use with regression models having linear or nonlinear parameter structure and normal or nonnormal error distribution; computer iteration then gives confidence intervals. Both frequentist and Bayesian versions are given. The approximations are derived from recent developments in likelihood analysis and have third order accuracy. Even for very small and medium-sized samples the accuracy is typically high. The likelihood basis of the procedure seems to provide the grounds for this general accuracy. Examples are discussed and simulations record the distributional accuracy. KEYWORDS: Asymptotics; Likelihood analysis; Nonlinear; Nonnormal; p-values; Regression. 1. INTRODUCTION. Regression analysis is a central technique of statistical methodology, but a large part of the theory is organized around the special case with linear location and normal error. This case of course corresponds to mathematical simplicities and has a long histo...
Asymptotics and the theory of inference, 2003
Abstract
Cited by 16 (7 self)
Asymptotic analysis has always been very useful for deriving distributions in statistics in cases where the exact distribution is unavailable. More importantly, asymptotic analysis can also provide insight into the inference process itself, suggesting what information is available and how this information may be extracted. The development of likelihood inference over the past twenty-some years provides an illustration of the interplay between techniques of approximation and statistical theory.
Exact distribution of edge-preserving MAP estimators for linear signal models with Gaussian measurement noise
IEEE Transactions on Image Processing, 2000
Abstract
Cited by 11 (2 self)
We derive the exact statistical distribution of maximum a posteriori (MAP) estimators having edge-preserving non-Gaussian priors. Such estimators have been widely advocated for image restoration and reconstruction problems. Previous investigations of these image recovery methods have been primarily empirical; the distribution we derive enables theoretical analysis. The signal model is linear with Gaussian measurement noise. We assume that the energy function of the prior distribution is chosen to ensure a unimodal posterior distribution (for which convexity of the energy function is sufficient), and that the energy function satisfies a uniform Lipschitz regularity condition. The regularity conditions are sufficiently general to encompass popular priors such as the generalized Gaussian Markov random field prior and the Huber prior, even though those priors are not everywhere twice continuously differentiable. Index Terms—Bayesian methods, image reconstruction, image restoration.
Normed likelihood as saddlepoint approximation
J. Mult. Anal., 1988
Abstract
Cited by 10 (4 self)
Barndorff-Nielsen’s formula (normed likelihood with constant-information metric) has been proffered as an approximate conditional distribution for the maximum-likelihood estimate, based on likelihood functions. Asymptotic justifications are available and the formula coincides with the saddlepoint approximation in full exponential models. It is shown that the formula has wider application than is presently indicated, that in local analysis it corresponds to Laplace’s method of integration, and that it corresponds more generally to a saddlepoint approximation. Keywords and phrases: Barndorff-Nielsen formula, saddlepoint approximation, normed likelihood, constant information, conditional inference.
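The coincidence with the saddlepoint approximation in full exponential models can be checked numerically in a simple case. The sketch below is an illustration, not code from the paper: it approximates the density of the mean of n Exp(1) variables from the cumulant generating function K(t) = −log(1 − t) and compares it with the exact Gamma density; the ratio is constant in the argument and equals the Stirling error, so renormalizing the saddlepoint density makes it exact.

```python
import math

def saddlepoint_density_expmean(xbar, n):
    """Saddlepoint density for the mean of n Exp(1) variables.

    CGF of Exp(1): K(t) = -log(1 - t), so K'(t) = xbar gives the
    saddlepoint t = 1 - 1/xbar, with K''(t) = 1 / (1 - t)^2.
    """
    t = 1.0 - 1.0 / xbar
    K = -math.log(1.0 - t)
    K2 = 1.0 / (1.0 - t) ** 2
    return math.sqrt(n / (2 * math.pi * K2)) * math.exp(n * (K - t * xbar))

def exact_density_expmean(xbar, n):
    """Exact density: the mean of n Exp(1) variables is Gamma(n, rate n)."""
    return (n ** n) * xbar ** (n - 1) * math.exp(-n * xbar) / math.gamma(n)

# The ratio saddlepoint/exact does not depend on xbar, only on n (Stirling).
r1 = saddlepoint_density_expmean(0.8, 10) / exact_density_expmean(0.8, 10)
r2 = saddlepoint_density_expmean(1.5, 10) / exact_density_expmean(1.5, 10)
```

For n = 10 the constant ratio is about 1.008, consistent with the relative Stirling error of roughly 1/(12n).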
Improved confidence intervals for the difference between binomial proportions based on paired data
Statistics in Medicine 17, 1998
Abstract
Cited by 9 (0 self)
Existing methods for setting confidence intervals for the difference between binomial proportions based on paired data perform inadequately. The asymptotic method can produce limits outside the range of validity. The ‘exact’ conditional method can yield an interval which is effectively only one-sided. Both these methods also have poor coverage properties. Better methods are described, based on the profile likelihood obtained by conditionally maximizing the proportion of discordant pairs. A refinement (methods 5 and 6) which aligns the nominal 1 − α with an aggregate of tail areas produces appropriate coverage properties. A computationally simpler method based on the score interval for the single proportion also performs well (method 10). © 1998 John Wiley &amp; Sons, Ltd.
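The single-proportion score (Wilson) interval that method 10 builds on can be sketched directly. This shows only the single-proportion building block, not the paired-difference construction from the paper; the function name and default critical value are illustrative.

```python
import math

def wilson_interval(x, n, z=1.959964):
    """Wilson score interval for a single binomial proportion.

    Inverts the score test: all p0 with |phat - p0| <= z * sqrt(p0(1-p0)/n)
    form the interval, which stays inside [0, 1] by construction.
    """
    p = x / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Example: 8 successes in 20 trials.
lo, hi = wilson_interval(8, 20)
```

Unlike the simple Wald interval p ± z√(p(1−p)/n), this interval cannot extend outside [0, 1], which is the kind of range violation the abstract criticizes in the asymptotic method.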
The Estimating Function Bootstrap
Submitted; Fisher Lecture of the 1999 Joint Statistical Meeting, 1999
Abstract
Cited by 8 (0 self)
The authors propose a bootstrap procedure which estimates the distribution of an estimating function by resampling its terms using bootstrap techniques. Studentized versions of this so-called estimating function (EF) bootstrap yield methods which are invariant under reparametrizations. This approach often has a substantial advantage, both in computation and accuracy, over more traditional bootstrap methods, and it applies to a wide class of practical problems where the data are independent but not necessarily identically distributed. The methods allow for simultaneous estimation of vector parameters and their components. The authors use simulations to compare the EF bootstrap with competing methods in several examples, including the common means problem and nonlinear regression. They also prove asymptotic results showing that the studentized EF bootstrap yields higher order approximations for the whole vector parameter in a wide class of problems.
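For the simplest estimating function, Σ(x_i − θ) = 0 defining the mean, the idea of resampling the terms rather than refitting can be sketched as follows. This is an illustrative reduction to one toy case, not the authors' general procedure; all names are made up for the example.

```python
import random

def ef_bootstrap_mean(x, n_boot=2000, seed=1):
    """Estimating-function (EF) bootstrap sketch for the sample mean.

    The mean solves sum_i (x_i - theta) = 0.  Rather than re-solving the
    estimating equation on each resample, resample the fitted terms
    g_i = x_i - theta_hat and studentize their resampled average.
    """
    rng = random.Random(seed)
    n = len(x)
    theta_hat = sum(x) / n
    g = [xi - theta_hat for xi in x]              # estimating-function terms
    tstats = []
    for _ in range(n_boot):
        gs = [rng.choice(g) for _ in range(n)]    # resample the terms
        m = sum(gs) / n
        v = sum((gi - m) ** 2 for gi in gs) / n   # plug-in variance
        tstats.append(m / (v / n) ** 0.5)         # studentized EF statistic
    return theta_hat, tstats

rng0 = random.Random(7)
data = [rng0.gauss(0.0, 1.0) for _ in range(30)]
theta_hat, tstats = ef_bootstrap_mean(data)
# Quantiles of tstats calibrate a confidence interval around theta_hat.
```

Because only the terms g_i are resampled, each replicate costs a sum rather than a refit, which is the computational advantage the abstract points to.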