Results 1-10 of 17
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models
, 1993
"... Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GUM; this appears to be quite accurate. A reference set of proper priors ..."
Abstract

Cited by 149 (28 self)
 Add to MetaCart
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can also be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the ...
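The Laplace method for integrals that this abstract builds on can be sketched in a few lines. The following is a minimal illustration, not taken from the paper (the function name and the finite-difference Hessian are my own choices): it approximates log ∫ exp(log_joint(θ)) dθ by (d/2) log 2π − ½ log|H| + log_joint(mode), where H is the Hessian of −log_joint at the posterior mode.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(log_joint, theta0):
    """Laplace approximation to log ∫ exp(log_joint(θ)) dθ."""
    res = minimize(lambda t: -log_joint(t), theta0)   # find the posterior mode
    mode = res.x
    d = len(np.atleast_1d(theta0))
    eps = 1e-5
    # finite-difference Hessian of -log_joint at the mode
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei = eps * np.eye(d)[i]
            ej = eps * np.eye(d)[j]
            H[i, j] = (-log_joint(mode + ei + ej) + log_joint(mode + ei - ej)
                       + log_joint(mode - ei + ej) - log_joint(mode - ei - ej)) / (4 * eps**2)
    _, logdet = np.linalg.slogdet(H)
    return 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet + log_joint(mode)

# Standard normal kernel: ∫ exp(−θ²/2) dθ = √(2π), so Laplace is exact here.
print(laplace_log_evidence(lambda t: -0.5 * t[0] ** 2, np.array([1.0])))
```

For a Gaussian integrand the approximation is exact, which makes it a convenient sanity check before applying it to a real generalized linear model.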
Choice of Basis for Laplace Approximation
 Machine Learning
, 1998
"... Maximum a posterJori optimization of parameters and the Laplace approximation for the marginal likelihood are both basisdependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the prob ..."
Abstract

Cited by 35 (1 self)
 Add to MetaCart
(Show Context)
Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the `softmax' basis.
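The basis-dependence can be seen in a one-parameter case. The sketch below is my own illustration, not the note's experiment: for k successes in n Bernoulli trials with a uniform prior, the exact log evidence is log B(k+1, n−k+1), and the Laplace approximation is computed once in the probability basis and once in the logit (binary softmax) basis, where the change-of-variables Jacobian σ(1−σ) folds into the integrand.

```python
import numpy as np
from math import lgamma, log, pi

def laplace_1d(log_f, mode):
    """1-D Laplace approximation to log ∫ exp(log_f(x)) dx, mode supplied,
    curvature estimated by a central second difference."""
    eps = 1e-4
    h = -(log_f(mode + eps) - 2 * log_f(mode) + log_f(mode - eps)) / eps**2
    return 0.5 * log(2 * pi) - 0.5 * log(h) + log_f(mode)

k, n = 2, 10                                                # 2 successes in 10 trials
exact = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)   # log ∫ p^k (1-p)^(n-k) dp

# (a) probability basis: integrand p^k (1-p)^(n-k), mode at k/n
lp_prob = laplace_1d(lambda p: k * log(p) + (n - k) * log(1 - p), k / n)

# (b) logit basis p = σ(a): Jacobian σ(1-σ) gives integrand σ^(k+1) (1-σ)^(n-k+1),
#     whose mode is at σ = (k+1)/(n+2), i.e. a = log((k+1)/(n-k+1))
sig = lambda a: 1 / (1 + np.exp(-a))
lp_soft = laplace_1d(
    lambda a: (k + 1) * np.log(sig(a)) + (n - k + 1) * np.log(1 - sig(a)),
    log((k + 1) / (n - k + 1)))

print(lp_prob - exact, lp_soft - exact)
```

In this skewed example the logit-basis approximation lands noticeably closer to the exact answer, which is the qualitative point the note makes for the softmax basis.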
Generalized exponential distribution: Existing results and some recent developments
 Journal of Statistical Planning and Inference
, 2007
"... Mudholkar and Srivastava [25] introduced threeparameter exponentiated Weibull distribution. Twoparameter exponentiated exponential or generalized exponential distribution is a particular member of the exponentiated Weibull distribution. Generalized exponential distribution has a right skewed unim ..."
Abstract

Cited by 23 (2 self)
 Add to MetaCart
(Show Context)
Mudholkar and Srivastava [25] introduced the three-parameter exponentiated Weibull distribution. The two-parameter exponentiated exponential, or generalized exponential, distribution is a particular member of the exponentiated Weibull family. The generalized exponential distribution has a right-skewed unimodal density function and a monotone hazard function, similar to the density and hazard functions of the gamma and Weibull distributions. It is observed that it can be used quite effectively to analyze lifetime data in place of the gamma, Weibull and log-normal distributions. The genesis of this model, several of its properties, different estimation procedures and their properties, estimation of the stress-strength parameter, and the closeness of this distribution to some well-known distribution functions are discussed in this article.
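The distribution itself is simple to work with numerically. The sketch below (my own helper names, not the paper's code) uses the standard form F(x; α, λ) = (1 − e^(−λx))^α for x > 0, from which the density and an inverse-CDF sampler follow directly.

```python
import numpy as np

def ge_pdf(x, alpha, lam):
    """Density of the generalized (exponentiated) exponential distribution."""
    return alpha * lam * np.exp(-lam * x) * (1 - np.exp(-lam * x)) ** (alpha - 1)

def ge_cdf(x, alpha, lam):
    """CDF: F(x) = (1 - exp(-lam * x))^alpha for x > 0."""
    return (1 - np.exp(-lam * x)) ** alpha

def ge_rvs(alpha, lam, size, rng=None):
    """Inverse-CDF sampling: x = -log(1 - u^(1/alpha)) / lam."""
    u = (rng or np.random.default_rng()).uniform(size=size)
    return -np.log1p(-u ** (1 / alpha)) / lam
```

Setting α = 1 recovers the ordinary exponential distribution, which is a convenient check on the formulas.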
Laplace's method approximations for probabilistic inference in belief networks with continuous variables
 In de Mantaras
, 1994
"... Laplace's method, a family of asymptotic methods used to approximate integrals, is presented as a potential candidate for the tool box of techniques used for knowledge acquisition and probabilistic inference in belief networks with continuous variables. This technique approximates posterior mom ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
(Show Context)
Laplace's method, a family of asymptotic methods used to approximate integrals, is presented as a potential candidate for the toolbox of techniques used for knowledge acquisition and probabilistic inference in belief networks with continuous variables. This technique approximates posterior moments and marginal posterior distributions with reasonable accuracy [errors are O(n^-2) for posterior means] in many interesting cases. The method also seems promising for computing approximations to Bayes factors for use in the context of model selection, model uncertainty and mixtures of pdfs. The limitations, regularity conditions and computational difficulties for the implementation of Laplace's method are comparable to those associated with the methods of maximum likelihood and posterior mode analysis.
Approximation for Bayesian ability estimation
 Journal of Educational Statistics
, 1988
"... An approximation is proposed for the posterior mean and standard deviation of the ability parameter in an item response model. The procedure assumes that approximations to the posterior mean and covariance matrix of item parameters are available. It is based on the posterior mean of a Taylor series ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
An approximation is proposed for the posterior mean and standard deviation of the ability parameter in an item response model. The procedure assumes that approximations to the posterior mean and covariance matrix of the item parameters are available. It is based on the posterior mean of a Taylor series approximation to the posterior mean conditional on the item parameters. The method is illustrated for the two-parameter logistic model using data from an ACT math test with 39 items. A numerical comparison with the empirical Bayes method using n = 400 examinees shows that the point estimates are very similar, but the standard deviations under empirical Bayes are about 2% smaller than those under Bayes. Moreover, when the sample size is decreased to n = 100, the standard deviation under Bayes is shown to increase by 14% in some cases. The standard procedure for measuring ability in item response theory is to first estimate the item parameters for a set of test items and then estimate ability, treating the item parameter estimates as known true values. Such practice ignores a source of uncertainty and leads to errors. The point of this paper is to present a Bayesian method of dealing with this uncertainty and to suggest something about the nature and magnitude of the inferential errors. We consider estimating the ability of an individual based on the person's dichotomous responses to a set of test items whose characteristics are partially known through responses to the same set from other individuals belonging to the same population. Each item is characterized by an item response function which defines the probability of a correct response to the ...
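The "standard procedure" the abstract criticizes, estimating ability with the item parameters treated as known, can be sketched directly by quadrature. This is my own illustration of that baseline (not the paper's Taylor-series correction): the two-parameter logistic (2PL) model with a standard normal prior on ability θ, with the posterior mean and standard deviation computed on a grid.

```python
import numpy as np

def ability_posterior(y, a, b):
    """Posterior mean and sd of ability θ under the 2PL model with a N(0,1)
    prior, treating the item parameters (a, b) as known true values."""
    theta = np.linspace(-4, 4, 401)
    # 2PL item response functions: P(correct) = 1 / (1 + exp(-a*(θ - b)))
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
    like = np.prod(np.where(y, p, 1 - p), axis=1)     # likelihood of responses y
    post = like * np.exp(-0.5 * theta**2)             # times standard normal prior
    post /= post.sum()                                # normalize on the grid
    mean = (theta * post).sum()
    sd = np.sqrt(((theta - mean) ** 2 * post).sum())
    return mean, sd
```

The paper's contribution is then to propagate the uncertainty in (a, b) that this baseline ignores, which, as the abstract notes, widens the posterior standard deviations.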
Weighting for Unequal Probability of Selection in Multilevel Modeling
, 2004
"... In this note we construct an approximately unbiased multilevel pseudo maximum likelihood (MPML) estimation method for weighting in general multilevel models. We conduct a simulation study to determine the e®ect various factors have on the estimation method. The factors we included in this study are ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
In this note we construct an approximately unbiased multilevel pseudo maximum likelihood (MPML) estimation method for weighting in general multilevel models. We conduct a simulation study to determine the effect various factors have on the estimation method. The factors we included in this study are the scaling method, the size of clusters, the invariance of selection, and the informativeness of selection. The scaling method is an indicator of how the weights are normalized on each level. The invariance of the selection is an indicator of whether or not the same selection mechanism is applied across clusters. The informativeness of the selection is an indicator of how biased the selection is. We summarize our findings and recommend a multistage procedure based on the MPML method that can be used in practical applications.
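The "scaling method" factor refers to how within-cluster weights are normalized. As a rough illustration (my own function; the two scalings shown are the ones commonly discussed in this literature, not necessarily the paper's exact variants): one scaling makes the weights in a cluster sum to the cluster size, the other makes them sum to the effective sample size (Σw)²/Σw².

```python
import numpy as np

def scale_weights(w, method="size"):
    """Rescale the sampling weights w of one cluster.
    'size': scaled weights sum to the cluster size len(w).
    'ess' : scaled weights sum to the effective sample size (Σw)² / Σw²."""
    w = np.asarray(w, float)
    if method == "size":
        return w * len(w) / w.sum()
    if method == "ess":
        return w * w.sum() / (w ** 2).sum()
    raise ValueError(f"unknown scaling method: {method}")
```

Both scalings leave the relative weights within a cluster unchanged; only the total, and hence each cluster's influence on the pseudo likelihood, differs.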
Generalized Exponential Distribution: Bayesian Estimations
"... Recently twoparameter generalized exponential distribution has been introduced by the authors. In this paper we consider the Bayes estimators of the unknown parameters under the assumptions of gamma priors on both the shape and scale parameters. The Bayes estimators can not be obtained in explicit ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
Recently the two-parameter generalized exponential distribution has been introduced by the authors. In this paper we consider the Bayes estimators of the unknown parameters under the assumption of gamma priors on both the shape and scale parameters. The Bayes estimators cannot be obtained in explicit forms. Approximate Bayes estimators are computed using the idea of Lindley. We also propose a Gibbs sampling procedure to generate samples from the posterior distributions and in turn compute the Bayes estimators. The approximate Bayes estimators obtained under the assumption of non-informative priors are compared with the maximum likelihood estimators using Monte Carlo simulations. One real data set has been analyzed for illustrative purposes.
Approximate Bayesian . . . Weighted Likelihood Bootstrap
, 1991
"... We introduce the weighted likelihood bootstrap (WLB) as a simple way of approximately simulating from a posterior distribution. This is easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as the EM algorithm or iteratively reweighted least squares; i ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We introduce the weighted likelihood bootstrap (WLB) as a simple way of approximately simulating from a posterior distribution. It is easy to implement, requiring only an algorithm for calculating the maximum likelihood estimator, such as the EM algorithm or iteratively reweighted least squares; it does not necessarily require actual calculation of the likelihood itself. The method is exact up to an effective prior which is generally unknown but can be identified exactly for unconstrained discrete-data models and approximately for other models. Accuracy of the WLB relies on the chosen distribution of weights. In the generic scheme, the WLB is at least first-order correct under quite general conditions. We have also been able to prove higher-order correctness in some classes of models. The method, which generalizes Rubin's Bayesian bootstrap, provides approximate posterior distributions for prediction, calibration, dependent-data and partial likelihood problems, as well as more standard models. The calculation of approximate Bayes factors for model comparison is also considered. We note that, given a sample simulated from the posterior distribution, the required marginal likelihood may be simulation-consistently estimated by the harmonic mean of the associated likelihood values; a modification of this estimator that avoids instability is also noted. An alternative, prediction-based, estimator of the marginal likelihood using the WLB is also described. These methods provide simple ways of calculating approximate Bayes factors and posterior model probabilities for a very wide class of models.
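The WLB scheme is easy to sketch for a model where the weighted MLE is available in closed form. In the toy example below (my own illustration, not the paper's code), each draw assigns the observations uniform Dirichlet weights and records the weighted MLE of a normal mean, i.e. the weighted average; with uniform Dirichlet weights and the mean as the functional, this coincides with Rubin's Bayesian bootstrap, the special case the abstract says the WLB generalizes.

```python
import numpy as np

def wlb_samples(x, n_draws=2000, seed=0):
    """Weighted likelihood bootstrap draws for a normal mean: each draw
    re-weights the data with uniform Dirichlet weights and records the
    weighted MLE (here the weighted average, available in closed form)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    out = np.empty(n_draws)
    for i in range(n_draws):
        w = rng.dirichlet(np.ones(len(x)))   # random weights summing to 1
        out[i] = np.sum(w * x)               # weighted MLE of the mean
    return out
```

For models without a closed-form weighted MLE, the weighted maximization step would instead call an iterative fitter (EM, iteratively reweighted least squares) on the re-weighted data, which is the point of the method: no likelihood evaluations are needed beyond what the fitter itself uses.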
CLASSICAL STATIC SYSTEM RELIABILITY AND ADJUSTED STATIC SYSTEM RELIABILITY WITH PRIOR & POSTERIOR VARIATIONS
"... This paper considers an important concept which suggested to take stock of the over estimation in reliability characteristics or under estimation of hazard rate. Using this concept, the study considers the analysis of the reliability characteristics of an exponential lifetime model when prior variat ..."
Abstract
 Add to MetaCart
(Show Context)
This paper considers an important concept suggested to account for the overestimation of reliability characteristics or the underestimation of the hazard rate. Using this concept, the study considers the analysis of the reliability characteristics of an exponential lifetime model when prior variations in its parameters are suspected. Key Words: robustness, adjustment factor, updated and predictive basic distributions
GENERAL MULTILEVEL MODELING WITH SAMPLING WEIGHTS
"... Key Words: multilevel pseudo maximum likelihood; sampling weights; multilevel models; multilevel mixture models; weights scaling; informative selection; In this article we study the approximately unbiased multilevel pseudo maximum likelihood (MPML) estimation method for general multilevel modeling ..."
Abstract
 Add to MetaCart
(Show Context)
Key Words: multilevel pseudo maximum likelihood; sampling weights; multilevel models; multilevel mixture models; weights scaling; informative selection. In this article we study the approximately unbiased multilevel pseudo maximum likelihood (MPML) estimation method for general multilevel modeling with sampling weights. We conduct a simulation study to determine the effect various factors have on the estimation method. The factors we included in this study are the scaling method, the size of clusters, the invariance of selection, the informativeness of selection, the intraclass correlation, and the variability of the standardized weights. The scaling method is an indicator of how the weights are normalized on each level. The invariance of the selection is an indicator of whether or not the same selection mechanism is applied across clusters. The informativeness of the selection is an indicator of how biased the selection is. We summarize our findings and recommend a multistage procedure based on the MPML method that can be used in practical applications.