Results 1–10 of 448
Theory Refinement on Bayesian Networks
1991
"... Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced ..."
Abstract

Cited by 184 (5 self)
Theory refinement is the task of updating a domain theory in the light of new cases, either automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are incremental variants of batch learning algorithms from the literature, so they work well in both batch and incremental mode.
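As a concrete illustration of the incremental-refinement loop described in the abstract, here is a minimal Python sketch, assuming a fixed network structure and Dirichlet pseudo-counts standing in for the expert's partial theory. The class name and the burglary/alarm example are hypothetical, not taken from the paper.

    from collections import defaultdict

    class CPTRefiner:
        """One conditional probability table, refined case by case."""
        def __init__(self, node_states, prior_count=1.0):
            # The expert's "partial theory" enters as Dirichlet pseudo-counts.
            self.counts = defaultdict(
                lambda: {s: prior_count for s in node_states})

        def observe(self, parent_state, node_state):
            # Incorporate one new case: works identically case by case
            # (incremental mode) or in a loop over a stored batch.
            self.counts[parent_state][node_state] += 1.0

        def prob(self, parent_state, node_state):
            # Posterior-mean estimate of P(node_state | parent_state).
            row = self.counts[parent_state]
            return row[node_state] / sum(row.values())

    # Usage: refine P(Alarm | Burglary) from three cases.
    cpt = CPTRefiner(node_states=(True, False))
    for burglary, alarm in [(True, True), (True, True), (False, False)]:
        cpt.observe(burglary, alarm)
    print(cpt.prob(True, True))   # (1 + 2) / (2 + 2) = 0.75

Because the posterior depends only on accumulated counts, batch and incremental use give identical answers, which is the property the abstract emphasizes.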
Discrete Choice with Social Interactions
2000
"... This paper provides an analysis of aggregate behavioral outcomes when individual utility exhibits social interaction effects. We study generalized logistic models of individual choice which incorporate terms reflecting the desire of individuals to conform to the behavior of others in an environment ..."
Abstract

Cited by 184 (10 self)
This paper provides an analysis of aggregate behavioral outcomes when individual utility exhibits social interaction effects. We study generalized logistic models of individual choice which incorporate terms reflecting the desire of individuals to conform to the behavior of others in an environment of noncooperative decision-making. Laws of large numbers are generated in such environments. Multiplicity of ...
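The conformity mechanism and the resulting multiplicity can be illustrated with the mean-field self-consistency condition for binary choice, m = tanh(beta*(h + J*m)), where m is the average choice, h a private utility difference, and J the conformity weight. The sketch below is a hypothetical illustration of that fixed point, with parameter values chosen only to contrast one equilibrium with three.

    import math

    def equilibria(beta=1.0, h=0.0, J=2.0, starts=(-0.9, 0.0, 0.9), iters=500):
        """Fixed points of m = tanh(beta*(h + J*m)) reached from several starts."""
        found = set()
        for m in starts:
            for _ in range(iters):
                m = math.tanh(beta * (h + J * m))   # best-response iteration
            found.add(round(m, 6))
        return sorted(found)

    # With beta*J > 1 and h = 0, conformity sustains three self-consistent
    # average behaviors: two polarized ones near +/-0.96 and the neutral
    # one at 0.
    print(equilibria(J=2.0))
    # With beta*J < 1 the equilibrium is unique.
    print(equilibria(J=0.4))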
Analyzing Incomplete Political Science Data: An Alternative Algorithm for Multiple Imputation
American Political Science Review, 2000
"... We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is a superior approach to the problem of missing data scattered through ..."
Abstract

Cited by 141 (40 self)
We propose a remedy for the discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. Methodologists and statisticians agree that "multiple imputation" is an approach to missing data scattered through one's explanatory and dependent variables that is superior to the methods currently used in applied data analysis. The discrepancy exists because the computational algorithms used to apply the best multiple imputation models have been slow, difficult to implement, impossible to run with existing commercial statistical packages, and demanding of considerable expertise. In this paper, we adapt an existing algorithm and use it to implement a general-purpose multiple imputation model for missing data. This algorithm is considerably faster and easier to use than the leading method recommended in the statistics literature. We also quantify the risks of current missing data practices, ...
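To make the imputation idea concrete, here is a toy Python sketch of multiple imputation under a multivariate-normal model: each missing entry is drawn from its conditional normal given the observed entries, repeated m times. This is a stand-in illustration, not the authors' algorithm; in particular, the complete-case moment step below crudely replaces the EM-style fit a real implementation would use.

    import numpy as np

    def impute_once(X, mu, S, rng):
        """One completed data set: draw missing entries from their conditional normal."""
        Xc = X.copy()
        for i in range(X.shape[0]):
            miss = np.isnan(X[i])
            if not miss.any():
                continue
            obs = ~miss
            S_oo = S[np.ix_(obs, obs)]
            S_mo = S[np.ix_(miss, obs)]
            A = S_mo @ np.linalg.inv(S_oo)
            cond_mu = mu[miss] + A @ (X[i, obs] - mu[obs])
            cond_S = S[np.ix_(miss, miss)] - A @ S_mo.T
            Xc[i, miss] = rng.multivariate_normal(cond_mu, cond_S)
        return Xc

    def multiple_impute(X, m=5, seed=0):
        rng = np.random.default_rng(seed)
        # Complete-case moments crudely stand in for the EM fit used in practice.
        cc = X[~np.isnan(X).any(axis=1)]
        mu, S = cc.mean(axis=0), np.cov(cc, rowvar=False)
        return [impute_once(X, mu, S, rng) for _ in range(m)]

Each completed data set is then analyzed as usual, and the m sets of results are combined with Rubin's rules (average the point estimates; combine within- and between-imputation variances).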
Analysis of multivariate probit models
Biometrika, 1998
"... This paper provides a practical simulationbased Bayesian and nonBayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods and maximum likelihood estimates are obtained by a Monte Carlo version of the ..."
Abstract

Cited by 100 (6 self)
This paper provides a practical simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Monte Carlo version of the EM algorithm. A practical approach for the computation of Bayes factors from the simulation output is also developed. The methods are applied to a dataset with a bivariate binary response, to a four-year longitudinal dataset from the Six Cities study of the health effects of air pollution, and to a seven-variate binary response dataset on the labour supply of married women from the Panel Study of Income Dynamics.
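The data-augmentation step that drives such samplers can be sketched as follows: given current parameters, each latent utility is redrawn from a univariate truncated normal consistent with the observed binary outcome. This is a hypothetical fragment of a Gibbs sampler, not the paper's full algorithm, which also updates the regression coefficients and the correlation matrix.

    import numpy as np
    from scipy.stats import truncnorm

    def redraw_latents(z, y, mu, R, rng):
        """One sweep: z, mu, y are (n, J); y is binary; R is the (J, J) correlation."""
        n, J = z.shape
        for j in range(J):
            idx = [k for k in range(J) if k != j]
            r = R[j, idx]
            w = r @ np.linalg.inv(R[np.ix_(idx, idx)])
            cond_sd = np.sqrt(R[j, j] - w @ r)
            for i in range(n):
                cond_mu = mu[i, j] + w @ (z[i, idx] - mu[i, idx])
                # Truncate to (0, inf) if y=1 and (-inf, 0) if y=0, so the
                # latent utility always agrees with the observed outcome.
                lo, hi = (0.0, np.inf) if y[i, j] == 1 else (-np.inf, 0.0)
                a, b = (lo - cond_mu) / cond_sd, (hi - cond_mu) / cond_sd
                z[i, j] = truncnorm.rvs(a, b, loc=cond_mu, scale=cond_sd,
                                        random_state=rng)
        return z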
Benchmark Priors for Bayesian Model Averaging
Forthcoming in the Journal of Econometrics, 2001
"... In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, “diffuse” priors on modelspecific parameters can lead to quite unexpected consequ ..."
Abstract

Cited by 94 (5 self)
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, “diffuse” priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim to provide an “automatic” or “benchmark” prior structure that can be used in such cases. We focus on the Normal linear regression model with uncertainty in the choice of regressors. We propose a partly noninformative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite-sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (1995), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we also compare the predictive performance of different priors. A classic example concerning the economics of crime is also provided and contrasted with results in the literature. The main findings of the paper lead us to propose a “benchmark” prior specification in a linear regression context with model uncertainty.
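A compact sketch of the machinery: under a Zellner-type g-prior, each model's Bayes factor against the null (intercept-only) model has the standard closed form BF_j = (1+g)^((n-1-k_j)/2) * (1 + g*(1 - R_j^2))^(-(n-1)/2), so posterior model probabilities over a small model space can be computed by enumeration. The code below is illustrative only; the choice g = max(n, K^2) is benchmark-style in the spirit of the paper, and the paper itself explores large model spaces with MC3 rather than brute force.

    from itertools import combinations
    import numpy as np

    def log_bf(y, X, g):
        """Log Bayes factor of the model with regressors X vs the null model."""
        n, k = X.shape
        yc = y - y.mean()
        Xc = X - X.mean(axis=0)
        beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
        r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
        return (0.5 * (n - 1 - k) * np.log1p(g)
                - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2)))

    def model_probs(y, X):
        n, p = X.shape
        g = max(n, p * p)          # benchmark-style choice of g
        models = [()]              # start with the null model
        logs = [0.0]               # its log Bayes factor against itself is 0
        for size in range(1, p + 1):
            for cols in combinations(range(p), size):
                models.append(cols)
                logs.append(log_bf(y, X[:, list(cols)], g))
        w = np.exp(np.array(logs) - max(logs))
        return list(zip(models, w / w.sum()))   # posterior model probabilities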
Large Sample Sieve Estimation of Semi-Nonparametric Models
Handbook of Econometrics, 2007
"... Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; seminonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method o ..."
Abstract

Cited by 92 (17 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications, such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, and root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
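As an illustration of the method, polynomial series least squares with a slowly growing number of terms is itself a sieve estimator. The sketch below assumes a scalar regressor and the common illustrative rate K ~ n^(1/3); neither choice is a recommendation from the chapter.

    import numpy as np

    def sieve_regression(x, y):
        n = len(y)
        K = max(2, int(np.ceil(n ** (1 / 3))))    # sieve dimension grows with n
        B = np.vander(x, K + 1, increasing=True)  # polynomial basis 1, x, ..., x^K
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        def fhat(x_new):
            return np.vander(np.atleast_1d(x_new), K + 1, increasing=True) @ coef
        return fhat

    # As n grows the sieve expands, so the estimator can approximate any
    # sufficiently smooth regression function.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)
    print(sieve_regression(x, y)(0.5))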
An Analysis of Sample Attrition in Panel Data: The Michigan Panel Study of Income Dynamics
Journal of Human Resources, 1998
"... experienced approximately 50 percent sample loss from cumulative attrition from its initial 1968 membership. We study the effect of this attrition on the unconditional distributions of several socioeconomic variables and on the estimates of several sets of regression coefficients. We provide a stati ..."
Abstract

Cited by 89 (7 self)
The Michigan Panel Study of Income Dynamics experienced approximately 50 percent sample loss from cumulative attrition from its initial 1968 membership. We study the effect of this attrition on the unconditional distributions of several socioeconomic variables and on the estimates of several sets of regression coefficients. We provide a statistical framework for conducting tests for attrition bias that draws a sharp distinction between selection on unobservables and on observables, and that shows that weighted least squares can generate consistent parameter estimates when selection is based on observables, even when they are endogenous. Our empirical analysis shows that attrition is highly selective and is concentrated among lower socioeconomic status individuals. We also show that attrition is concentrated among those with more unstable earnings, marriage, and migration histories. Nevertheless, we find that these variables explain very little of the attrition in the sample, and that the selection that occurs is moderated by regression-to-the-mean effects.
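The observables-based correction can be sketched in a few lines: estimate retention probabilities from baseline observables, then run weighted least squares on the retained sample with inverse-probability weights. The sketch assumes statsmodels and a logit retention model; both are illustrative choices, not the authors' exact specification.

    import statsmodels.api as sm

    def attrition_weighted_ols(y, X, Z, retained):
        """y, X: outcome and regressors for the full initial sample.
        Z: baseline observables predicting retention.
        retained: 1 if the unit is still in the panel, else 0."""
        # Step 1: retention probabilities from baseline observables (logit).
        p = sm.Logit(retained, sm.add_constant(Z)).fit(disp=0).predict()
        keep = retained.astype(bool)
        # Step 2: WLS on retained units, weighted by 1/Pr(retention),
        # consistent when selection depends only on the observables in Z.
        return sm.WLS(y[keep], sm.add_constant(X[keep]),
                      weights=1.0 / p[keep]).fit()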
The bootstrap
Handbook of Econometrics, 2001
"... The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an a ..."
Abstract

Cited by 75 (1 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the approximation obtained from first-order asymptotic theory.
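A minimal sketch of the resampling idea, assuming a scalar statistic and the simple percentile interval (one of several bootstrap confidence intervals discussed in this literature):

    import numpy as np

    def bootstrap_ci(data, stat=np.mean, B=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for stat(data)."""
        rng = np.random.default_rng(seed)
        n = len(data)
        reps = np.array([stat(rng.choice(data, size=n, replace=True))
                         for _ in range(B)])   # resample, recompute, repeat
        return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

    x = np.random.default_rng(1).exponential(size=50)
    print(bootstrap_ci(x))   # interval for the mean of the sampled population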
Asymptotic theory for the GARCH(1,1) quasi-maximum likelihood estimator
Econometric Theory 10, 1994
"... This paper investigates the sampling behavior of the quasimaximum likelihood estimator of the Gaussian GARCH(1, 1) model. The rescaled variable (the ratio of the disturbance to the conditional standard deviation) is not required to be Gaussian nor independent over time, in contrast to the current l ..."
Abstract

Cited by 74 (0 self)
This paper investigates the sampling behavior of the quasi-maximum likelihood estimator of the Gaussian GARCH(1,1) model. The rescaled variable (the ratio of the disturbance to the conditional standard deviation) is not required to be Gaussian or independent over time, in contrast to the current literature. The GARCH process may be integrated (α + β = 1), or even mildly explosive (α + β > 1). A bounded conditional fourth moment of the rescaled variable is sufficient for the results. Consistent estimation and asymptotic normality are demonstrated, as well as consistent estimation of the asymptotic covariance matrix.
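The estimator under study can be sketched directly: build the conditional variance by the GARCH(1,1) recursion h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1} and maximize the Gaussian quasi-likelihood, whether or not the innovations are truly Gaussian. Starting values and the initialization of h below are illustrative choices, not from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def garch11_negloglik(params, eps):
        omega, alpha, beta = params
        h = np.empty_like(eps)
        h[0] = eps.var()                        # illustrative initialization
        for t in range(1, len(eps)):
            h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        # Gaussian quasi-log-likelihood (constants dropped), negated to minimize.
        return 0.5 * np.sum(np.log(h) + eps ** 2 / h)

    def fit_garch11(eps):
        x0 = np.array([0.1 * eps.var(), 0.1, 0.8])       # illustrative starts
        bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0)]  # alpha+beta may exceed 1
        res = minimize(garch11_negloglik, x0, args=(eps,),
                       method="L-BFGS-B", bounds=bounds)
        return res.x   # (omega_hat, alpha_hat, beta_hat)

Note that the bounds deliberately allow alpha + beta >= 1, consistent with the integrated and mildly explosive cases the paper covers.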
A Generalized Spatial Two-Stage Least Squares Procedure for Estimating a Spatial Autoregressive Model with Autoregressive Disturbances
Journal of Real Estate Finance and Economics, 1998
"... Crosssectional spatial models frequently contain a spatial lag of the dependent variable as a regressor or a disturbance term that is spatially autoregressive. In this article we describe a computationally simple procedure for estimating crosssectional models that contain both of these characteris ..."
Abstract

Cited by 73 (7 self)
Cross-sectional spatial models frequently contain a spatial lag of the dependent variable as a regressor, or a disturbance term that is spatially autoregressive. In this article we describe a computationally simple procedure for estimating cross-sectional models that contain both of these characteristics. We also give formal large-sample results. Key words: spatial autoregressive model, two-stage least squares, generalized moments estimation.
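The two-stage least squares part of such a procedure can be sketched as follows, instrumenting the spatial lag Wy with low-order spatial transforms of the exogenous regressors, H = [X, WX, W²X]. The generalized-moments step for the autoregressive disturbance is omitted, so this is an illustrative fragment rather than the authors' full GS2SLS estimator.

    import numpy as np

    def spatial_2sls(y, X, W):
        Z = np.column_stack([W @ y, X])                # Wy is endogenous
        H = np.column_stack([X, W @ X, W @ W @ X])     # instrument set
        P = H @ np.linalg.pinv(H.T @ H) @ H.T          # projection onto instruments
        Zh = P @ Z                                     # first-stage fitted values
        delta = np.linalg.solve(Zh.T @ Z, Zh.T @ y)
        return delta   # first entry: rho_hat on Wy; rest: coefficients on X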