What to do about missing values in time-series cross-section data
, 2009
Abstract

Cited by 14 (4 self)
Applications of modern methods for analyzing data with missing values, based primarily on multiple imputation, have in the last half-decade become common in American politics and political behavior. Scholars in this subset of political science have thus increasingly avoided the biases and inefficiencies caused by ad hoc methods like listwise deletion and best guess imputation. However, researchers in much of comparative politics and international relations, and others with similar data, have been unable to do the same because the best available imputation methods work poorly with the time-series cross-section data structures common in these fields. We attempt to rectify this situation with three related developments. First, we build a multiple imputation model that allows smooth time trends, shifts across cross-sectional units, and correlations over time and space, resulting in far more accurate imputations. Second, we enable analysts to incorporate knowledge from area studies experts via priors on individual missing cell values, rather than on difficult-to-interpret model parameters. Third, because these tasks could not be accomplished within existing imputation algorithms, in that they cannot handle as many variables as needed even in the simpler cross-sectional data for which they were designed, we also develop a new algorithm that substantially expands the range of computationally feasible data types and sizes for which multiple imputation can be used. These developments also make it possible to implement the methods introduced here in freely available open source software that is considerably more reliable than existing algorithms.
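The impute-then-pool logic behind multiple imputation can be sketched in a few lines. This is a hedged toy illustration, not the authors' model: a single variable is imputed m times from a complete-case regression with added noise, and the m analyses are pooled with Rubin's rules. All data and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rubin_combine(estimates, variances):
    """Pool m completed-data analyses with Rubin's rules."""
    m = len(estimates)
    qbar = float(np.mean(estimates))        # pooled point estimate
    ubar = float(np.mean(variances))        # within-imputation variance
    b = float(np.var(estimates, ddof=1))    # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b     # total variance

# hypothetical data: x fully observed, y missing completely at random
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
missing = rng.random(n) < 0.3
y_obs = np.where(missing, np.nan, y)

# impute m = 20 times from a complete-case regression of y on x, with noise
seen = ~np.isnan(y_obs)
coef = np.polyfit(x[seen], y_obs[seen], 1)            # slope, intercept
sd = np.std(y_obs[seen] - np.polyval(coef, x[seen]))  # residual spread

ests, vrs = [], []
for _ in range(20):
    y_imp = np.where(missing,
                     np.polyval(coef, x) + rng.normal(0.0, sd, n),
                     y_obs)
    ests.append(y_imp.mean())             # analysis of interest: mean of y
    vrs.append(y_imp.var(ddof=1) / n)

qbar, tvar = rubin_combine(ests, vrs)
```

The between-imputation term `(1 + 1/m) * b` is what distinguishes this from single best-guess imputation: it propagates the uncertainty about the missing cells into the final standard error.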
Maximum likelihood estimation via the ECM algorithm: Computing the asymptotic variance
, 1994
Abstract

Cited by 9 (2 self)
Abstract: This paper provides detailed theory, algorithms, and illustrations for computing asymptotic variance-covariance matrices for maximum likelihood estimates using the ECM algorithm (Meng and Rubin (1993)). This Supplemented ECM (SECM) algorithm is developed as an extension of the Supplemented EM (SEM) algorithm (Meng and Rubin (1991a)). Explicit examples are given, including one that demonstrates that SECM, like SEM, has a powerful internal error detecting system for the implementation of the parent ECM or of SECM itself.
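The supplemented-EM idea — recover the observed information by multiplying the complete-data information by one minus the EM map's rate of convergence — can be illustrated on a deliberately simple problem: a normal mean with known unit variance and ignorable missingness. This toy sketch is an assumption-laden stand-in for the general SECM machinery, not the paper's algorithm; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_mis = 100, 40
y = rng.normal(loc=3.0, size=n)
obs = y[: n - n_mis]                 # pretend the last 40 values are missing

# EM for the mean of N(mu, 1) under ignorable missingness:
# the E-step fills each missing value with the current mu,
# the M-step averages the completed data.
def em_step(mu):
    return (obs.sum() + n_mis * mu) / n

mu = 0.0
for _ in range(200):
    mu = em_step(mu)

# Supplemented-EM idea: the slope of the EM map at the MLE estimates the
# fraction of missing information DM; the observed information is then
# (complete-data information) * (1 - DM), inverted for the asymptotic variance.
mle = obs.mean()
eps = 1e-4
dm = (em_step(mle + eps) - em_step(mle)) / eps   # here exactly n_mis / n
complete_info = float(n)             # unit-variance normal: info = n
observed_info = complete_info * (1.0 - dm)       # ~ number of observed cases
asy_var = 1.0 / observed_info                    # ~ 1 / 60
```

In this linear toy case the rate matrix collapses to the scalar fraction of missing data, so the supplemented answer reduces to the familiar 1/n_obs; the value of SEM/SECM is that the same numerical-differentiation recipe works when no closed form exists.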
MAXIMUM LIKELIHOOD ESTIMATION FOR SOCIAL NETWORK DYNAMICS
 SUBMITTED TO THE ANNALS OF APPLIED STATISTICS
, 2009
Abstract

Cited by 6 (3 self)
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The model for tie changes is parametric and designed for applications to social network analysis, where the network dynamics can be interpreted as being generated by choices made by the social actors represented by the nodes of the graph. An algorithm for calculating the Maximum Likelihood estimator is presented, based on data augmentation and stochastic approximation. An application to an evolving friendship network is given and a small simulation study is presented which suggests that for small data sets the Maximum Likelihood estimator is more efficient than the earlier proposed Method of Moments estimator.
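A minimal simulation of this kind of process — exponential waiting times, one actor changing one tie per opportunity via a multinomial choice over an objective function — might look as follows. The single `beta_density` effect and all parameter values are hypothetical and far simpler than the actor-oriented models the paper estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5                       # actors
rate = 1.0                  # rate of change opportunities per actor
beta_density = -1.5         # hypothetical objective-function weight on out-degree
x0 = np.zeros((n, n), dtype=int)   # adjacency matrix, no self-ties

def simulate(x, t_end):
    """Run the continuous-time tie-change process until time t_end."""
    x = x.copy()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / (n * rate))   # next change opportunity
        if t > t_end:
            return x
        i = rng.integers(n)                      # actor who may change a tie
        # score the graph that results from toggling each possible tie (i, j)
        scores = []
        for j in range(n):
            if j == i:
                scores.append(-np.inf)           # no self-ties
                continue
            x[i, j] ^= 1                         # tentatively toggle
            scores.append(beta_density * x[i].sum())
            x[i, j] ^= 1                         # restore
        s = np.array(scores)
        p = np.exp(s - s.max())
        p /= p.sum()
        j = rng.choice(n, p=p)
        x[i, j] ^= 1    # one tie variable changes, conditional on current graph

x_end = simulate(x0, t_end=5.0)
```

Only one tie variable changes per opportunity, which is exactly the conditional-independence assumption in the abstract; the ML machinery in the paper then augments the observed panel waves with these unobserved intermediate changes.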
Median Regression and the Missing Information Principle
 Journal of Nonparametric Statistics
, 2001
Abstract

Cited by 5 (0 self)
Median regression analysis has robustness properties which make it an attractive alternative to regression based on the mean. In this paper, the missing information principle is applied to a right-censored version of the median regression model, leading to a new estimator for the regression parameters. Our approach adapts Efron's derivation of self-consistency for the Kaplan-Meier estimator to the context of median regression; we replace the least absolute deviation estimating function by its (estimated) conditional expectation given the data. The new estimator is shown to be asymptotically equivalent to an ad hoc estimator introduced by Ying, Jung and Wei, and to have improved moderate-sample performance in simulation studies. Key words: least absolute deviation, martingale, heteroscedasticity, counting processes, kernel conditional Kaplan-Meier estimator, Cox proportional hazards.
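The uncensored building block here is least absolute deviation (LAD) regression. As a hedged sketch — not the paper's censored estimator — LAD can be computed by iteratively reweighted least squares, with hypothetical heavy-tailed data to show the robustness the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + 0.1 * rng.standard_cauchy(n)   # heavy-tailed errors

# Least absolute deviation fit via iteratively reweighted least squares:
# weights 1/|residual| turn the L1 objective into successive L2 problems.
beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(y - X @ beta), 1e-8)
    Xw = X * w[:, None]                            # diag(w) @ X
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)     # weighted normal equations
```

The `1e-8` floor guards against division by a near-zero residual. The censored version in the paper replaces the LAD estimating function itself with its conditional expectation given the censored data, which this sketch does not attempt.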
Transposable Regularized Covariance Models with an Application to Missing Data Imputation
, 2008
Abstract

Cited by 4 (0 self)
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. We extend regularized covariance models, which place an additive penalty on the inverse covariance matrix, to this distribution, by placing separate penalties on the covariances of the rows and columns. These so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and nonsingular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. Exploiting the structure of our transposable models, we present techniques enabling use of our models with high-dimensional data and give a computationally feasible one-step approximation for imputation. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
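For the ordinary multivariate (non-transposable) case, the EM-type imputation loop has a compact form. Here is a toy sketch for a bivariate normal with one partly missing column, using hypothetical simulated data and no regularization penalty; the transposable, penalized models in the paper generalize this considerably.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
miss = rng.random(n) < 0.25
y = z.copy()
y[miss, 1] = np.nan                     # second column partly missing
comp = ~miss                            # fully observed rows

# EM for a bivariate normal with missing entries in column 1:
# E-step imputes each missing y1 by its conditional mean given y0;
# M-step re-estimates the mean and covariance from the completed moments,
# adding back the conditional variance for the imputed cells.
mu = np.array([y[:, 0].mean(), y[comp, 1].mean()])
S = np.cov(y[comp].T)                   # initialize from complete cases
for _ in range(100):
    b = S[0, 1] / S[0, 0]               # regression slope of y1 on y0
    cond_var = S[1, 1] - b * S[0, 1]    # residual variance of y1 | y0
    y1 = y[:, 1].copy()
    y1[miss] = mu[1] + b * (y[miss, 0] - mu[0])
    mu = np.array([y[:, 0].mean(), y1.mean()])
    d0, d1 = y[:, 0] - mu[0], y1 - mu[1]
    S = np.array([[(d0 * d0).mean(), (d0 * d1).mean()],
                  [(d0 * d1).mean(), (d1 * d1).mean() + miss.mean() * cond_var]])
```

The `miss.mean() * cond_var` correction is what makes this EM rather than naive mean imputation: plugging in conditional means alone would understate the variance of the partly missing column.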
An Evaluation of Some Approximate F Statistics and Their Small Sample Distributions for the Mixed Model with . . .
, 1987
Abstract

Cited by 3 (1 self)
The purpose of this work was to extend results from the General Linear Univariate Model and the General Linear Multivariate Model to special cases of the mixed model with linear covariance structure. These extensions were then used to motivate approximate F statistics for the mixed model. Three approximate F statistics were proposed: one was based on the canonical form of the mixed model (F_REML) and two were based on weighted least squares (F_WLS, F ...
A Practical Statistical Model for Multiparty Electoral Data
, 2000
Abstract

Cited by 1 (1 self)
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. Their model is, in principle, analogous to what least squares regression provides American politics researchers in that two-party system. KK applied this model to three-party elections in England and revealed a variety of new features of incumbency advantage and where each party pulls support from. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works quicker in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on KK's (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that i...
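The compositional structure that makes this problem hard can be seen in a toy version: with J parties, vote shares live on a simplex, and a common approximation regresses the J-1 log-ratios on covariates equation by equation. This sketch uses plain least squares on simulated data — a much cruder tool than Katz and King's multivariate model — and every name and value in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n, J = 400, 4                          # districts, parties (party J = baseline)
x = rng.normal(size=n)
Xd = np.column_stack([np.ones(n), x])

# hypothetical "true" intercepts and slopes for the J-1 log-ratios
B = np.array([[0.5, -0.3, 0.2],
              [1.0, 0.4, -0.6]])
eta = Xd @ B + rng.normal(0.0, 0.3, size=(n, J - 1))

# simulated vote shares on the simplex (softmax of the log-ratios)
shares = np.exp(np.column_stack([eta, np.zeros(n)]))
shares /= shares.sum(axis=1, keepdims=True)

# approximate fit: log-ratio transform, then least squares per equation
lr = np.log(shares[:, : J - 1] / shares[:, [J - 1]])
Bhat = np.linalg.lstsq(Xd, lr, rcond=None)[0]
```

Equation-by-equation least squares on log-ratios is fast for any number of parties, which illustrates why approximations of this flavor are attractive when the full multivariate likelihood becomes computationally demanding beyond three parties.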
An Improved Statistical Model for Multiparty Electoral Data
 Paper presented at the Conference of Innovations in Comparative Methodology
, 2001
Abstract
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. Their model is, in principle, analogous to what least squares regression provides American politics researchers in that two-party system. Katz and King applied their model to three-party elections in England and revealed a variety of new features of incumbency advantage and where each party pulls support from. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works quicker in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King's (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer...