Results 1–10 of 533
Estimation and Inference in Large Heterogeneous Panels with a Multifactor Error Structure
, 2004
"... This paper presents a new approach to estimation and inference in panel data models with a multifactor error structure where the unobserved common factors are (possibly) correlated with exogenously given individualspecific regressors, and the factor loadings differ over the cross section units. The ..."
Abstract

Cited by 368 (47 self)
 Add to MetaCart
This paper presents a new approach to estimation and inference in panel data models with a multifactor error structure where the unobserved common factors are (possibly) correlated with exogenously given individual-specific regressors, and the factor loadings differ over the cross-section units. The basic idea behind the proposed estimation procedure is to filter the individual-specific regressors by means of (weighted) cross-section aggregates such that asymptotically, as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by OLS applied to an auxiliary regression where the observed regressors are augmented by (weighted) cross-sectional averages of the dependent variable and the individual-specific regressors. Two different but related problems are addressed: one that concerns the coefficients of the individual-specific regressors, and the other that focuses on the mean of the individual coefficients assumed random. In both cases appropriate estimators, referred to as common correlated effects (CCE) estimators, are proposed and their asymptotic distributions as N → ∞, with T (the time-series dimension) fixed, or as N and T → ∞ (jointly), are derived under different regularity conditions. One important feature of the proposed CCE mean group (CCEMG) estimator is its invariance to the (unknown but fixed) number of unobserved common factors as N and T → ∞ (jointly). The small sample properties of the various pooled estimators are investigated by Monte Carlo experiments that confirm the theoretical derivations and show that the pooled estimators have generally satisfactory small sample properties even for relatively small values of N and T.
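The augmented regression at the heart of the CCE approach can be sketched on simulated data. The panel design below (one common factor, heterogeneous loadings and slopes) is an illustrative assumption, not the paper's Monte Carlo setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 40

# Simulated panel: one unobserved common factor, correlated with x.
f = rng.standard_normal(T)                     # common factor
beta = 1.0 + 0.2 * rng.standard_normal(N)      # heterogeneous slopes, mean 1
x = f[None, :] * rng.uniform(0.5, 1.5, (N, 1)) + rng.standard_normal((N, T))
lam = rng.uniform(0.5, 1.5, N)                 # factor loadings in y
y = beta[:, None] * x + lam[:, None] * f[None, :] + rng.standard_normal((N, T))

# Cross-sectional averages serve as proxies for the unobserved factor.
ybar, xbar = y.mean(axis=0), x.mean(axis=0)

# CCE mean group: unit-by-unit OLS on the augmented regressors, then
# average the estimated slopes across units.
slopes = []
for i in range(N):
    Z = np.column_stack([np.ones(T), x[i], ybar, xbar])
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    slopes.append(coef[1])
ccemg = float(np.mean(slopes))   # should be close to the true mean slope 1.0
```

Because the cross-sectional averages absorb the common factor as N grows, each unit's slope is estimated free of the factor's differential effect, and averaging yields the CCEMG estimate.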
Dynamic Panel Estimation and Homogeneity Testing under Cross-Section Dependence, Cowles Foundation Discussion Paper No. 1362
, 2002
"... Least squares bias in autoregression and dynamic panel regression is shown to be exacerbated in case of cross section dependence. The bias is substantial and is shown to have serious effects in applications like HAC estimation and dynamic halflife response estimation. To address the bias problem, t ..."
Abstract

Cited by 165 (8 self)
 Add to MetaCart
Least squares bias in autoregression and dynamic panel regression is shown to be exacerbated in the case of cross-section dependence. The bias is substantial and is shown to have serious effects in applications like HAC estimation and dynamic half-life response estimation. To address the bias problem, this paper develops a panel approach to median unbiased estimation that takes into account cross-section dependence. The new estimators given here considerably reduce the effects of bias and gain precision from estimating cross-section error correlation. The paper also develops an asymptotic theory for tests of coefficient homogeneity under cross-section dependence, and proposes a modified Hausman test to test for the presence of homogeneous unit roots. An orthogonalization procedure is developed to remove cross-section dependence and permit the use of conventional and meta unit root tests with panel data. Some simulations investigating the finite sample performance of the estimation and test procedures are reported.
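The orthogonalization idea can be illustrated in its simplest special case: when cross-section dependence comes from a single common shock with homogeneous loadings, subtracting the cross-sectional mean each period removes it. This is a stylized sketch of the idea, not the paper's procedure; the simulated panel is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 100

# Panel of AR(1) series sharing a common shock each period
# (a single factor with unit loadings, the simplest dependence case).
common = rng.standard_normal(T)
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = 0.5 * y[:, t - 1] + common[t] + rng.standard_normal(N)

# Cross-sectional demeaning removes the common component exactly when
# loadings are homogeneous, leaving near-independent idiosyncratic series
# to which conventional unit-root tests can then be applied one by one.
y_orth = y - y.mean(axis=0, keepdims=True)

# Average absolute off-diagonal correlation should now be small.
resid_corr = np.corrcoef(y_orth)
off_diag = resid_corr[~np.eye(N, dtype=bool)]
avg_dep = float(np.abs(off_diag).mean())
```

Before demeaning, the common shock induces a cross-unit correlation of roughly 0.5 in this design; afterward only sampling noise remains.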
Are more data always better for factor analysis?
 Journal of Econometrics
, 2006
"... Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we question whether it is possible to use more series to extract the factors a ..."
Abstract

Cited by 148 (0 self)
 Add to MetaCart
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we ask whether it is possible that using more series to extract the factors yields factors that are less useful for forecasting, and the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 prescreened series often yield satisfactory or even better results than using all 147 series. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have weak loadings on groups of series. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
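The principal components estimator discussed here can be sketched in a few lines; the simulated panel (two factors, i.i.d. loadings and noise) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, r = 60, 100, 2     # N series, T periods, r true factors

# Simulate a panel with an exact factor structure plus idiosyncratic noise.
F = rng.standard_normal((T, r))          # factors
L = rng.standard_normal((N, r))          # loadings
X = F @ L.T + rng.standard_normal((T, N))

# Principal components estimator: the estimated factors are the top left
# singular vectors of the demeaned data, scaled by sqrt(T).
X = X - X.mean(axis=0)
u, s, vt = np.linalg.svd(X, full_matrices=False)
F_hat = u[:, :r] * np.sqrt(T)

# The estimated factors span the true ones only up to rotation, so check
# the R^2 of regressing each true factor on the estimated factor space.
coef, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
resid = F - F_hat @ coef
r2 = 1 - resid.var(axis=0) / F.var(axis=0)   # both entries near 1
```

With clean (uncorrelated) idiosyncratic errors, adding series improves the factor estimates; the paper's point is that cross-correlated errors or dominated factors can break this pattern.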
Monetary Policy in a Data-Rich Environment
 Journal of Monetary Economics
, 2002
"... Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasi ..."
Abstract

Cited by 145 (3 self)
 Add to MetaCart
Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson (1999a,b), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson's result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to "real-time") data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an "expert system" that could aggregate diverse information and provide benchmark policy settings. *Prepared for a conference on "Monetary Policy Under Incomplete Information".
A PANIC Attack on Unit Roots and Cointegration
, 2003
"... This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC – a ‘Panel Analysis of Nonstationarity in Idiosyncratic and Common components’. PANIC consists of univariate and ..."
Abstract

Cited by 136 (3 self)
 Add to MetaCart
This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC – a ‘Panel Analysis of Nonstationarity in Idiosyncratic and Common components’. PANIC consists of univariate and panel tests with a number of novel features. It can detect whether the nonstationarity is pervasive, or variable-specific, or both. It tests the components of the data instead of the observed series. Inference is therefore more accurate when the components have different orders of integration. PANIC also permits the construction of valid panel tests even when cross-section correlation invalidates pooling of statistics constructed using the observed data. The key to PANIC is consistent estimation of the components even when the regressions are individually spurious. We provide a rigorous theory for estimation and inference. In Monte Carlo simulations, the tests have very good size and power. PANIC is applied to a panel of inflation series.
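The key PANIC step, estimating the common and idiosyncratic components from first differences and cumulating back to levels, can be sketched as follows. The one-factor random-walk design is an illustrative assumption, and the unit-root tests that would follow are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 40, 200

# One nonstationary common factor (random walk), stationary idiosyncratic
# errors: every observed series is I(1) through the common component.
f = np.cumsum(rng.standard_normal(T))
lam = rng.uniform(0.5, 1.5, N)
e = rng.standard_normal((N, T))          # I(0) idiosyncratic components
X = lam[:, None] * f[None, :] + e

# Extract the factor from first differences (consistent whether or not the
# components are stationary), then cumulate back to levels.
dX = np.diff(X, axis=1).T                # (T-1) x N
dX = dX - dX.mean(axis=0)
u, s, vt = np.linalg.svd(dX, full_matrices=False)
df_hat = u[:, 0] * np.sqrt(T - 1)        # estimated factor, in differences
f_hat = np.concatenate([[0.0], np.cumsum(df_hat)])

# Unit-root tests would then be applied to f_hat and to the residuals from
# projecting each observed series on f_hat (the idiosyncratic estimates).
level_corr = abs(float(np.corrcoef(f_hat, f)[0, 1]))
```

The cumulated estimate tracks the true nonstationary factor up to sign and scale, which is what makes testing the components, rather than the observed series, feasible.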
Panel Data Models with Interactive Fixed Effects
, 2005
"... This paper considers large N and large T panel data models with unobservable multiple interactive effects. These models are useful for both micro and macro econometric modelings. In earnings studies, for example, workers ’ motivation, persistence, and diligence combined to influence the earnings in ..."
Abstract

Cited by 117 (6 self)
 Add to MetaCart
This paper considers large N and large T panel data models with unobservable multiple interactive effects. These models are useful for both micro and macro econometric modeling. In earnings studies, for example, workers’ motivation, persistence, and diligence combine to influence earnings in addition to the usual argument of innate ability. In macroeconomics, the interactive effects represent unobservable common shocks and their heterogeneous responses over cross sections. Since the interactive effects are allowed to be correlated with the regressors, they are treated as fixed effects parameters to be estimated along with the common slope coefficients. The model is estimated by the least squares method, which provides the interactive-effects counterpart of the within estimator. We first consider model identification, and then derive the rate of convergence and the limiting distribution of the interactive-effects estimator of the common slope coefficients. The estimator is shown to be √NT consistent. This rate is valid even in the presence of correlations and heteroskedasticities in both dimensions, a striking contrast with the fixed-T framework, in which serial correlation and heteroskedasticity imply a loss of identification. The asymptotic distribution is not necessarily centered at zero. Bias-corrected estimators are derived. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. We also derive identification conditions for models with grand mean, time-invariant regressors, and common regressors. It is shown that there exists a set of necessary and sufficient identification conditions for those models. Given identification, the rate of convergence and limiting results continue to hold. Key words and phrases: incidental parameters, additive effects, interactive effects, factor
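The least squares estimator described here can be computed by alternating between the slope coefficient and the factor structure; the rank-one simulated design below is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 40, 40

# Panel with one interactive effect lambda_i * f_t, correlated with x,
# so pooled OLS ignoring the effect would be biased.
f = rng.standard_normal(T)
lam = rng.standard_normal(N)
x = 0.5 * lam[:, None] * f[None, :] + rng.standard_normal((N, T))
y = 2.0 * x + lam[:, None] * f[None, :] + rng.standard_normal((N, T))

# Alternate: (i) OLS of y minus the common component on x gives beta;
# (ii) the best rank-one fit of y - beta*x (via SVD) updates the
# interactive effect. This is the interactive-effects analogue of the
# within estimator.
beta, common = 0.0, np.zeros((N, T))
for _ in range(200):
    beta = float(np.sum(x * (y - common)) / np.sum(x * x))
    w = y - beta * x
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    common = s[0] * np.outer(u[:, 0], vt[0])   # rank-1 factor structure
# beta should end up close to the true slope of 2.0
```

Treating the loadings and factors as parameters, rather than projecting them out with additive dummies, is what allows the effects to be correlated with the regressors.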
Integer Factorization
, 2005
"... Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time from trial division to the number field sieve. It is the first description of the number field sieve fro ..."
Abstract

Cited by 113 (8 self)
 Add to MetaCart
Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time, from trial division to the number field sieve. It is the first description of the number field sieve from an algorithmic point of view, making it available to computer scientists for implementation. I have implemented the general number field sieve from this description, and it is made publicly available on the Internet. This means that a reference implementation is available for future developers, which can also be used as a framework where some of the sub
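Trial division, the starting point of the survey, is simple enough to state in full; the helper below is a generic textbook version, not code from the thesis:

```python
def trial_division(n: int) -> list[int]:
    """Factor n by trial division: test candidate divisors up to sqrt(n).

    The oldest factoring method, practical only when all prime factors
    are small; the later sieve algorithms exist precisely because this
    scales exponentially in the bit length of n's smallest factor.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2      # after 2, test odd candidates only
    if n > 1:
        factors.append(n)            # whatever remains is prime
    return factors

print(trial_division(25968))         # [2, 2, 2, 2, 3, 541]
```

Any remaining cofactor greater than 1 after the loop must be prime, since all divisors up to its square root have been ruled out.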
Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions
, 2003
"... We consider the situation when there is a large number of series, N,eachwithTob servations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components an ..."
Abstract

Cited by 102 (12 self)
 Add to MetaCart
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor-augmented regressions are √T consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is min(√T, √N) consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
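The two-step procedure, principal components followed by a factor-augmented regression, can be sketched as follows; the single-factor simulated design is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 80, 120

# One latent factor drives both a large panel X and the target variable y.
f = rng.standard_normal(T)
loadings = rng.standard_normal(N)
X = np.outer(f, loadings) + rng.standard_normal((T, N))
y = 1.5 * f + 0.5 * rng.standard_normal(T)

# Step 1: estimate the factor by principal components of the panel.
Xc = X - X.mean(axis=0)
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
f_hat = u[:, 0] * np.sqrt(T)

# Step 2: least squares regression of y on the estimated factor.
Z = np.column_stack([np.ones(T), f_hat])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
slope = float(abs(coef[1]))   # recovers the true coefficient 1.5 in
                              # magnitude (the factor's sign is not identified)
```

The paper's point is that treating f_hat as if it were observed is only innocuous when T/N → 0; otherwise prediction intervals must account for the "estimated regressors" step.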
The Generalized Dynamic Factor Model: one-sided estimation and forecasting
"... This paper proposes a new forecasting method which makes use of information from a large panel of time series. As in Forni, Hallin, Lippi and Reichlin (2000), and in Stock and Watson (2002a,b), the method is based on a dynamic factor model. We argue that our method improves upon a standard principal ..."
Abstract

Cited by 99 (7 self)
 Add to MetaCart
This paper proposes a new forecasting method which makes use of information from a large panel of time series. As in Forni, Hallin, Lippi and Reichlin (2000), and in Stock and Watson (2002a,b), the method is based on a dynamic factor model. We argue that our method improves upon a standard principal component predictor in that, first, it fully exploits the dynamic covariance structure of the panel and, second, it weights the variables according to their estimated signal-to-noise ratios. We provide asymptotic results for our optimal forecast estimator and show that in finite samples our forecast outperforms the standard principal components predictor.