Results 1-10 of 186
Determining the Number of Factors in Approximate Factor Models
, 2000
Abstract

Cited by 561 (30 self)
In this paper we develop some statistical theory for factor models of large dimensions. The focus is the determination of the number of factors, which is an unresolved issue in the rapidly growing literature on multifactor models. We propose a panel Cp criterion and show that the number of factors can be consistently estimated using the criterion. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criterion yields almost precise estimates of the number of factors for configurations of the panel data encountered in practice. The idea that variations in a large number of economic variables can be modelled by a small number of reference variables is appealing and is used in many economic analyses. In the finance literature, the arbitrage pricing theory (APT) of Ross (1976) assumes that a small number of factors can be used to explain a large number of asset returns.
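The selection idea in this abstract can be illustrated with a minimal reimplementation: fit k principal-component factors, measure the mean squared residual V(k), and minimize log V(k) plus a penalty that grows in k and shrinks with N and T. The penalty below follows the published IC_p2-style formula, but the code is an illustrative sketch, not the authors' own implementation, and the simulated panel in the test is hypothetical.

```python
import numpy as np

def select_num_factors(X, kmax=8):
    """Information-criterion estimator of the number of factors in a
    T x N panel X (sketch in the spirit of Bai-Ng's IC_p2)."""
    T, N = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Penalty per factor: vanishes as N, T grow, but slowly enough
    # to rule out overfitting with spurious factors.
    penalty = ((N + T) / (N * T)) * np.log(min(N, T))
    best_k, best_ic = 1, np.inf
    for k in range(1, kmax + 1):
        common = (U[:, :k] * s[:k]) @ Vt[:k]      # fitted common component
        V = np.sum((X - common) ** 2) / (N * T)   # mean squared residual
        ic = np.log(V) + k * penalty
        if ic < best_ic:
            best_ic, best_k = ic, k
    return best_k
```

On a simulated panel with three strong factors, the criterion recovers k = 3, consistent with the "almost precise estimates" reported in the abstract's simulations.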
Are more data always better for factor analysis?
 Journal of Econometrics
, 2006
Abstract

Cited by 151 (0 self)
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we ask whether it is possible to use more series to extract the factors and yet obtain factors that are less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 prescreened series often yield satisfactory or even better results than using all 147 series. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have weak loadings on groups of series. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
Monetary Policy in a Data-Rich Environment
 Journal of Monetary Economics
, 2002
Abstract

Cited by 149 (3 self)
Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson (1999a,b), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson’s result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to “real-time”) data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an “expert system” that could aggregate diverse information and provide benchmark policy settings.
A PANIC Attack on Unit Roots and Cointegration
, 2003
Abstract

Cited by 142 (3 self)
This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC – a ‘Panel Analysis of Nonstationarity in Idiosyncratic and Common components’. PANIC consists of univariate and panel tests with a number of novel features. It can detect whether the nonstationarity is pervasive, or variable-specific, or both. It tests the components of the data instead of the observed series. Inference is therefore more accurate when the components have different orders of integration. PANIC also permits the construction of valid panel tests even when cross-section correlation invalidates pooling of statistics constructed using the observed data. The key to PANIC is consistent estimation of the components even when the regressions are individually spurious. We provide a rigorous theory for estimation and inference. In Monte Carlo simulations, the tests have very good size and power. PANIC is applied to a panel of inflation series.
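The estimation step behind PANIC (the part that sidesteps spurious levels regressions) can be sketched roughly as: difference the data, extract factors from the differences by principal components, then cumulate the estimated factor and idiosyncratic increments back to levels. The unit-root tests on each component, which are the paper's actual contribution, are omitted here, and the function name and simulation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def panic_components(X, r):
    """Recover r common factors and the idiosyncratic components of a
    T x N panel X in levels, via PCA on first differences (sketch of
    PANIC's estimation step; no unit-root testing is performed)."""
    dX = np.diff(X, axis=0)               # first differences, (T-1) x N
    T1, N = dX.shape
    U, s, Vt = np.linalg.svd(dX, full_matrices=False)
    df = np.sqrt(T1) * U[:, :r]           # estimated differenced factors
    lam = dX.T @ df / T1                  # loadings, N x r (df'df/T1 = I)
    de = dX - df @ lam.T                  # differenced idiosyncratic part
    F = np.cumsum(df, axis=0)             # factors cumulated to levels
    E = np.cumsum(de, axis=0)             # idiosyncratic parts in levels
    return F, lam, E
```

In a simulation with a single random-walk common factor, the cumulated factor estimate tracks the true factor closely (up to sign and scale), which is what makes separate testing of the common and idiosyncratic components feasible.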
Integer Factorization
, 2005
Abstract

Cited by 123 (8 self)
Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time, from trial division to the number field sieve. It is the first description of the number field sieve from an algorithmic point of view, making it available to computer scientists for implementation. I have implemented the general number field sieve from this description, and it is made publicly available on the Internet. This means that a reference implementation is available to future developers, which can also be used as a framework where some of the sub
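Of the algorithms in the historical span the thesis covers, trial division is simple enough to sketch in a few lines; the number field sieve at the other end of that span is far beyond a short example. This is a minimal illustration, not code from the thesis.

```python
def trial_division(n):
    """Factor n >= 1 into a sorted list of primes by trial division,
    testing 2 and then only odd candidates up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2   # after 2, skip even candidates
    if n > 1:                     # whatever remains is prime
        factors.append(n)
    return factors
```

The quadratic cost in the size of the smallest prime factor is exactly why the thesis's later chapters move to sieve methods for cryptographically sized integers.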
Determining the number of primitive shocks in factor models
 Journal of Business and Economic Statistics
, 2007
Abstract

Cited by 96 (0 self)
A widely held but untested assumption underlying macroeconomic analysis is that the number of shocks driving economic fluctuations, q, is small. In this article we associate q with the number of dynamic factors in a large panel of data. We propose a methodology to determine q without having to estimate the dynamic factors. We first estimate a VAR in r static factors, where the factors are obtained by applying the method of principal components to a large panel of data, and then compute the eigenvalues of the residual covariance or correlation matrix. We then test whether these eigenvalues satisfy an asymptotically shrinking bound that reflects sampling error. We apply the procedure to determine the number of primitive shocks in a large number of macroeconomic time series. An important aspect of the present analysis is to make precise the relationship between the dynamic factors and the static factors, which is a result of independent interest. KEY WORDS: Common shocks; Dynamic factor model; Number of factors; Principal components
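The pipeline described in this abstract (static factors by principal components, a VAR on those factors, then eigenvalues of the VAR residual covariance) can be caricatured as below. One loud caveat: the paper tests the eigenvalues against an asymptotically shrinking bound, whereas this sketch substitutes a fixed variance-share threshold, so it is an assumption-laden illustration rather than the authors' test.

```python
import numpy as np

def count_primitive_shocks(X, r, share=0.05):
    """Rough estimate of the number of primitive shocks q driving r
    static factors in a T x N panel X: count eigenvalues of the VAR(1)
    residual covariance exceeding a fixed share of total variance
    (a simplification of the paper's shrinking asymptotic bound)."""
    T, N = X.shape
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    F = np.sqrt(T) * U[:, :r]                 # static factors, T x r
    Y, Z = F[1:], F[:-1]                      # VAR(1): Y = Z A + u
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    u = Y - Z @ A                             # VAR residuals
    eig = np.linalg.eigvalsh(np.cov(u.T))[::-1]
    return int(np.sum(eig / eig.sum() > share))
```

A panel built from one AR(1) factor and its lag has r = 2 static factors but only q = 1 primitive shock; the residual covariance is then near rank one, so the count comes out at 1.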
Macroeconomic Forecasting in the Euro Area: Country-Specific Versus Area-Wide Information, unpublished manuscript
, 2000
Abstract

Cited by 84 (12 self)
The challenge of forecasting aggregate European economic performance is gaining increasing importance. Euro-area inflation forecasts are needed to implement effectively the European Central Bank’s targets for inflation of the euro. European integration also means that political and business decisions increasingly depend on aggregate European real economic activity,
Understanding changes in international business cycle dynamics.
 Journal of the European Economic Association
, 2005
Abstract

Cited by 82 (0 self)
The volatility of economic activity in most G7 economies has moderated over the past 40 years. Also, despite large increases in trade and openness, G7 business cycles have not become more synchronized. After documenting these facts, we interpret G7 output data using a structural VAR that separately identifies common international shocks, the domestic effects of spillovers from foreign idiosyncratic shocks, and the effects of domestic idiosyncratic shocks. This analysis suggests that, with the exception of Japan, a significant portion of the widespread reduction in volatility is associated with a reduction in the magnitude of the common international shocks. Had the common international shocks in the 1980s and 1990s been as large as they were in the 1960s and 1970s, G7 business cycles would have been substantially more volatile and more highly synchronized than they actually were. (JEL: C3, E5)