Results 1–10 of 35
Has the U.S. Economy Become More Stable? A Bayesian Approach Based on a Markov-Switching Model of the Business Cycle
1999
"... We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have been? 3) What's the nature of the structural break? For this purpose, we employ a Bayesian approach to dealin ..."
Abstract

Cited by 255 (13 self)
 Add to MetaCart
We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have been? 3) What is the nature of the structural break? For this purpose, we employ a Bayesian approach to dealing with a structural break at an unknown changepoint in a Markov-switching model of the business cycle. Empirical results suggest that there has been a structural break in U.S. real GDP growth toward more stabilization, with the posterior mode of the break date around 1984:1. Furthermore, we find that a narrowing gap between growth rates during recessions and booms is at least as important as a decline in the volatility of shocks. Key Words: Bayes Factor, Gibbs Sampling, Marginal Likelihood, Markov-Switching, Stabilization, Structural Break. JEL Classifications: C11, C12, C22, E32. 1. Introduction. In the literature, the issue of postwar stabilization of the U.S. economy relative to the prewar period has...
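The idea of a posterior over an unknown break date can be sketched in a much simpler setting than the paper's Gibbs-sampled Markov-switching model: assume i.i.d. Gaussian segments before and after the break and put a flat prior over candidate dates, so the posterior is proportional to the profile likelihood. The series, break location, and variance sizes below are synthetic illustrations, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "growth" series with a volatility break at t = 120 (hypothetical data).
T, tau_true = 200, 120
y = np.concatenate([rng.normal(0.8, 1.2, tau_true),
                    rng.normal(0.8, 0.4, T - tau_true)])

def seg_loglik(seg):
    """Gaussian log-likelihood of a segment evaluated at its MLE mean and variance."""
    n, s2 = len(seg), seg.var()
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

# Posterior over the break date under a flat prior: p(tau | y) proportional to L(y | tau).
taus = np.arange(10, T - 10)
ll = np.array([seg_loglik(y[:t]) + seg_loglik(y[t:]) for t in taus])
post = np.exp(ll - ll.max())
post /= post.sum()
tau_hat = int(taus[np.argmax(post)])  # posterior mode of the break date
```

With a large volatility drop, the posterior mass concentrates near the true break, mirroring the paper's sharply identified posterior mode around 1984:1.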
Computation and analysis of multiple structural change models
Journal of Applied Econometrics, 2003
"... In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. ..."
Abstract

Cited by 150 (4 self)
 Add to MetaCart
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. We first address the problem of estimation of the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires least-squares operations of order at most O(T^2) for any number of breaks. Our method can be applied to both pure and partial structural change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS ...
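The dynamic-programming step this abstract describes can be sketched for the simplest case, a pure structural change model in the mean: precompute the SSR of every admissible segment from cumulative sums (the O(T^2) least-squares operations), then recursively combine segments to find the global minimizer. The break count m, minimum segment length h, and synthetic data below are illustrative choices, not the paper's.

```python
import numpy as np

def segment_ssr(y, h):
    """ssr[i, j]: SSR of a constant-mean fit to y[i..j] (inclusive), segments >= h long.
    Built from cumulative sums, so all O(T^2) segment fits are cheap."""
    T = len(y)
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    ssr = np.full((T, T), np.inf)
    for i in range(T):
        for j in range(i + h - 1, T):
            n = j - i + 1
            s = c1[j + 1] - c1[i]
            ssr[i, j] = (c2[j + 1] - c2[i]) - s * s / n
    return ssr

def global_breaks(y, m, h=5):
    """Dynamic programming: globally minimize total SSR over m break dates."""
    T = len(y)
    ssr = segment_ssr(y, h)
    dp = np.full((m + 1, T), np.inf)   # dp[k, t]: best SSR for y[0..t] with k breaks
    arg = np.zeros((m + 1, T), dtype=int)
    dp[0] = ssr[0]
    for k in range(1, m + 1):
        for t in range((k + 1) * h - 1, T):
            # s: last index before the final segment, which is y[s+1..t]
            s_lo, s_hi = k * h - 1, t - h
            cand = dp[k - 1, s_lo:s_hi + 1] + ssr[s_lo + 1:s_hi + 2, t]
            j = int(np.argmin(cand))
            dp[k, t], arg[k, t] = cand[j], s_lo + j
    # backtrack: report the first index of each post-break segment
    breaks, t = [], T - 1
    for k in range(m, 0, -1):
        t = int(arg[k, t])
        breaks.append(t + 1)
    return sorted(breaks)

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 0.5, 60),
                    rng.normal(3.0, 0.5, 60),
                    rng.normal(-2.0, 0.5, 60)])
b = global_breaks(y, m=2)
```

The recursion reuses dp[k-1, .] for every t, which is what keeps the search global yet polynomial regardless of the number of breaks.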
Selection of estimation window in the presence of breaks
Journal of Econometrics, 2007
"... In situations where a regression model is subject to one or more breaks it is shown that it can be optimal to use prebreak data to estimate the parameters of the model used to compute outofsample forecasts. The issue of how best to exploit the tradeo that might exist between bias and forecast er ..."
Abstract

Cited by 32 (6 self)
 Add to MetaCart
In situations where a regression model is subject to one or more breaks, it is shown that it can be optimal to use pre-break data to estimate the parameters of the model used to compute out-of-sample forecasts. The issue of how best to exploit the trade-off that might exist between bias and forecast error variance is explored and illustrated for the multivariate regression model under the assumption of strictly exogenous regressors. In practice, when this assumption cannot be maintained and both the time and size of the breaks are unknown, the optimal choice of the observation window will be subject to further uncertainties that make exploiting the bias-variance trade-off difficult. To that end we propose a new set of cross-validation methods for selection of a single estimation window, and weighting or pooling methods for combination of forecasts based on estimation windows of different lengths. Monte Carlo simulations are used to show when these procedures work well compared with methods that ignore the presence of breaks. JEL Classifications: C22, C53.
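The bias-variance trade-off behind this result can be illustrated with a toy Monte Carlo: when a slope break is small and recent, estimating on the full window (including biased pre-break data) can beat the short post-break window because the variance reduction outweighs the bias. The break size, timing, and exogenous-regressor design below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
reps, T, tau = 5000, 100, 90          # break only 10 periods before the forecast origin
beta_pre, beta_post = 1.0, 1.2        # a small slope break (hypothetical sizes)
err_full, err_post = [], []
for _ in range(reps):
    x = rng.normal(size=T + 1)        # strictly exogenous regressor
    beta = np.where(np.arange(T + 1) < tau, beta_pre, beta_post)
    y = beta * x + rng.normal(size=T + 1)

    def ols_slope(a, b):
        return (x[a:b] @ y[a:b]) / (x[a:b] @ x[a:b])

    # Forecast y[T]: window including pre-break data vs post-break data only.
    err_full.append(y[T] - ols_slope(0, T) * x[T])
    err_post.append(y[T] - ols_slope(tau, T) * x[T])

msfe_full = float(np.mean(np.square(err_full)))
msfe_post = float(np.mean(np.square(err_post)))
```

Making the break larger or the post-break sample longer flips the ranking, which is exactly the trade-off the paper's window-selection and pooling methods try to navigate when the break date and size are unknown.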
Stochastic Permanent Breaks
Review of Economics and Statistics, 1998
"... This paper aims to bridge the gap between processes where shocks are permanent and those with transitory shocks by formulating a process in which the long run impact of each innovation is time varying and stochastic. Frequent transitory shocks are supplemented by occasional permanent shifts. The sto ..."
Abstract

Cited by 31 (0 self)
 Add to MetaCart
This paper aims to bridge the gap between processes where shocks are permanent and those with transitory shocks by formulating a process in which the long-run impact of each innovation is time-varying and stochastic. Frequent transitory shocks are supplemented by occasional permanent shifts. The stochastic permanent breaks (STOPBREAK) process is based on the premise that a shock is more likely to be permanent if it is large than if it is small. This formulation is motivated by a class of processes that undergo random structural breaks. Consistency and asymptotic normality of quasi-maximum likelihood estimates are established, and locally best hypothesis tests of the null of a random walk are developed. The model is applied to relative prices of pairs of stocks and significant test statistics result. KEYWORDS: Structural breaks, nonlinear moving average, unit roots, quasi-maximum likelihood estimation, Neyman-Pearson testing, locally best test, temporary cointegration. 1. Introduction. Time series analysts tend to draw a sharp line between processes where shocks have a permanent effect and those where they do not. The most notable example of this is the distinction between stationary AR(1) processes, where all shocks are transitory, and the random walk. As the autoregressive root approaches one, the rate at which shocks are expected to decay decreases, but they remain transitory. This paper aims to bridge the gap between transience and permanence by formulating a process in which the long-run impact of each observation is time-varying and stochastic. At one extreme all innovations are transitory and at the other, all shocks are permanent.
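One common formulation of the STOPBREAK idea (a simulation sketch, not the paper's estimation machinery) sets y_t = m_t + eps_t with m_t = m_{t-1} + q_{t-1} eps_{t-1} and q_t = eps_t^2 / (gamma + eps_t^2), so the permanence of a shock rises smoothly with its size; the parameter value gamma = 5 below is an arbitrary illustration.

```python
import numpy as np

def stopbreak(eps, gamma):
    """STOPBREAK-style process: y_t = m_t + eps_t with
    m_t = m_{t-1} + q_{t-1} * eps_{t-1} and q_t = eps_t^2 / (gamma + eps_t^2),
    so large shocks (q near 1) are close to permanent, small ones transitory."""
    T = len(eps)
    m = np.zeros(T)
    for t in range(1, T):
        q_prev = eps[t - 1] ** 2 / (gamma + eps[t - 1] ** 2)
        m[t] = m[t - 1] + q_prev * eps[t - 1]
    return m + eps

rng = np.random.default_rng(2)
eps = rng.normal(size=500)
y = stopbreak(eps, gamma=5.0)
```

The two extremes in the abstract fall out of gamma: as gamma grows, q goes to 0 and the process collapses to white noise (all shocks transitory); as gamma goes to 0, q goes to 1 and the process becomes the random walk (all shocks permanent).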
How Costly is it to Ignore Breaks when Forecasting the Direction of a Time Series?
2003
"... Empirical evidence suggests that many macroeconomic and financial time series are subject to occasional structural breaks. In this paper we present analytical results quantifying the effects of such breaks on the correlation between the forecast and the realization and on the ability to forecast ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
Empirical evidence suggests that many macroeconomic and financial time series are subject to occasional structural breaks. In this paper we present analytical results quantifying the effects of such breaks on the correlation between the forecast and the realization and on the ability to forecast the sign or direction of a time series that is subject to breaks. Our results suggest that it can be very costly to ignore breaks. Forecasting approaches that condition on the most recent break are likely to perform better than unconditional approaches that use expanding or rolling estimation windows, provided that the break is reasonably large.
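The cost of ignoring a break when forecasting direction can be made concrete with a toy simulation: after a sign flip in the mean, an expanding window keeps predicting the old sign while a window conditioned on the (here, known) break date adapts. The break size and timing are illustrative assumptions, not the paper's analytical setup.

```python
import numpy as np

rng = np.random.default_rng(3)
reps, T, tau = 3000, 120, 100         # mean flips sign 20 periods before the forecast
hits_expanding = 0
hits_postbreak = 0
for _ in range(reps):
    mu = np.where(np.arange(T + 1) < tau, 0.5, -0.5)
    y = mu + rng.normal(size=T + 1)
    sign_expanding = np.sign(y[:T].mean())     # ignores the break
    sign_postbreak = np.sign(y[tau:T].mean())  # conditions on the break date
    hits_expanding += int(sign_expanding == np.sign(y[T]))
    hits_postbreak += int(sign_postbreak == np.sign(y[T]))

rate_expanding = hits_expanding / reps
rate_postbreak = hits_postbreak / reps
```

Because the pre-break observations dominate the expanding-window mean, its sign forecast is systematically wrong after the flip, while the post-break window recovers most of the attainable directional accuracy.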
Using Control Charts to Monitor Process and Product Profiles
Submitted to Journal of Quality Technology, 2003
"... In most statistical process control (SPC) applications, it is assumed that the quality of a process or product can be adequately represented by the distribution of a univariate quality characteristic or by the general multivariate distribution of a vector consisting of several correlated quality cha ..."
Abstract

Cited by 18 (2 self)
 Add to MetaCart
In most statistical process control (SPC) applications, it is assumed that the quality of a process or product can be adequately represented by the distribution of a univariate quality characteristic or by the general multivariate distribution of a vector consisting of several correlated quality characteristics. In many practical situations, however, the quality of a process or product is better characterized and summarized by a relationship between a response variable and one or more explanatory variables. Thus, at each sampling stage, one observes a collection of data points that can be represented by a curve (or profile). In some calibration applications, the profile can be represented adequately by a simple straight-line model, while in other applications, more complicated models are needed. In this expository paper, we discuss some of the general issues involved in using control charts to monitor such process- and product-quality profiles and review the SPC literature on the topic. We relate this application to functional data analysis and review applications involving linear profiles, nonlinear profiles, and the use of splines and wavelets. We strongly ...
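For the straight-line case mentioned in the abstract, profile monitoring reduces to charting the fitted intercept and slope of each sampled profile. The sketch below uses Shewhart-style three-sigma limits estimated from a Phase I sample; the in-control line, noise level, and limit construction are illustrative assumptions, not a specific method from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)          # fixed measurement points for every profile

def fit_line(y):
    """OLS intercept and slope of one observed profile."""
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Phase I: estimate control limits from 50 in-control profiles
# (hypothetical in-control line y = 1 + 2x with noise sd 0.1).
phase1 = np.array([fit_line(1 + 2 * x + rng.normal(scale=0.1, size=x.size))
                   for _ in range(50)])
center = phase1.mean(axis=0)
spread = phase1.std(axis=0, ddof=1)
lcl, ucl = center - 3 * spread, center + 3 * spread

def signals(y):
    """Phase II: flag a new profile whose intercept or slope leaves the limits."""
    c = fit_line(y)
    return bool(np.any((c < lcl) | (c > ucl)))
```

A profile with a shifted slope (say y = 1 + 3x plus noise) lands many standard errors outside the slope limits and is flagged, while in-control profiles rarely signal.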
Learning and Shifts in Long-Run Productivity Growth
Federal Reserve Bank of San Francisco Working Paper No. 2004-04, 2004
"... for comments on earlier versions of this paper. We also thank Kirk Moore for excellent research assistance and Judith Goff for editorial assistance. The views expressed herein are those of the authors and do not necessarily reflect those of the Board of Governors of the Federal Reserve System or the ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
... for comments on earlier versions of this paper. We also thank Kirk Moore for excellent research assistance and Judith Goff for editorial assistance. The views expressed herein are those of the authors and do not necessarily reflect those of the Board of Governors of the Federal Reserve System or their staff. Shifts in the long-run rate of productivity growth—such as those experienced by the U.S. economy in the 1970s and 1990s—are difficult, in real time, to distinguish from transitory fluctuations. In this paper, we analyze the evolution of forecasts of long-run productivity growth during the 1970s and 1990s and examine in the context of a dynamic general equilibrium model the consequences of gradual real-time learning on the responses to shifts in the long-run productivity growth rate. We find that a simple updating rule based on an estimated Kalman filter model using real-time data describes economists’ long-run productivity growth forecasts during these periods extremely well. We then show that incorporating this process of learning has profound implications for the effects of shifts in trend productivity growth and can dramatically improve the model’s ability to generate responses that resemble historical experience.
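The "simple updating rule" from an estimated Kalman filter can be illustrated by its constant-gain form, which is the steady-state Kalman gain of a local-level model: beliefs about trend growth move a fixed fraction of each period's forecast error. The gain of 0.05, the initial belief, and the 2%-to-3% trend shift below are illustrative numbers, not the paper's estimates.

```python
import numpy as np

def perceived_trend(growth, gain=0.05, mu0=2.0):
    """Constant-gain updating of the perceived long-run growth rate:
    mu_t = mu_{t-1} + gain * (g_t - mu_{t-1})."""
    mu = np.empty(len(growth))
    prev = mu0
    for t, g in enumerate(growth):
        prev += gain * (g - prev)
        mu[t] = prev
    return mu

# A shift in trend growth from 2% to 3% is only learned gradually.
g = np.concatenate([np.full(40, 2.0), np.full(40, 3.0)])
path = perceived_trend(g)
```

Beliefs converge geometrically at rate (1 - gain) per period, so even 40 periods after the shift the perceived trend still lies noticeably below the new 3% rate, which is the gradual-learning mechanism the paper emphasizes.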
Small Sample Properties of Forecasts from Autoregressive Models under Structural Breaks
Journal of Econometrics, 2005
"... This paper develops a theoretical framework for the analysis of smallsample properties of forecasts from general autoregressive models under structural breaks. Finitesample results for the mean squared forecast error of onestep ahead forecasts are derived, both conditionally and unconditionally, a ..."
Abstract

Cited by 17 (9 self)
 Add to MetaCart
This paper develops a theoretical framework for the analysis of small-sample properties of forecasts from general autoregressive models under structural breaks. Finite-sample results for the mean squared forecast error of one-step-ahead forecasts are derived, both conditionally and unconditionally, and numerical results for different types of break specifications are presented. It is established that forecast errors are unconditionally unbiased even in the presence of breaks in the autoregressive coefficients and/or error variances, so long as the unconditional mean of the process remains unchanged. Insights from the theoretical analysis are demonstrated in Monte Carlo simulations and on a range of macroeconomic time series from G7 countries. The results are used to draw practical recommendations for the choice of estimation window when forecasting from autoregressive models subject to breaks. JEL Classifications: C22, C53.
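The unconditional-unbiasedness claim can be checked in a toy Monte Carlo: let the AR(1) coefficient break while the intercept adjusts so the unconditional mean stays fixed, fit a no-break AR(1) by OLS, and average the one-step forecast errors. The coefficient values, sample split, and replication count are illustrative, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, rho_pre, rho_post = 1.0, 0.3, 0.7   # unconditional mean held fixed at mu
T, tau, reps = 100, 50, 4000
errors = []
for _ in range(reps):
    y = np.empty(T + 1)
    y[0] = mu
    for t in range(1, T + 1):
        rho = rho_pre if t < tau else rho_post
        # intercept mu * (1 - rho) keeps the unconditional mean at mu across the break
        y[t] = mu * (1 - rho) + rho * y[t - 1] + rng.normal()
    # fit a no-break AR(1) by OLS on the estimation sample, forecast one step ahead
    Y = y[1:T]
    X = np.vstack([np.ones(T - 1), y[:T - 1]]).T
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    errors.append(y[T] - (b[0] + b[1] * y[T - 1]))

bias = float(np.mean(errors))
```

The average forecast error is close to zero despite the ignored coefficient break; shifting the unconditional mean across the break instead (not shown) would produce systematically biased errors, matching the paper's condition.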
What’s Happened to the Phillips Curve?
FEDS Paper No. 1999-49, Board of Governors of the Federal Reserve System, 1999
"... The simultaneous occurrence in the second half of the 1990s of low and falling price inflation and low unemployment appears to be at odds with the properties of a standard Phillips curve. We find this result in a model in which inflation depends on the unemployment rate, past inflation, and conventi ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
The simultaneous occurrence in the second half of the 1990s of low and falling price inflation and low unemployment appears to be at odds with the properties of a standard Phillips curve. We find this result in a model in which inflation depends on the unemployment rate, past inflation, and conventional measures of price supply shocks. We show that, in such a model, long lags of past inflation are preferred to short lags, and that with long lags, the NAIRU is estimated precisely but is unstable in the 1990s. Two alternative modifications to the standard Phillips curve restore stability. One replaces the unemployment rate with capacity utilization. Although this change leads to more accurate inflation predictions in the recent period, the predictive ability of the utilization rate is not superior to that of the unemployment rate for the 1955 to 1998 sample as a whole. The second, and preferred, modification augments the standard Phillips curve to include an “error-correction” mechanism involving the markup of prices over trend unit labor costs. With the markup relatively high through much of the 1990s, this channel is estimated to have held down inflation over this period, and thus provides an explanation of the recent low inflation.
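The augmented specification the abstract describes amounts to adding a lagged markup term to a Phillips-curve regression. The sketch below simulates data from a stylized version with hypothetical coefficients (persistence 0.7, unemployment-gap slope -0.3, error-correction coefficient -0.2) and recovers them by OLS; none of these numbers, nor the single-lag structure, comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
u_gap = rng.normal(0.0, 1.0, n)       # unemployment gap (hypothetical exogenous series)
markup = rng.normal(0.0, 1.0, n)      # markup of prices over trend unit labor costs
pi = np.empty(n)
pi[0] = 2.0
for t in range(1, n):
    pi[t] = (0.7 * pi[t - 1] - 0.3 * u_gap[t]
             - 0.2 * markup[t - 1] + rng.normal(scale=0.2))

# OLS regression: inflation on lagged inflation, the gap, and the lagged markup.
X = np.vstack([np.ones(n - 1), pi[:n - 1], u_gap[1:], markup[:n - 1]]).T
b, *_ = np.linalg.lstsq(X, pi[1:], rcond=None)
# b should approximately recover [0, 0.7, -0.3, -0.2]
```

A negative coefficient on the lagged markup is the error-correction channel: a markup above trend pulls subsequent inflation down, which is the mechanism the paper credits for holding down 1990s inflation.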
Confidence Sets for the Date of a Single Break in Linear Time Series Regressions
2004
"... This paper considers the problem of constructing confidence sets for the date of a single break in a linear time series regression. We establish analytically and by small sample simulation that the currently standard method in econometrics to construct such confidence intervals has a coverage rate ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
This paper considers the problem of constructing confidence sets for the date of a single break in a linear time series regression. We establish analytically and by small sample simulation that the currently standard method in econometrics to construct such confidence intervals has a coverage rate far below nominal levels when breaks are of moderate magnitude. Given that breaks of moderate magnitude are a theoretically and empirically highly relevant phenomenon, we proceed to develop an appropriate alternative. We suggest constructing confidence sets by inverting a sequence of tests. Each of the tests maintains a specific break date under the null, and rejects when a break occurs elsewhere. By inverting a certain variant of modified locally best invariant tests, we ensure that the asymptotic critical value does not depend on the maintained break date. A valid confidence set can hence be obtained by assessing ...
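The test-inversion principle can be illustrated with a deliberately simplified statistic: for each maintained break date, compute an SSR-based likelihood-ratio-type quantity and keep the dates where it falls below a critical value. This is only a sketch of the inversion idea; the paper's actual tests are modified locally best invariant statistics with properly derived critical values, and the cutoff below is an arbitrary illustration.

```python
import numpy as np

def break_confidence_set(y, crit, trim=5):
    """Collect candidate break dates tau whose LR-type statistic
    SSR(tau) - min_tau SSR(tau) stays below a critical value."""
    T = len(y)

    def ssr(seg):
        return float(np.sum((seg - seg.mean()) ** 2))

    taus = np.arange(trim, T - trim)
    s = np.array([ssr(y[:t]) + ssr(y[t:]) for t in taus])
    return set(taus[(s - s.min()) <= crit].tolist())

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 60)])
cs = break_confidence_set(y, crit=7.0)
```

The set always contains the SSR-minimizing date (its statistic is zero) and widens as the critical value grows or the break shrinks, mirroring how inverted-test confidence sets lengthen for breaks of moderate magnitude.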