Results 1–10 of 70
Has the U.S. Economy Become More Stable? A Bayesian Approach Based on a Markov-Switching Model of the Business Cycle
, 1999
"... We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have been? 3) What's the nature of the structural break? For this purpose, we employ a Bayesian approach to dealin ..."
Abstract

Cited by 256 (13 self)
We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have been? 3) What is the nature of the structural break? For this purpose, we employ a Bayesian approach to dealing with a structural break at an unknown change-point in a Markov-switching model of the business cycle. Empirical results suggest that there has been a structural break in U.S. real GDP growth toward more stabilization, with the posterior mode of the break date around 1984:1. Furthermore, we find that a narrowing gap between growth rates during recessions and booms is at least as important as a decline in the volatility of shocks. Key Words: Bayes Factor, Gibbs Sampling, Marginal Likelihood, Markov-Switching, Stabilization, Structural Break. JEL Classifications: C11, C12, C22, E32. 1. Introduction In the literature, the issue of postwar stabilization of the U.S. economy relative to the prewar period has...
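The abstract's final finding, that a narrowing regime gap matters at least as much as smaller shocks, can be illustrated with toy numbers (all values hypothetical, not the paper's estimates). In a two-regime mean-switching model, the unconditional variance of growth decomposes into a between-regime term driven by the boom/recession gap and a within-regime shock term:

```python
# Hypothetical illustration (not the paper's estimates): in a two-regime
# mean-switching model, unconditional variance of growth decomposes as
#   Var(y) = p * (1 - p) * (mu_boom - mu_rec)**2 + sigma**2
# so stabilization can come from a narrower regime gap or from smaller shocks.

def growth_variance(mu_rec, mu_boom, p_rec, sigma):
    """Unconditional variance of a two-regime mean-switching series."""
    gap = mu_boom - mu_rec
    return p_rec * (1 - p_rec) * gap ** 2 + sigma ** 2

pre = growth_variance(mu_rec=-1.0, mu_boom=1.0, p_rec=0.2, sigma=1.0)
# Narrowing the regime gap alone:
post_gap = growth_variance(mu_rec=-0.5, mu_boom=0.5, p_rec=0.2, sigma=1.0)
# Shrinking shock volatility alone:
post_vol = growth_variance(mu_rec=-1.0, mu_boom=1.0, p_rec=0.2, sigma=0.8)
```

With these made-up numbers, halving the gap lowers the variance by more than a 20% cut in shock volatility does, which is the sense in which both channels can produce stabilization.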
Measuring Business Cycles: A Modern Perspective
 The Review of Economics and Statistics
, 1996
"... Abstract: In the first half of this century, special attention was given to two features of the business cycle: the comovement of many individual economic series and the different behavior of the economy during expansions and contractions. Recent theoretical and empirical research has revived intere ..."
Abstract

Cited by 90 (11 self)
Abstract: In the first half of this century, special attention was given to two features of the business cycle: the comovement of many individual economic series and the different behavior of the economy during expansions and contractions. Recent theoretical and empirical research has revived interest in each attribute separately, and we survey this work. Notable empirical contributions are dynamic factor models that have a single common macroeconomic factor and nonlinear regime-switching models of a macroeconomic aggregate. We conduct an empirical synthesis that incorporates both of these features. It is desirable to know the facts before attempting to explain them; hence, the attractiveness of organizing business-cycle regularities within a model-free framework. During the first half of this century, much research was devoted to obtaining just such an empirical characterization of the business cycle. The most prominent example of this work
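The comovement feature that single-factor models capture can be sketched with hypothetical loadings (not the survey's estimated model): if y_i = lam_i * f + e_i with independent idiosyncratic errors, every off-diagonal covariance is the product of the two loadings times the factor variance:

```python
# Hypothetical one-factor model y_i = lam_i * f + e_i (loadings made up):
# comovement appears as cov(y_i, y_j) = lam_i * lam_j * var(f) for i != j.

def implied_cov(loadings, var_f, var_e):
    """Covariance matrix implied by a single common macroeconomic factor."""
    n = len(loadings)
    cov = [[loadings[i] * loadings[j] * var_f for j in range(n)]
           for i in range(n)]
    for i in range(n):
        cov[i][i] += var_e[i]  # idiosyncratic variance only on the diagonal
    return cov

cov = implied_cov(loadings=[1.0, 0.5, 2.0], var_f=1.0, var_e=[0.5, 0.5, 0.5])
# All cross-series comovement is inherited from the single common factor.
```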
Nonlinear Gated Experts for Time Series: Discovering Regimes and Avoiding Overfitting
, 1995
"... this paper: ftp://ftp.cs.colorado.edu/pub/TimeSeries/MyPapers/experts.ps.Z, ..."
Abstract

Cited by 81 (5 self)
this paper: ftp://ftp.cs.colorado.edu/pub/TimeSeries/MyPapers/experts.ps.Z,
Understanding Instrumental Variables in Models with Essential Heterogeneity
 The Review of Economics and Statistics
, 2006
"... ..."
Dealing with Structural Breaks
 IN PALGRAVE HANDBOOK OF ECONOMETRICS
, 2006
"... This chapter is concerned with methodological issues related to estimation, testing and computation in the context of structural changes in the linear models. A central theme of the review is the interplay between structural change and unit root and on methods to distinguish between the two. The top ..."
Abstract

Cited by 26 (7 self)
This chapter is concerned with methodological issues related to estimation, testing and computation in the context of structural changes in linear models. A central theme of the review is the interplay between structural change and unit roots, and methods to distinguish between the two. The topics covered are: methods related to estimation and inference about break dates for single equations with or without restrictions, with extensions to multi-equation systems where allowance is also made for changes in the variability of the shocks; tests for structural changes, including tests for single or multiple changes, tests valid with unit root or trending regressors, and tests for changes in the trend function of a series that can be integrated or trend-stationary; testing for a unit root versus trend-stationarity in the presence of structural changes in the trend function; testing for cointegration in the presence of structural changes; and issues related to long memory and level shifts. Our focus is on the conceptual issues about the frameworks adopted and the assumptions imposed as they relate to potential applicability. We also highlight the potential problems that can occur with methods that are commonly used and recent work that has been done to overcome them.
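As a concrete instance of inference about a break date at an unknown point, here is a minimal sup-F (Quandt-Andrews style) scan for a single mean shift. This is a simplified sketch, not one of the chapter's procedures; the 15% trimming fraction and the toy series are assumptions:

```python
# Minimal sup-F scan for one break in the mean at an unknown date.
# Simplified sketch: mean-shift model only, 15% trimming assumed.

def ssr(xs):
    """Sum of squared residuals around the sample mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def sup_f_break(y, trim=0.15):
    """Return the max Chow-type F statistic and its candidate break date."""
    n = len(y)
    ssr_full = ssr(y)
    lo, hi = int(n * trim), int(n * (1 - trim))
    best_f, best_t = 0.0, None
    for t in range(lo, hi):
        ssr_split = ssr(y[:t]) + ssr(y[t:])  # separate means on each side
        f = (ssr_full - ssr_split) / (ssr_split / (n - 2))
        if f > best_f:
            best_f, best_t = f, t
    return best_f, best_t

# Toy series with an obvious mean shift at index 10:
y = [0.0, 1.0] * 5 + [5.0, 6.0] * 5
f_stat, break_at = sup_f_break(y)
```

The scan correctly locates the shift at index 10; in practice the max statistic is compared against the non-standard sup-F critical values rather than an ordinary F distribution.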
THE SCIENTIFIC MODEL OF CAUSALITY
, 2005
"... Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’ ’ or ‘‘determinants’ ’ and (2) a manipulation where on ..."
Abstract

Cited by 20 (1 self)
Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of “factors” or “determinants”, and (2) a manipulation where one (or more) of the “factors” or “determinants” is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the “factors” by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo (1943) made it more precise within the context of linear equation models. The phrase “ceteris paribus” (everything else held constant) is a mainstay of economic analysis
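The ceteris paribus definition above can be made concrete with a toy generating function (entirely hypothetical, not from the chapter): outcomes come from a stable function of factors, and the causal effect of one factor is the outcome change from manipulating it while the other factors are held fixed:

```python
# Toy illustration of the definition (hypothetical function and coefficients):
# a stable generating function of two "factors", log-wage style.

def outcome(schooling, ability):
    """Stable function mapping factor levels to a counterfactual outcome."""
    return 1.0 + 0.10 * schooling + 0.50 * ability

# Causal effect of one extra year of schooling, holding ability fixed
# (ceteris paribus): compare counterfactuals at different factor levels.
effect = outcome(13, ability=1.0) - outcome(12, ability=1.0)
```

The comparison is between two counterfactual outcomes of the same function, which is what distinguishes this definition from a mere association in observed data.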
ℓ1 Trend Filtering
, 2007
"... The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on HodrickPrescott (HP) filtering, a widely used method for trend estimation. The proposed ℓ1 trend filtering method substitutes a sum of absolute values (i.e., ..."
Abstract

Cited by 18 (5 self)
The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on Hodrick-Prescott (HP) filtering, a widely used method for trend estimation. The proposed ℓ1 trend filtering method substitutes a sum of absolute values (i.e., an ℓ1-norm) for the sum of squares used in HP filtering to penalize variations in the estimated trend. The ℓ1 trend filtering method produces trend estimates that are piecewise linear, and therefore is well suited to analyzing time series with an underlying piecewise linear trend. The kinks, knots, or changes in slope of the estimated trend can be interpreted as abrupt changes or events in the underlying dynamics of the time series. Using specialized interior-point methods, ℓ1 trend filtering can be carried out with not much more effort than HP filtering; in particular, the number of arithmetic operations required grows linearly with the number of data points. We describe the method and some of its basic properties, and give some illustrative examples. We show how the method is related to ℓ1 regularization based methods in sparse signal recovery and feature selection, and list some extensions of the basic method.
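The piecewise-linearity claim can be checked directly: both penalties act on second differences of the trend, which vanish wherever the trend is linear and are nonzero only at kinks. A small pure-Python sketch of the two penalty terms (not the paper's interior-point solver):

```python
# Both HP filtering and l1 trend filtering penalize second differences
# D x, which are zero on linear segments and nonzero only at kinks.

def second_differences(x):
    """Second differences x[t-1] - 2*x[t] + x[t+1] for interior points."""
    return [x[t - 1] - 2 * x[t] + x[t + 1] for t in range(1, len(x) - 1)]

def l1_penalty(x):
    """l1 trend filtering penalty: sum of |second differences|."""
    return sum(abs(d) for d in second_differences(x))

def hp_penalty(x):
    """HP filtering penalty: sum of squared second differences."""
    return sum(d ** 2 for d in second_differences(x))

# Piecewise-linear trend: slope +1 for five steps, then slope -1 (one kink):
trend = [t if t <= 5 else 10 - t for t in range(11)]
diffs = second_differences(trend)
```

Every entry of `diffs` is zero except the single kink at t = 5, so the ℓ1 penalty charges only for slope changes; this sparsity of second differences is exactly why the minimizer comes out piecewise linear.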
Efficient Bayesian Inference for Multiple Change-Point and Mixture Innovation Models
 Journal of Business and Economic Statistics (forthcoming)
, 2007
"... Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a changepoint approach using Chib’s (1998) algorithm to draw the break dates. We outline some advantages of an alternative approach in which breaks come through mixture distributions in state innovation ..."
Abstract

Cited by 17 (1 self)
Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib’s (1998) algorithm to draw the break dates. We outline some advantages of an alternative approach in which breaks come through mixture distributions in state innovations, and for which the sampler of Gerlach, Carter and Kohn (2000) allows reliable and efficient inference. We show how the same sampler can be used to (i) model shifts in variance that occur independently of shifts in other parameters and (ii) draw the break dates in O(n) rather than O(n³) operations in the change-point model of Koop and Potter (2004b), the most general to date. Finally, we introduce to the time series literature the concept of adaptive Metropolis-Hastings sampling for discrete latent variable models. We develop an easily implemented adaptive algorithm that improves on Gerlach et al. (2000) and promises to significantly reduce computing time in a variety of problems including mixture innovation, change-point, regime-switching, and outlier detection. The efficiency gains on two models for U.S. inflation and real interest rates are 257% and 341%.
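The mixture-innovation idea can be sketched as a state equation theta_t = theta_{t-1} + K_t * eta_t, where the indicator K_t in {0, 1} switches the innovation on or off, so breaks occur only at dates where K_t = 1. The values below are illustrative, not the paper's model or sampler:

```python
# Schematic forward pass of a mixture-innovation state equation
# (toy values; the paper's contribution is efficient posterior sampling
# of the indicators K_t, which is not attempted here).

def mixture_innovation_path(theta0, etas, ks):
    """theta_t = theta_{t-1} + K_t * eta_t; breaks only where K_t = 1."""
    path = [theta0]
    for eta, k in zip(etas, ks):
        path.append(path[-1] + k * eta)
    return path

# Breaks at t = 3 and t = 6; the innovation is switched off elsewhere:
etas = [0.5, 0.5, 2.0, 0.5, 0.5, -1.0, 0.5]
ks   = [0,   0,   1,   0,   0,   1,    0]
path = mixture_innovation_path(0.0, etas, ks)
```

A classical change-point model is the special case where the K_t sequence permits a fixed number of ones; letting each K_t be an independent mixture indicator is what the sampler exploits.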
Program Evaluation and Research Designs
 in Handbook of Labor Economics, Elsevier, chapter 5
, 2011
"... This chapter provides a selective review of some contemporary approaches to program evaluation. One motivation for our review is the recent emergence and increasing use of a particular kind of “program ” in applied microeconomic research, the socalled Regression Discontinuity (RD) Design of Thistle ..."
Abstract

Cited by 14 (0 self)
This chapter provides a selective review of some contemporary approaches to program evaluation. One motivation for our review is the recent emergence and increasing use of a particular kind of “program” in applied microeconomic research, the so-called Regression Discontinuity (RD) Design of Thistlethwaite and Campbell (1960). We organize our discussion of these various research designs by how they secure internal validity: in this view, the RD design can be seen as a close “cousin” of the randomized experiment. An important distinction which emerges from our discussion of “heterogeneous treatment effects” is between ex post (descriptive) and ex ante (predictive) evaluations; these two types of evaluations have distinct, but complementary goals. A second important distinction we make is between statistical statements that are descriptions of our knowledge of the program assignment process and statistical statements that are structural assumptions about individual behavior. Using these distinctions, we examine some commonly employed evaluation strategies, and assess them with a common set of criteria for “internal validity”, the foremost goal of an ex post evaluation. In some cases, we also provide some concrete illustrations of how internally valid causal estimates can be supplemented with specific structural assumptions to address “external validity”: the estimate from an internally valid “experimental” estimate can be viewed as a “leading term” in an extrapolation for a parameter of interest in an ex ante evaluation.
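The RD logic can be sketched as a difference in local means around the assignment cutoff (toy data, not the chapter's estimators). Note that a raw difference in local means picks up some slope bias that the local linear regressions used in practice would remove:

```python
# Minimal RD sketch (toy data): compare mean outcomes just below and just
# above the cutoff; the jump estimates the treatment effect at the cutoff.

def rd_estimate(data, cutoff, bandwidth):
    """Difference in mean outcomes within a bandwidth around the cutoff."""
    above = [y for x, y in data if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in data if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Outcome rises smoothly in the running variable (slope 0.1) plus a
# discontinuous jump of 2.0 for units at or above the cutoff of 5.0:
data = [(x, x / 10 + (2.0 if x >= 5 else 0.0)) for x in range(10)]
effect = rd_estimate(data, cutoff=5.0, bandwidth=2.0)
```

Here the estimate is 2.2 rather than the true jump of 2.0 because the local means inherit the running variable's slope within the bandwidth; fitting a local linear regression on each side of the cutoff removes exactly this bias, which is why it is the standard RD estimator.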