Results 1-10 of 30
Forecast Evaluation and Combination
 In G.S. Maddala and C.R. Rao (eds.), Handbook of Statistics
, 1996
Abstract

Cited by 85 (24 self)
It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately: forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as: Are expectations rational? (e.g., Keane and Runkle, 1990; Bonham and Cohen, 1995) Are financial markets efficient? (e.g., Fama, 1970, 1991) Do macroeconomic shocks cause agents to revise their forecasts at all horizons, or just at short and medium-term horizons? (e.g., Campbell and Mankiw, 1987; Cochrane, 1988) Are observed asset returns "too volatile"? (e.g., Shiller, 1979; LeRoy and Porter, 1981) Are asset returns forecastable over long horizons? (e.g., Fama and French, 1988; Mark, 1995)
An Indicator of Monthly GDP and an Early Estimate of Quarterly GDP
 Economic Journal
, 2005
Abstract

Cited by 15 (4 self)
A range of monthly series are currently available giving indications of short-term movements in output in the UK. The main aim of this paper is to suggest a formal and coherent procedure for grossing these monthly data up to represent the whole of GDP. Although the resultant estimates of GDP would be worse than those obtained by direct measurement, they should be more satisfactory than simply making an informal inference from whatever monthly data are available. Our examination of the efficacy of the method for estimation of the state of economic activity indicates a rather satisfactory outcome. Macroeconomic policy making in real time faces the perennial problem of uncovering what is actually happening to the economy. Movements of seasonally adjusted real GDP (referred to subsequently simply as GDP) and related estimates of the output gap are widely regarded as important predictors of future inflation and thus are relevant to the problem of inflation targeting. Estimates of GDP are typically produced quarterly, with the first estimates in the UK available about 25 days after the end of the quarter to which they relate. In many countries including the UK monetary policy is set more frequently than quarterly and in
Unifying Political Methodology
, 1989
Abstract

Cited by 12 (0 self)
"political science statistics" (Rai and Blydenburgh 1973), "political statistics"
Spatial Data Analysis with GIS: An Introduction to Application in the Social Sciences
, 1992
Explaining Cointegration Analysis: Part I
, 1999
Abstract

Cited by 5 (1 self)
Classical econometric theory assumes that observed data come from a stationary process, where means and variances are constant over time. Graphs of economic time series, and the historical record of economic forecasting, reveal the invalidity of such an assumption. Consequently, we discuss the importance of stationarity for empirical modeling and inference; describe the effects of incorrectly assuming stationarity; explain the basic concepts of nonstationarity; note some sources of nonstationarity; formulate a class of nonstationary processes (autoregressions with unit roots) that seem empirically relevant for analyzing economic time series; and show when an analysis can be transformed by means of differencing and cointegrating combinations so stationarity becomes a reasonable assumption. We then describe how to test for unit roots and cointegration. Monte Carlo simulations and empirical examples illustrate the analysis.
The Error Term in the History of Time Series Econometrics
 Econometric Theory
, 2001
Abstract

Cited by 4 (1 self)
We argue that many methodological confusions in time-series econometrics may be seen as arising out of ambivalence or confusion about the error terms. Relationships between macroeconomic time series are inexact and, inevitably, the early econometricians found that any estimated relationship would only fit with errors. Slutsky interpreted these errors as shocks that constitute the motive force behind business cycles. Frisch tried to dissect further the errors into two parts: stimuli, which are analogous to shocks, and nuisance aberrations. However, he failed to provide a statistical framework to make this distinction operational. Haavelmo, and subsequent researchers at the Cowles Commission, saw errors in equations as providing the statistical foundations for econometric models, and required that they conform to a priori distributional assumptions specified in structural models of the general equilibrium type, later known as simultaneous-equations models (SEM). Since theoretical models were at that time mostly static, the structural modelling strategy frequently relegated the dynamics in time-series data to nuisance, atheoretical complications. Revival of the shock interpretation in theoretical models came about through the rational expectations movement and development of the VAR (Vector AutoRegression) modelling approach. The so-called LSE (London School of Economics) dynamic specification approach decomposes the dynamics of the modelled variable into three parts: short-run shocks, disequilibrium shocks and innovative residuals, with only the first two of these sustaining an economic interpretation.
A Review of Spatial Statistical Techniques for Location Studies
 Norwegian School of Economics and Business Administration
, 1998
Abstract

Cited by 4 (0 self)
While the new economic geography of trade and location has, understandably enough, concentrated on developing models of stylised relationships, it now seems that a review of some techniques which may be applied in empirical testing could prove useful. It is this task that will be approached here, conditioned by the advances taking place in new economic geography on the one hand, and in spatial data analysis on the other. Spatial data analysis ranges from the visualization and exploration of spatial data, through spatial statistics to spatial econometrics. The techniques involved are intended to explore for and demonstrate the presence of dependence between observations in space. Typically, observations are classified into three broad types: fields or surfaces with values at least theoretically observable over the whole study area, as in geostatistics; point patterns representing the occurrence of an observation, such as reported cases in epidemiology; and finally lattice observations, where attribute values adhere to a tessellation of the study area. This last form has much in common with time series studies, and shares a number of key testing techniques with econometrics. The paper reviews chosen techniques which can be applied in new economic geography. Point patterns, for instance, can be readily used to attempt to detect clustering. Lattice observations are used in the study of dynamic externalities, and consequently the effects of testing hypotheses based on spatial series should be examined. Finally, attention will be drawn to problems arising from spatial nonstationarity, when causal relationships may vary across space, and from the modifiable areal unit problem, when test results are influenced by the choice of spatial aggregation employed.
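For lattice observations of the kind the abstract describes, the standard test for spatial dependence is Moran's I. The following is a minimal numpy sketch (assumed, not taken from the paper): the weight matrix encodes contiguity between four regions along a line, and the data values are invented.

```python
# Minimal sketch of Moran's I, the standard statistic for spatial
# autocorrelation on lattice data. Weights and values are illustrative.
import numpy as np

def morans_i(x, w):
    """Moran's I for values x and spatial weight matrix w (zero diagonal)."""
    z = x - x.mean()
    s0 = w.sum()
    return (len(x) / s0) * (z @ w @ z) / (z @ z)

# Four regions in a line; regions sharing a border are neighbours.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

smooth = np.array([1.0, 2.0, 3.0, 4.0])       # similar values cluster
alternating = np.array([1.0, -1.0, 1.0, -1.0])  # neighbours dissimilar

print(morans_i(smooth, w))        # positive: spatial dependence present
print(morans_i(alternating, w))   # negative: checkerboard pattern
```

A positive I indicates that neighbouring regions carry similar values, which is the kind of dependence that invalidates tests assuming independent observations.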
Conflict in Time and Space
, 1997
Abstract

Cited by 3 (2 self)
Scholars in international relations (IR) are increasingly using time-series cross-section data to analyze models with a binary dependent variable (BTSCS models). IR scholars generally employ a simple logit/probit to analyze such data. This procedure is inappropriate if the data exhibit temporal or spatial dependence. First, we discuss two estimation methods for modelling temporal dependence in BTSCS data: one promising approach is based on exact modelling of the underlying temporal process which determines the latent, continuous, dependent variable; the other, and easier to implement, depends on the formal equivalence of BTSCS and discrete duration data. Because the logit estimates a discrete hazard in a duration context, this method adds a smoothed time term to the logit estimation. Second, we discuss spatial or cross-sectional issues, including robust standard errors and the modelling of effects. While it is not possible to use fixed effects in binary dependent variable panel models, such a strategy is feasible for IR BTSCS models. While not providing a model of spatial dependence, Huber's robust standard errors may well provide more accurate indications of parameter variability if the unit observations are interrelated. We apply these recommended techniques to reanalyses of the relationship between (1) democracy, interdependence and peace (Oneal, Oneal, Maoz and Russett); and (2) security and the termination of interstate rivalry (Bennett). The techniques appear to perform well statistically. Substantively, while democratic dyads do appear to be more peaceful, trade relations, as measured by Oneal, et al., do not decrease the likelihood of participation in militarized disputes. Bennett's principal finding regarding security and rivalry termination is confirmed; his f...
Time-Series–Cross-Section Methods
, 2006
Abstract

Cited by 3 (0 self)
Time-series–cross-section (TSCS) data consist of comparable time series data observed on a variety of units. The paradigmatic applications are to the study of comparative political economy, where the units are countries (often the advanced industrial democracies) and where for each country we observe annual data on a variety of political and economic variables. A standard question for such studies relates to the political determinants of economic outcomes and policies. There have now been hundreds of such studies published. TSCS data resemble "panel" data, where a large number of units, who are almost invariably survey respondents, are observed over a small number of "waves" (interviews). Any procedure that works well as the number of units gets large should work well for panel data; however, any procedure which depends on a large number of time points will not necessarily work well for panel data. TSCS data typically have the opposite structure of panel data: a relatively small number of units observed for some reasonable length of time. Thus methods that are appropriate for the analysis of panel data are not necessarily appropriate for TSCS data and vice versa. All of these types of data are a particular form of "multilevel" (or "hierarchical") data.