Results 11–20 of 44
Integrated Conditional Moment Tests for Parametric Conditional Distributions, Working paper (http://econ.la.psu.edu/~hbierens/ICM_IID.PDF)
, 2008
Abstract
Cited by 4 (3 self)
In this paper we propose a weighted integrated conditional moment (ICM) test of the validity of parametric specifications of conditional distribution models for stationary time series, by extending the weighted ICM test of Bierens (1984) for time series regression models to complete parametric conditional distribution specifications. Support for research within the Center for the Study of Auctions, Procurements, and
Integrated Conditional Moment Testing of Quantile Regression Models
 Empirical Economics
, 2001
Abstract
Cited by 4 (0 self)
In this paper we propose a consistent test of the linearity of quantile regression models, similar to the Integrated Conditional Moment (ICM) test of Bierens (1982) and Bierens and Ploberger (1997). This test requires reestimation of the quantile regression model by minimizing the ICM test statistic with respect to the parameters. We apply this ICM test to examine the correctness of the functional form of three median regression wage equations. Key words: Quantile regression; Test for linearity; Integrated conditional moment test; Wage equations
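The ICM idea this abstract builds on can be sketched numerically for an ordinary mean regression (the quantile version in the paper replaces residuals with check-function scores and re-estimates the model). Below is a minimal sketch in the spirit of Bierens (1982); the simulated data, the standard normal weight, and the integration grid are all illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y is quadratic in x, but we fit a linear null model,
# so the ICM statistic should pick up the misspecification.
n = 200
x = rng.uniform(-2.0, 2.0, n)
y = x + 0.5 * x**2 + rng.normal(0.0, 1.0, n)

# OLS fit of the (misspecified) linear model y = a + b*x.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta  # residuals under the null

# ICM-type statistic: integrate |n^{-1/2} sum_i e_i exp(1j*t*x_i)|^2
# against a standard normal weight, approximated on a grid of t values.
t_grid = np.linspace(-5.0, 5.0, 501)
dt = t_grid[1] - t_grid[0]
w = np.exp(-0.5 * t_grid**2) / np.sqrt(2.0 * np.pi)
z = (np.exp(1j * np.outer(t_grid, x)) @ e) / np.sqrt(n)
icm_stat = np.sum(np.abs(z) ** 2 * w) * dt
```

Consistency comes from the fact that the complex exponential weight separates the null from essentially any alternative, so no choice of a parametric alternative is needed.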
Parametric and Nonparametric Estimation of Covariate-Conditioned Average Effects
 UCSD DEPT. OF ECONOMICS DISCUSSION PAPER
, 2005
Abstract
Cited by 4 (4 self)
This paper unifies three complementary approaches to defining, identifying, and estimating causal effects: the classical structural equations approach of the Cowles Commission; the treatment effects framework of Rubin (1974) and Rosenbaum and Rubin (1983); and the Directed Acyclic Graph (DAG) approach of Pearl. The settable system framework nests these prior approaches, while affording significant improvements to each. For example, the settable system approach permits identification and estimation of causal effects without requiring exogenous instruments, generalizing the classical structural equations approach; it relaxes the stable unit treatment value assumption of the treatment effect approach and provides significant insight into the selection of covariates; and it accommodates mutual causality, generalizing the DAG approach. We provide necessary and sufficient conditions for identification of covariate-conditioned average causal effects, parametric and nonparametric estimation results, and new tests for unconfoundedness.
Information in the revision process of real-time datasets
 Journal of Business and Economic Statistics
, 2009
Abstract
Cited by 4 (1 self)
In this paper we first develop two statistical tests of the null hypothesis that early release data are rational. The tests are consistent against generic nonlinear alternatives, and are conditional moment type tests, in the spirit of Bierens (1982, 1990), Chao, Corradi and Swanson (2001) and Corradi and Swanson (2002). We then use these tests, in conjunction with standard regression analysis, to individually and jointly analyze a real-time dataset for money, output, prices and interest rates. All of our empirical analysis is carried out using various variable/vintage combinations, allowing us to comment not only on rationality, but also on a number of other related issues. For example, we discuss and illustrate the importance of the choice between using first, later, or mixed vintages of data in prediction. Interestingly, it turns out that early release data are generally best predicted using first releases. The standard practice of using “mixed vintages” of data appears to always yield poorer predictions, regardless of what we term “definitional change problems” associated with using only first releases for prediction. Furthermore, we note that our tests of first release rationality based on ex ante prediction find no evidence that the data rationality null hypothesis is rejected for a variety of variables (i.e. we find strong evidence in favor of the “news” hypothesis). Thus, it appears that there is little benefit to using later releases of data for prediction and policy analysis, for example. Additionally, we argue that the notion of final data is misleading, and that definitional and other methodological
Some recent developments in predictive accuracy testing with nested models and (generic) nonlinear alternatives
 International Journal of Forecasting
, 2004
Abstract
Cited by 3 (2 self)
Forecasters and applied econometricians are often interested in comparing the predictive accuracy of nested competing models. A leading example of a context in which competing models are nested is when predictive ability is equated with “out-of-sample Granger causality”. In particular, it is often of interest to assess whether historical data from one variable are useful when constructing a forecasting model for another variable, and hence our use of terminology such as “out-of-sample Granger causality” (see e.g. Ashley, Granger and Schmalensee (1980)). In this paper we examine and discuss three key issues one is faced with when constructing predictive accuracy tests, namely: the contribution of parameter estimation error, the choice of linear versus nonlinear models, and the issue of (dynamic) misspecification, with primary focus on the last of these issues. One of our main conclusions is that there are a number of easy-to-apply statistics constructed using out-of-sample conditional moment conditions which are robust to the presence of dynamic misspecification under both hypotheses. We provide some new Monte Carlo findings and empirical evidence based on the use of such tests. In particular, we analyze the finite sample properties of the consistent out-of-sample test of Corradi and Swanson (2002) using data generating processes calibrated with
TESTING THE MARTINGALE DIFFERENCE HYPOTHESIS USING INTEGRATED REGRESSION FUNCTIONS ∗
, 2003
Abstract
Cited by 2 (0 self)
This paper proposes an omnibus test for a generalized version of the martingale difference hypothesis (MDH). The generalized hypothesis includes the usual MDH as well as tests of conditional moment constancy, such as conditional homoscedasticity (ARCH effects), and we propose a unified approach for dealing with all of them. These hypotheses are long-standing problems in econometric time series analysis, and have typically been tested using the sample autocorrelations or, in the spectral domain, the periodogram. Since these hypotheses are not only about linear predictability, tests based on these statistics are inconsistent against uncorrelated processes in the alternative hypothesis. To circumvent this problem we use the pairwise integrated regression functions as measures of linear and nonlinear dependence. Our test is consistent against general pairwise Pitman local alternatives converging at the parametric rate and has optimal power properties. There is no need to choose a lag order depending on sample size, to smooth the data, or to formulate a parametric alternative model. Moreover, our test is robust to higher order dependence, in particular to conditional heteroskedasticity. Under general dependence the asymptotic null distribution depends on the data generating process, so a bootstrap procedure is proposed and theoretically justified. A Monte Carlo study examines its finite sample performance, and a final section investigates the martingale and conditional heteroskedasticity properties of the Pound/Dollar exchange rate.
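The inconsistency of autocorrelation-based tests that this abstract refers to is easy to demonstrate. The sketch below (names and the simulated process are illustrative, not from the paper) builds the classical Box-Pierce portmanteau statistic and applies it to a process that is serially uncorrelated yet predictable from its past:

```python
import numpy as np

def box_pierce(x, m):
    """Box-Pierce portmanteau statistic Q = n * sum_{k=1}^m rho_k^2,
    built from the first m sample autocorrelations.  It only detects
    *linear* dependence, which is the inconsistency the abstract notes."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom
                    for k in range(1, m + 1)])
    return n * np.sum(rho ** 2)

# A nonlinear moving average x_t = z_{t-1} * z_{t-2} + z_t is serially
# uncorrelated but NOT a martingale difference sequence (its conditional
# mean given the past is z_{t-1} * z_{t-2}), so an autocorrelation-based
# test has essentially no power against it.
rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.0, 5002)
x = z[:-2] * z[1:-1] + z[2:]
q = box_pierce(x, 10)
```

Tests built on integrated regression functions, as in the paper, are designed to retain power against exactly this kind of nonlinear dependence.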
Consistent Tests of Conditional Moment Restrictions, Forthcoming in Annales d'Economie et de Statistique
, 2005
Abstract
Cited by 2 (0 self)
We address the issue of building consistent specification tests in econometric models defined through multiple conditional moment restrictions. To this aim, we extend the two methodologies developed for testing the parametric specification of a regression function to testing general conditional moment restrictions. Two classes of tests are proposed, both of which can be interpreted as M-tests based on integrated conditional moment restrictions. The first class depends upon nonparametric functions that are estimated by kernel smoothers. The second type of test is built as a functional of a marked empirical process. For both tests, a simulation procedure for obtaining critical values is shown to be asymptotically valid. The finite sample performances of the tests are compared by means of several Monte Carlo experiments.
SOME GENERICITY ANALYSES IN NONPARAMETRIC STATISTICS ‡
, 2002
Abstract
Cited by 1 (0 self)
Abstract. Many nonparametric estimators and tests are naturally set in infinite dimensional contexts. Prevalence is the infinite dimensional analogue of full Lebesgue measure, shyness the analogue of being a Lebesgue null set. A prevalent set of prior distributions leads to wildly inconsistent Bayesian updating when independent and identically distributed observations arise in a class of infinite spaces that includes R^n and N. For any rate of convergence, no matter how slow, only a shy set of target functions can be approximated by consistent nonparametric regression schemes in a class that includes series approximations, kernels and other locally weighted regressions, splines, and artificial neural networks. When the instruments allow for the existence of an instrumental regression, the regression function only exists for a shy set of dependent variables. The instruments allow for existence in a counterintuitive, dense set of cases; whether this set is shy is an open question. A prevalent set of integrated conditional moment (ICM) specification tests is consistent; a dense subset of the finitely parametrized ICM tests is consistent, while prevalence remains an open question.
Consistent model-specification tests based on parametric bootstrap
Abstract
Cited by 1 (1 self)
In this paper we establish consistent tests of L2-type for the parametric functional form of the conditional mean of time series with values in R^d. A recent result on asymptotic distributions of U-statistics of weakly dependent observations is invoked to obtain the limit distributions of the test statistics. Since the asymptotic distributions depend on unknown parameters in a complicated way, we suggest applying certain parametric bootstrap methods in order to determine critical values of the tests.
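The parametric bootstrap step described here can be sketched generically: estimate the null model, redraw data from the fitted model, and use the bootstrap distribution of the statistic to set the critical value. The linear null model, the crude L2-type distance, the fixed bandwidth, and all names below are illustrative assumptions, not the paper's actual statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_and_stat(y, x):
    """Fit the null (linear) model and return (beta, sigma, stat), where
    stat is a simple L2-type distance between the fitted line and a
    Nadaraya-Watson kernel regression estimate (illustrative only)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma = resid.std()
    grid = np.linspace(x.min(), x.max(), 50)
    h = 0.3  # fixed bandwidth, chosen by hand for this sketch
    K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    m_hat = (K @ y) / K.sum(axis=1)           # kernel estimate on the grid
    m_null = np.column_stack([np.ones_like(grid), grid]) @ beta
    stat = np.mean((m_hat - m_null) ** 2)
    return beta, sigma, stat

n = 200
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)   # null model is true here
beta, sigma, t_obs = fit_and_stat(y, x)

# Parametric bootstrap: redraw y from the fitted null model, recompute the
# statistic, and take a bootstrap quantile as the critical value.
B = 200
t_boot = np.empty(B)
X0 = np.column_stack([np.ones_like(x), x])
for b in range(B):
    y_b = X0 @ beta + rng.normal(0.0, sigma, n)
    t_boot[b] = fit_and_stat(y_b, x)[2]
crit = np.quantile(t_boot, 0.95)
reject = t_obs > crit
```

The bootstrap sidesteps the complicated dependence of the limit distribution on unknown parameters, exactly the motivation given in the abstract.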
Testing Distributional Assumptions: An L-moment Approach
Abstract
Cited by 1 (0 self)
Preliminary version. Comments are especially welcome. Stein (1972, 1986) provides a flexible method for measuring the deviation of any probability distribution from a given distribution, thus effectively giving an upper bound on the approximation error, which can be represented as the expectation of a Stein operator. Hosking (1990, 1992) proposes the concept of the L-moment, which summarizes the characteristics of a distribution better than conventional moments (C-moments). The purpose of the paper is to propose new tests for conditional parametric distribution functions with weakly dependent and strictly stationary data generating processes (DGP) by constructing a set of Stein equations as the L-statistics of conceptual ordered subsamples drawn from the population sample of the distribution; these are hereafter referred to as the L-moment (GMLM) tests. The limiting distributions of our tests are nonstandard, depending on the test criterion functions used in the conditional L-statistics restrictions; the covariance kernel in the tests reflects the parametric dependence specification. The GMLM tests can resolve the choice of orthogonal polynomials, which remains an identification issue in the GMM tests using the Stein approximation (Bontemps
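As a concrete anchor for the L-moment concept this abstract builds on, the first few sample L-moments can be computed from probability-weighted moments (Hosking, 1990). A minimal sketch; the function name and the simulated data are illustrative assumptions:

```python
import numpy as np

def sample_l_moments(data):
    """First four sample L-moments via probability-weighted moments
    b_r, following Hosking (1990): l1 = b0, l2 = 2*b1 - b0, etc."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                               # location (mean)
    l2 = 2 * b1 - b0                      # scale
    l3 = 6 * b2 - 6 * b1 + b0            # skewness-related
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0  # kurtosis-related
    return l1, l2, l3, l4

# For a symmetric distribution the third L-moment should be near zero;
# for a standard normal, l2 is close to 1/sqrt(pi) ~ 0.5642.
rng = np.random.default_rng(1)
l1, l2, l3, l4 = sample_l_moments(rng.normal(0.0, 1.0, 10000))
```

Because L-moments are linear in the order statistics, they exist whenever the mean exists and are markedly less sensitive to outliers than conventional higher-order moments, which is what makes them attractive as test building blocks.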